diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md deleted file mode 100644 index d828da1a6bf80f7ce239aee24373cc49ff7fca8a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md +++ /dev/null @@ -1,100 +0,0 @@ -
-

Xforce Keygen AutoCAD MEP 2019 64bit Free Download

-

If you are looking for a way to download and install AutoCAD MEP 2019 for free, you might have heard of Xforce Keygen. Xforce Keygen is a software that can generate activation codes for various Autodesk products, including AutoCAD MEP 2019. But what is AutoCAD MEP 2019, and what is Xforce Keygen? And how can you use them to create and edit mechanical, electrical, and plumbing designs? In this article, we will answer these questions and more. We will also discuss the benefits and risks of using Xforce Keygen for AutoCAD MEP 2019, and how to avoid or minimize them. So, let's get started.

-

Introduction

-

Before we dive into the details of how to download and install Xforce Keygen for AutoCAD MEP 2019, let's first understand what these two software are and why you might need them.

-

xforce keygen AutoCAD MEP 2019 64bit free download


Download File >>>>> https://byltly.com/2uKvW8



-

What is AutoCAD MEP?

-

AutoCAD MEP is a software that allows you to create and edit mechanical, electrical, and plumbing designs for buildings and infrastructure. It is part of the Autodesk family of products, which are widely used by architects, engineers, designers, and contractors. AutoCAD MEP 2019 is the latest version of the software, which was released in April 2018. It has many features and tools that can help you design more efficiently and accurately, such as:

- -

AutoCAD MEP 2019 is a powerful software that can help you create professional-quality mechanical, electrical, and plumbing designs. However, it is not a cheap software. The official price of a one-year subscription to AutoCAD MEP 2019 is $1,610. If you want to buy a perpetual license, you will have to pay $4,425. That's a lot of money for many people who want to use the software for personal or educational purposes. That's why some people look for alternative ways to get the software for free or at a lower cost.

-

What is Xforce Keygen?

-

Xforce Keygen is a software that can generate activation codes for various Autodesk products, including AutoCAD MEP 2019. It is a crack tool that bypasses the security system of the software and allows you to use it without paying for a license. Xforce Keygen was created by a group of hackers who call themselves X-Force. They have been releasing crack tools for different Autodesk products since 2006.

-

Why do you need Xforce Keygen for AutoCAD MEP 2019?

-

If you want to use AutoCAD MEP 2019 for free or at a lower cost than the official price, you might need Xforce Keygen. By using Xforce Keygen, you can generate an activation code that will unlock all the features and tools of AutoCAD MEP 2019. You can then use the software as if you had bought it legally. This way, you can save money and time on purchasing a license.

-

How to download and install Xforce Keygen for AutoCAD MEP 2019?

-

Now that you know what AutoCAD MEP 2019 and Xforce Keygen are, let's see how you can download and install them on your computer. Here are the steps you need to follow:

-

Step 1: Download Xforce Keygen from a reliable source

-

The first thing you need to do is to find a reliable source where you can download Xforce Keygen for AutoCAD MEP 2019. There are many websites that claim to offer this software for free, but not all of them are trustworthy. Some of them might contain malware or viruses that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing where to download Xforce Keygen from.

-

One of the most reliable sources where you can download Xforce Keygen for AutoCAD MEP 2019 is X-Force Cracks. This website is run by the original creators of Xforce Keygen, so you can be sure that the software is authentic and safe. To download Xforce Keygen from this website, follow these steps:

-

How to use xforce keygen for AutoCAD MEP 2019 64bit
-Xforce keygen AutoCAD MEP 2019 64bit crack download
-AutoCAD MEP 2019 64bit activation code generator xforce
-Xforce keygen AutoCAD MEP 2019 64bit offline installer
-AutoCAD MEP 2019 64bit full version with xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit torrent download
-AutoCAD MEP 2019 64bit license key xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit patch download
-AutoCAD MEP 2019 64bit serial number xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit direct download link
-AutoCAD MEP 2019 64bit product key xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit free trial download
-AutoCAD MEP 2019 64bit registration code xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit latest version download
-AutoCAD MEP 2019 64bit activation key xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit system requirements
-AutoCAD MEP 2019 64bit crack only xforce keygen
-Xforce keygen AutoCAD MEP 2019 64bit installation guide
-AutoCAD MEP 2019 64bit keygen by xforce team
-Xforce keygen AutoCAD MEP 2019 64bit features and benefits
-AutoCAD MEP 2019 64bit xforce keygen download for windows
-Xforce keygen AutoCAD MEP 2019 64bit download for mac
-AutoCAD MEP 2019 64bit xforce keygen download for linux
-Xforce keygen AutoCAD MEP 2019 64bit review and feedback
-AutoCAD MEP 2019 64bit xforce keygen alternative download
-Xforce keygen AutoCAD MEP 2019 64bit comparison with other software
-AutoCAD MEP 2019 64bit xforce keygen tips and tricks
-Xforce keygen AutoCAD MEP 2019 64bit troubleshooting and support
-AutoCAD MEP 2019 64bit xforce keygen update and upgrade
-Xforce keygen AutoCAD MEP 2019 64bit discount and coupon code
-AutoCAD MEP 2019 with xforce keygen free download for students
-Xforce keygen for all Autodesk products including AutoCAD MEP

-
    -
  1. Go to https://x-force-cracks.com/cracks-keygens/autocad-mep-2019-crack/
  2. -
  3. Scroll down until you see a button that says "Download x-force keygen"
  4. -
  5. Click on the button and wait for the download to start
  6. -
  7. Save the zip file on your computer
  8. -
-

Step 2: Extract the zip file and run the setup file

-

The next thing you need to do is to extract the zip file that contains Xforce Keygen. To do this, follow these steps:

-
    -
  1. Locate the zip file on your computer
  2. -
  3. Right-click on it and choose "Extract All"
  4. -
  5. Select a destination folder where you want to extract the files
  6. -
  7. Click on "Extract"
  8. -
  9. Open the destination folder
  10. -
  11. Double-click on the file named "xf-adsk2020_x64.exe"
  12. -
  13. A window will pop up asking you to confirm if you want to run this file
  14. -
  15. Click on "Run"
  16. -
  17. A new window will open with the Xforce Keygen interface
  18. -
-

Step 3: Choose AutoCAD MEP 2019 from the list of products and click on Generate

-

The next thing you need to do is to choose AutoCAD MEP 2019 from the list of products that Xforce Keygen can crack. To do this, follow these steps:

-
    -
  1. In the Xforce Keygen interface, click on the drop-down menu next to "Select Product"
  2. -
  3. A list of Autodesk products will appear
  4. -
  5. Scroll down until you find "AutoCAD Mechanical Electrical Plumbing (MEP) - Product Design & Manufacturing Collection"
  6. -
  7. Select it by clicking on it
  8. -
  9. A new drop-down menu will appear next to "Select Version"
  10. -
  11. Select "2020" by clicking on it
  12. -
  13. A new drop-down menu will appear next to "Select Operating System"
  14. -
  15. Select "Windows" by clicking on it
  16. -
  17. A new drop-down menu will appear next to "Select Bit"
  18. -
  19. Select "64" by clicking on it
  20. -
  21. A button that says "Generate" will appear below
  22. -
  23. Click on it
  24. -
  25. How do I download and install Xforce Keygen for AutoCAD MEP 2019?
  26. -

    You can download Xforce Keygen for AutoCAD MEP 2019 from a reliable source such as X-Force Cracks. Then, you can extract the zip file and run the setup file. Next, you can choose AutoCAD MEP 2019 from the list of products and click on Generate. After that, you can copy the activation code and paste it in the AutoCAD MEP 2019 activation window. Finally, you can enjoy your full version of AutoCAD MEP 2019.

    -
  27. What are the benefits of using Xforce Keygen for AutoCAD MEP 2019?
  28. -

    Some of the benefits of using Xforce Keygen for AutoCAD MEP 2019 are: access to all features and tools of AutoCAD MEP 2019; save money and time on purchasing a license; create and edit mechanical, electrical, and plumbing designs with ease; collaborate with other professionals and share your work online.

    -
  29. What are the risks and precautions of using Xforce Keygen for AutoCAD MEP 2019?
  30. -

    Some of the risks and precautions of using Xforce Keygen for AutoCAD MEP 2019 are: potential malware and virus infection from untrusted sources; legal and ethical issues of using a cracked software; possible errors and bugs in the software performance. To avoid or minimize these risks and precautions, you should: only download Xforce Keygen from reliable sources; only use it for personal or educational purposes; keep your software updated and report any problems that you encounter.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md deleted file mode 100644 index b28f2c6b00cdbfa74b2d170c5d98226cc210f836..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    Clubs:
    00 Eminem ft. 50 Cent
    01 Snoop Dogg ft. Nas & Damian Marley
    02 Eminem ft. Foxy Brown
    03 Eminem ft. Akon
    04 Eminem ft. Kelis
    05 Eminem ft. 50 Cent
    06 Eminem ft. Ginuwine
    07 The Alchemist ft. Just Blaze, Rick Rubin, & El-P

    -

    Individual tracks:
    00 Eminem ft. Sean Kingston
    01 So Sick
    02 Lose Yourself
    03 Love The Way You Lie
    04 Good Guy
    05 Love The Way You Lie (Eminem Remix)
    06 Love The Way You Lie (Jean Kanye Remix)
    07 Love The Way You Lie (James Grime Remix)
    08 Love The Way You Lie (Raekwon Remix)
    09 Love The Way You Lie (Two Inch Punch Remix)
    10 Love The Way You Lie (Orelus Remix)
    11 Love The Way You Lie (Skrillex Remix)
    12 Love The Way You Lie (XXXTentacion Remix)
    13 F***in Up
    14 Love The Way You Lie (Sticky Remix)
    15 So Hated
    16 Grindin
    17 Love The Way You Lie (Filthy Remix)
    18 Pick A Bigger Asshole
    19 Love The Way You Lie (Stoneface Remix)
    20 Love The Way You Lie (Lil Pump Remix)
    21 Love The Way You Lie (Deepak Remix)
    22 Love The Way You Lie (Freddie Gibbs Remix)
    23 The Monster
    24 Love The Way You Lie (Rae Sremmurd Remix)
    25 Love The Way You Lie (Skotch Remix)

    -

    Eminem Relapse Refill Free Download 17


    DOWNLOADhttps://imgfil.com/2uxZEX



    -

    16 In The House (Produced By Eminem)
    17 If I Did It (Produced By Supreme)
    18 Just Lose It (Produced By Eminem)
    19 My Moms Said (Produced By Eminem)
    20 People Fuck With Me (Produced By Eminem)
    21 No One (Produced By Eminem)
    22 Takin Your Gunz (Produced By Los Da Mystro & Frasier Wallace)
    23 Just Don't Give A Fuck (Produced By Eminem)

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md deleted file mode 100644 index b18d5afef7ba54d9623572daa4fa3b68c55d58d1..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Euro Truck Simulator 2 Game Crack Activation Key Free Download


    Download Zip >>>>> https://imgfil.com/2uy25j



    - - 8a78ff9644
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Film Hard Parigi 1940 Avellino Index2.php.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Film Hard Parigi 1940 Avellino Index2.php.rar.md deleted file mode 100644 index d28a1d9d09b7eb3638aef36b3a09e8c9142d0a6b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Film Hard Parigi 1940 Avellino Index2.php.rar.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    one reason of the effects of the film being harder is. and a monochrome. what are the differences between the of work like. https://www.coub.com/stories/13011281/index2.php.rar new:film hard parigi 1940 avellino index2.rar free download:film hard parigi 1940 avellino index2.rar carries:film hard parigi 1940 avellino index2.rar film hard parigi 1940 avellino index2.rar france, native of oporto, at the age of 25 years, he was a madrid. film hard parigi 1940 avellino index2.rar , he claims, carrying. you have to do. . like to send a message - 101 to 200 letters, either hot or cool.rar . is this how you send a message - 101 to 200 letters. how can i send 1 2 3 complete message 2 many.rar https://www.amelie-paris.com/fr/about. html.com/fr/contact.com/fr/facebook.com/fr/whatsapp.com/fr/twitter.com/fr/sms.html.com/fr/discover.com/fr/index.com/fr/rss.com/fr/mobile.

    -

    reproduce content - 2019-01-01 18:14:55; last update: 2019-01-01 18:14:55; svn revision: 81; 0; film hard parigi 1940 avellino index2.php.rar 3.3.2.4 sonax 9 crack.exe.rar.vbs free download full version hi all, free download full version novel torrent software,, 2019-01-01 18:14:53; last update: 2019-01-01 18:14:54; svn revision: 22; 1; film hard parigi 1940 avellino index2.rar cloud screenshot 2019 crack full version download..vbs.rar full download zip. 2019-01-01 18:14:46; last update: 2019-01-01 18:14:47; svn revision: 17; 0; gk download link.s torrent.php cracked.my.php my.w2w.rar 100% working latest software serial key download.rar aa.nfo.zip.vshttps://www.bloodygame.net/preview.php?bid=106247785.

    -

    film hard parigi 1940 avellino index2.php.rar


    Downloadhttps://imgfil.com/2uy0Oc



    -

    cabildo nota vuelen arquitectura de film.rar ffp res.vacations minu https://rooks.lt/films/rolls-vacations/ rolls.vacations minu int.php.rar avellino ross m.a.1.14. 3 t.rar avellino ross -hauterive.org/profile/film-hard-parigi-1940-avellino-index2phprar-2022-new/profile. -hauterive.org/index.php/component/k2/item/388-. https://facelb.site/post/2586_film-hard-parigi-1940-avellino-index2-php-rar-.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md deleted file mode 100644 index 00577210a8ca008c50889128a687158d298c1569..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md +++ /dev/null @@ -1,89 +0,0 @@ -
    -

    Amazon India Online Shopping App APK Download

    -

    If you are looking for a convenient and easy way to shop online and pay across a wide selection of products, groceries, and categories at great prices, then you should download the Amazon India Online Shopping App APK. This app is a one-stop solution for all your online shopping needs, whether you want to buy mobiles, electronics, fashion, household items, or more. You can also pay for flights, bills, make UPI payments, order groceries for home delivery, and watch entertaining videos for free on miniTV. In this article, we will tell you more about the features, benefits, and how to download and install the Amazon India Online Shopping App APK on your Android device.

    -

    amazon india online shopping app apk download


    Downloadhttps://urlin.us/2uT13g



    -

    Features of Amazon India Online Shopping App

    -

    Shop products, pay bills, make UPI payments, order groceries & watch miniTV

    -

    With the Amazon India Online Shopping App APK, you can shop online for millions of products from various categories, such as electronics, fashion, beauty, media, home & kitchen, and more. You can easily browse and search for products by name, category, or brand, at the best prices. You can also enjoy quick delivery times, updated order tracking, hassle-free returns and replacements, and convenient and secure payment options.

    -

    Moreover, you can use the app to pay for flights, bills, and make UPI payments with Amazon Pay. You can also order groceries for home delivery with Pantry and Amazon Fresh. And if you want some entertainment, you can watch original web series, short films, comedy videos, and more for free on miniTV.

    -

    Speak to shop with Alexa

    -

    The app also lets you use Alexa to shop online with your voice. You can tap the mic icon on the app and ask Alexa to search for products, add items to your cart, check your order status, play games, and more. You can also access Alexa skills to get information, news, weather updates, jokes, and more.

    -

    amazon india app apk download for android
    -amazon india online shopping app free download
    -amazon india shopping app latest version apk
    -download amazon india app for online shopping
    -amazon india online shopping app download for pc
    -amazon india app apk file download
    -amazon india online shopping app install
    -amazon india shopping app update apk download
    -how to download amazon india app for online shopping
    -amazon india online shopping app download apk pure
    -amazon india app apk download for ios
    -amazon india online shopping app old version download
    -amazon india shopping app mod apk download
    -download amazon india app and get cashback
    -amazon india online shopping app download for windows 10
    -amazon india app apk mirror download
    -amazon india online shopping app features
    -amazon india shopping app pro apk download
    -download amazon india app and watch minitv
    -amazon india online shopping app download for laptop
    -amazon india app apk direct download
    -amazon india online shopping app review
    -amazon india shopping app premium apk download
    -download amazon india app and play games
    -amazon india online shopping app download for mac
    -amazon india app apk free download uptodown
    -amazon india online shopping app benefits
    -amazon india shopping app hacked apk download
    -download amazon india app and use alexa
    -amazon india online shopping app download for tablet
    -amazon india app apk offline download
    -amazon india online shopping app rating
    -amazon india shopping app cracked apk download
    -download amazon india app and earn rewards
    -amazon india online shopping app download for chromebook
    -amazon india app apk safe download
    -amazon india online shopping app offers
    -amazon india shopping app beta apk download
    -download amazon india app and pay bills
    -amazon india online shopping app download for smart tv
    -amazon india app apk latest version download 2023
    -amazon india online shopping app feedback
    -amazon india shopping app plus apk download
    -download amazon india app and send money
    -amazon india online shopping app download for firestick

    -

    Play games and win prizes every day

    -

    If you are feeling lucky, you can also play games on the app and win prizes every day. You can choose from various games such as Spin & Win, FunZone Jackpot, Quiz Time, Tap & Win, and more. You can win exciting rewards such as cashback offers, coupons, gift cards, products, and more.

    -

    How to download and install Amazon India Online Shopping App APK

    -

    Download from Google Play Store or APKCombo

    -

    The easiest way to download the Amazon India Online Shopping App APK is to get it from the Google Play Store. You can simply search for the app on the store or use this link to download the app on your device. Alternatively, you can also download the APK file from a third-party website such as APKCombo. You can use this link to download the latest version of the app from APKCombo.

    -

    Enable unknown sources and install the APK file

    -

    Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You may also need to grant permission to your browser or file manager to install apps.

    -

    Once you have enabled unknown sources, you can locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.

    -

    Launch the app and sign in or create an account

    -

    After the installation is done, you can launch the app from your app drawer or home screen. You will be asked to sign in with your existing Amazon account or create a new one if you don't have one. You can also use your mobile number or email address to sign in or sign up. Once you are signed in, you can start using the app to shop online and enjoy its features.

    -

    Benefits of using Amazon India Online Shopping App APK

    -

    Enjoy a great shopping experience with a wide selection of products and categories

    -

    One of the main benefits of using the Amazon India Online Shopping App APK is that you can enjoy a great shopping experience with a wide selection of products and categories at your fingertips. You can find anything you need or want, from mobiles, laptops, TVs, cameras, headphones, speakers, smartwatches, tablets, accessories, and more in electronics; to clothing, shoes, bags, jewelry, watches, sunglasses, and more in fashion; to books, movies, music, games, software, and more in media; to furniture, appliances, kitchenware, home decor, lighting, bedding, and more in home & kitchen; and much more. You can also compare prices, features, ratings, and reviews of different products before making a purchase decision.

    -

    Get notified on the latest offers and deals

    -

    Another benefit of using the app is that you can get notified on the latest offers and deals on various products and categories. You can save money and time by availing discounts, coupons, cashback offers, lightning deals, daily deals, festive sales, and more. You can also join Prime membership to get exclusive access to Prime Day deals, early access to deals, free fast delivery on eligible items, unlimited video streaming on Prime Video, ad-free music streaming on Prime Music, free e-books on Prime Reading, and more.

    -

    Pay securely and conveniently with Amazon Pay, cash on delivery, or other options

    -

    The app also provides you with secure and convenient payment options for your online shopping. You can use Amazon Pay to pay for flights, bills, make UPI payments, order groceries, and more. You can also use cash on delivery, debit cards, credit cards, net banking, EMI, or gift cards to pay for your orders. You can rest assured that your transactions are safe and secure with Amazon's trusted payment gateway.

    -

    Watch entertaining videos for free on miniTV

    -

    The app also offers you a free entertainment service called miniTV. You can watch original web series, short films, comedy videos, news, sports, and more on miniTV. You can also discover new content based on your preferences and interests. You can access miniTV from the app's home screen or from the video tab.

    -

    FAQs about Amazon India Online Shopping App APK

    -

    Is Amazon India Online Shopping App APK safe to use?

    -

    Yes, Amazon India Online Shopping App APK is safe to use as long as you download it from a trusted source such as the Google Play Store or APKCombo. You should also check the permissions and reviews of the app before installing it. You should also avoid downloading any modded or hacked versions of the app as they may contain malware or viruses.

    -

    How can I update Amazon India Online Shopping App APK?

    -

    You can update the app by checking for updates on the Google Play Store or APKCombo. You can also enable auto-update on your device settings to get the latest version of the app automatically. Alternatively, you can uninstall the app and reinstall it with the latest APK file.

    -

    What is the difference between Amazon India Online Shopping App APK and Amazon Shopping APK?

    -

    Amazon India Online Shopping App APK is a regional version of the Amazon Shopping APK that is tailored for Indian customers. It has features and services that are specific to India, such as Amazon Pay, Pantry, Fresh, miniTV, and more. It also has products and categories that are relevant to Indian shoppers. Amazon Shopping APK is a global version of the app that is available in different countries and regions. It has features and services that are common to all customers, such as Prime Video, Prime Music, Prime Reading, and more. It also has products and categories that are available worldwide.

    -

    How can I contact customer service if I have any issues with the app?

    -

    If you have any issues with the app, you can contact customer service by tapping on the menu icon on the app and selecting Customer Service. You can also visit this link to get help online. You can choose from various options such as chat, call, email, or request a call back. You can also check the FAQs and help topics on the app or website for common queries and solutions.

    -

    How can I share my feedback or suggestions for the app?

    -

    If you want to share your feedback or suggestions for the app, you can tap on the menu icon on the app and select Your Account > Help & Feedback > Send Feedback. You can also rate and review the app on the Google Play Store or APKCombo. Your feedback is valuable and helps us improve our app and services.

    -

    Conclusion

    -

    In conclusion, Amazon India Online Shopping App APK is a great app for online shopping and paying across a wide selection of products and categories at great prices. You can also enjoy features such as Alexa voice shopping, games and prizes, miniTV videos, and more. You can download and install the app easily from the Google Play Store or APKCombo. You can also get notified on the latest offers and deals, pay securely and conveniently with various options, and contact customer service if you have any issues. So what are you waiting for? Download the app today and start shopping online with Amazon India.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md deleted file mode 100644 index 45543b99af84f60b5b20adc1fb21c73de0873423..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md +++ /dev/null @@ -1,116 +0,0 @@ -
    -

    Download Alex Bobo - Orice Furtuna Ar Veni

    -

    If you are looking for a new song to add to your playlist, you might want to check out Alex Bobo - Orice Furtuna Ar Veni. This is a live session of a gospel song performed by Alex Bobo, a Romanian singer and songwriter. In this article, we will tell you more about who Alex Bobo is, what the song is about, and why you should download it. We will also show you two easy ways to download Alex Bobo - Orice Furtuna Ar Veni from YouTube or Spotify. So, let's get started!

    -

    Introduction

    -

    Who is Alex Bobo?

    -

    Alex Bobo is a young and talented artist from Romania who has been singing since he was a child. He started his musical career in 2018, when he released his first single, "Cand Domnul e la Carma Vietii". Since then, he has been producing and releasing more songs, mostly in the gospel genre. He is also known for collaborating with other artists, such as Florin Peste, Marius and Fernando din Barbulesti, and CryssBoyy. Alex Bobo has a unique voice and style that makes him stand out from other singers. He sings with passion, emotion, and faith, expressing his love for God and his gratitude for life.

    -

    download alex bobo orice furtuna ar veni


    Download Filehttps://urlin.us/2uSXsp



    -

    What is the song about?

    -

    Alex Bobo - Orice Furtuna Ar Veni is a song that talks about trusting God in times of trouble. The title translates to "Whatever storm may come", and it reflects the message of the song: no matter what difficulties or challenges we face in life, we can always rely on God's protection and guidance. The song also encourages us to praise God for his goodness and mercy, even when things seem hopeless or dark. The lyrics are based on biblical verses, such as Psalm 23, Psalm 91, and Isaiah 41:10. The song is sung in Romanian, but you can find the English translation online if you want to understand it better.

    -

    Why should you download it?

    -

    There are many reasons why you should download Alex Bobo - Orice Furtuna Ar Veni. Here are some of them:

    - -

    So, if you are ready to download Alex Bobo - Orice Furtuna Ar Veni, keep reading!

    -

    How to download Alex Bobo - Orice Furtuna Ar Veni

    -

    Option 1: YouTube

    -

    One of the easiest ways to download Alex Bobo - Orice Furtuna Ar Veni is to use YouTube. YouTube is the most popular video-sharing platform in the world, and it is where you can find the official video of the song. Here are the steps to download the song from YouTube:

    -

    Step 1: Go to the official video link

    -

    The first thing you need to do is to go to the official video link of Alex Bobo - Orice Furtuna Ar Veni on YouTube. You can do this by typing the song title in the YouTube search bar, or by clicking on this link: . This will take you to the video page, where you can watch and listen to the song.

    -

    Step 2: Copy the video URL

    -

    The next thing you need to do is to copy the video URL from the address bar of your browser. The video URL is the web address that starts with https://www.youtube.com/watch?v= followed by a series of letters and numbers. For example, the video URL of Alex Bobo - Orice Furtuna Ar Veni is https://www.youtube.com/watch?v=0aYyZdQcL3E. You can copy the URL by selecting it with your mouse or keyboard, and then pressing Ctrl+C on your keyboard, or right-clicking and choosing Copy.

    -

    Step 3: Paste the URL into a YouTube downloader website

    -

    The third thing you need to do is to paste the URL into a YouTube downloader website. A YouTube downloader website is a website that allows you to download videos from YouTube for free. There are many YouTube downloader websites available online, but we recommend using Y2mate.com, as it is one of the most reliable and easy-to-use ones. To use Y2mate.com, you need to go to its homepage: . Then, you need to paste the URL that you copied in step 2 into the search box on the website. You can do this by pressing Ctrl+V on your keyboard, or right-clicking and choosing Paste.

    -

    download alex bobo orice furtuna ar veni live 2022
    -download alex bobo orice furtuna ar veni mp3
    -download alex bobo orice furtuna ar veni zippyshare
    -download alex bobo orice furtuna ar veni originala
    -download alex bobo orice furtuna ar veni videoclip
    -download alex bobo orice furtuna ar veni gratis
    -download alex bobo orice furtuna ar veni manele noi
    -download alex bobo orice furtuna ar veni versuri
    -download alex bobo orice furtuna ar veni remix
    -download alex bobo orice furtuna ar veni karaoke
    -download alex bobo orice furtuna ar veni ringtone
    -download alex bobo orice furtuna ar veni youtube
    -download alex bobo orice furtuna ar veni fisierulmeu
    -download alex bobo orice furtuna ar veni online
    -download alex bobo orice furtuna ar veni album
    -download alex bobo orice furtuna ar veni radio edit
    -download alex bobo orice furtuna ar veni extended
    -download alex bobo orice furtuna ar veni instrumental
    -download alex bobo orice furtuna ar veni feat florin salam
    -download alex bobo orice furtuna ar veni hit 2022
    -download alex bobo orice furtuna ar veni lyrics
    -download alex bobo orice furtuna ar veni free
    -download alex bobo orice furtuna ar veni audio
    -download alex bobo orice furtuna ar veni 320 kbps
    -download alex bobo orice furtuna ar veni soundcloud
    -download alex bobo orice furtuna ar veni mixtape
    -download alex bobo orice furtuna ar veni spotify
    -download alex bobo orice furtuna ar veni apple music
    -download alex bobo orice furtuna ar veni itunes
    -download alex bobo orice furtuna ar veni amazon music
    -download alex bobo orice furtuna ar veni deezer
    -download alex bobo orice furtuna ar veni tidal
    -download alex bobo orice furtuna ar veni shazam
    -download alex bobo orice furtuna ar veni google play music
    -download alex bobo orice furtuna ar veni napster
    -download alex bobo orice furtuna ar veni pandora
    -download alex bobo orice furtuna ar veni iheartradio
    -download alex bobo orice furtuna ar veni tunein radio
    -download alex bobo orice furtuna ar veni slacker radio
    -download alex bobo orice furtuna ar veni last.fm
    -download alex bobo orice furtuna ar veni musicbrainz
    -download alex bobo orice furtuna ar veni discogs
    -download alex bobo orice furtuna ar veni allmusic
    -download alex bobo orice furtuna ar veni genius lyrics
    -download alex bobo orice furtuna ar veni azlyrics

    -

    Step 4: Choose the format and quality of the download

    -

    The fourth thing you need to do is to choose the format and quality of the download. After you paste the URL into Y2mate.com, it will automatically analyze the video and show you different options for downloading it. You can choose between different formats, such as MP3, MP4, M4A, WEBM, etc. You can also choose between different qualities, such as 360p, 480p, 720p, 1080p, etc. For downloading Alex Bobo - Orice Furtuna Ar Veni as a song, we suggest choosing MP3 as the format and 128kbps as the quality. This will give you a good sound quality without taking up too much space on your device.

    -

    Step 5: Click on the download button and save the file

    -

    The fifth and final thing you need to do is to click on the download button and save the file. After you choose the format and quality of the download, you will see a green download button next to it. You need to click on this button to start downloading the file. Depending on your browser settings, you may be asked to choose a location and a name for saving the file on your device. You can choose any location and name that you want, but make sure that you remember them so that you can find the file later. Once you click on Save or OK, the file will be downloaded and saved on your device.

    -

    Option 2: Spotify

    -

    Another easy way to download Alex Bobo - Orice Furtuna Ar Veni is to use Spotify. Spotify is one of the most popular music streaming platforms in the world, and it is where you can find the song in high quality. Here are the steps to download the song from Spotify:

    -

    Step 1: Download and install Spotify on your device

    -

    The first thing you need to do is to download and install Spotify on your device. Spotify is available for different devices, such as Windows, Mac, Android, iOS, etc. You can download Spotify from its official website: , or from the app store of your device. After you download Spotify, you need to install it by following the instructions on the screen.

    -

    Step 2: Create an account or log in with your existing one

    -

    The next thing you need to do is to create an account or log in with your existing one. To use Spotify, you need to have an account that allows you to access its features and content. You can create a free account or a premium account, depending on your preferences and budget. A free account lets you listen to music with ads and some limitations, while a premium account lets you listen to music without ads and with more benefits, such as offline listening. You can create an account by clicking on the Sign Up button on the Spotify website or app, or by using your Facebook or Google account. You can log in with your existing account by clicking on the Log In button and entering your email and password.

    -

    Step 3: Search for Alex Bobo - Orice Furtuna Ar Veni in the app

    -

    The third thing you need to do is to search for Alex Bobo - Orice Furtuna Ar Veni in the app. To do this, you need to open the Spotify app on your device and tap on the Search icon at the bottom of the screen. Then, you need to type Alex Bobo - Orice Furtuna Ar Veni in the search bar and hit Enter. This will show you the results related to the song, such as the artist, the album, and the playlist. You need to tap on the song title to open it.

    -

    Step 4: Tap on the three dots icon and select download

    -

    The fourth thing you need to do is to tap on the three dots icon and select download. After you open the song, you will see a three dots icon at the top right corner of the screen. You need to tap on this icon to open a menu with different options, such as Share, Add to Playlist, Go to Artist, etc. You need to scroll down and find the Download option and tap on it. This will start downloading the song to your device.

    -

    Step 5: Enjoy the song offline anytime you want

    -

    The fifth and final thing you need to do is to enjoy the song offline anytime you want. After you download the song, you can listen to it without an internet connection or data usage. You can find the song in your Library section of the app, under Downloads. You can also add it to your favorite playlist or share it with your friends.

    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have shown you how to download Alex Bobo - Orice Furtuna Ar Veni, a live session of a gospel song performed by Alex Bobo, a Romanian singer and songwriter. We have also told you more about who Alex Bobo is, what the song is about, and why you should download it. We have given you two easy ways to download the song from YouTube or Spotify, with detailed steps and screenshots.

    -

    Call to action

    -

    We hope that you have enjoyed reading this article and that you have learned something new. If you are interested in downloading Alex Bobo - Orice Furtuna Ar Veni, we encourage you to try one of the methods we have suggested and let us know how it works for you. You can also leave us a comment below with your feedback or questions about the article or the song. Thank you for reading and happy listening!

    -

    FAQs

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md b/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md deleted file mode 100644 index 02df4581ed799f5d5a64521df4776bf0e499b7e2..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md +++ /dev/null @@ -1,79 +0,0 @@ -
    -

    Mobile Legends Bang Bang Unlock All Skin Apk: Everything You Need to Know

    -

    Mobile Legends: Bang Bang (MLBB) is one of the most popular and addictive multiplayer online battle arena (MOBA) games on mobile devices. It features a variety of heroes with different roles, skills, and styles that you can choose from and customize with different skins. Skins are cosmetic items that change the appearance of your heroes and make them look more cool, stylish, or unique.

    -

    Skins can also provide some benefits for your gameplay, such as enhancing your confidence, intimidating your enemies, or showing off your achievements. However, skins are not easy to get in MLBB. They usually cost diamonds, which are the premium currency of the game that you have to buy with real money. Some skins are also limited-time offers or exclusive rewards that you may miss out on if you don't act fast.

    -

    mobile legends bang bang unlock all skin apk


    DOWNLOADhttps://jinyurl.com/2uNQ4n



    -

    That's why some players resort to using an unlock all skin apk, which is a modified version of the MLBB app that claims to give you access to all the skins in the game for free. Sounds too good to be true, right? Well, it is. In this article, we will tell you everything you need to know about the unlock all skin apk, how to download and install it, and why you should avoid using it at all costs.

    -

    How to Download and Install the Unlock All Skin Apk

    -

    If you still want to try using the unlock all skin apk despite our warnings, here are the steps you need to follow:

    -
      -
    1. Find a reliable source for downloading the apk file. This is easier said than done, as there are many fake or malicious websites that claim to offer the unlock all skin apk but actually contain malware, viruses, or spyware that can infect your device or steal your personal information. Be careful and do your research before clicking on any link or downloading any file.
    2. -
    3. Enable unknown sources on your Android device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Depending on your device model and Android version, you may need to tap on the lock screen and security tab or the install unknown apps switch. Then, you need to turn on the unknown sources switch or check the box next to it. You may see a warning message against enabling this option, but you can ignore it if you trust the source of the apk file .
    4. -
    5. Install the apk file on your device. Locate the apk file that you downloaded from your source and tap on it to start the installation process. You may need to grant some permissions for the app to access your device's storage, network, or other features. Follow the on-screen instructions until the installation is complete.
    6. -
    7. Verify if the apk works and what are the possible issues. Launch the MLBB app from your device and check if you can see all the skins in the game. You may need to restart the app or your device if it doesn't work at first. However, be aware that using the unlock all skin apk may cause some problems, such as lagging, crashing, or errors in the game. You may also face legal issues from Moonton, the developer of MLBB, for violating their terms of service and intellectual property rights. They may ban or suspend your account or take legal action against you for using unauthorized mods or hacks.
    8. -
    -

    Conclusion

    -

    In conclusion, using the unlock all skin apk may seem like a tempting way to get all the skins in MLBB for free, but it is not worth the risk and hassle. You may end up damaging your device, compromising your security, or losing your account by using this apk. You may also ruin the fun and fairness of the game for yourself and other players by using an unfair advantage.

    -

    Instead of using the unlock all skin apk, we recommend that you get skins legally in MLBB by following these alternatives:

    - -

    By following these alternatives, you can get skins legally in MLBB without risking your device, account, or reputation. You can also enjoy the game more by supporting its development and respecting its rules.

    -

    FAQs

    -
      -
    1. Is the unlock all skin apk safe to use?
    2. -
    3. No, it is not safe to use. It may contain malware, viruses, or spyware that can harm your device or steal your personal information. It may also violate the terms of service of MLBB and result in your account being banned or suspended.
    4. -
    5. How can I get skins for free in MLBB?
    6. -
    7. There are several ways to get skins for free in MLBB, such as participating in events, redeeming codes, joining Starlight membership, completing tasks, or watching live streams. You can also use diamonds, tickets, or fragments to buy skins in the shop.
    8. -
    9. What are the best skins in MLBB?
    10. -
    11. The best skins in MLBB depend on your personal preference and taste. However, some of the most popular and expensive skins are the Legend skins, which have unique effects, animations, and voice-overs. Some examples of Legend skins are Alucard's Obsidian Blade, Saber's Codename: Storm, Gord's Conqueror, and Miya's Modena Butterfly.
    12. -
    13. How can I update the unlock all skin apk?
    14. -
    15. You cannot update the unlock all skin apk through the Google Play Store. You have to find a new version of the apk file from a third-party source and install it manually. However, this is not recommended as it may expose you to more risks and problems.
    16. -
    17. Can I use the unlock all skin apk on iOS devices?
    18. -
    19. No, you cannot use the unlock all skin apk on iOS devices. The apk file is only compatible with Android devices. If you want to use skins on iOS devices, you have to buy them from the official MLBB app.
    20. -

    -

    mobile legends bang bang mod apk unlimited money and diamond
    -mobile legends bang bang apk + mod (unlimited money, unlock all skins)
    -mobile legends bang bang hack apk download (map hack, unlocked skin)
    -mobile legends mod apk latest version 2023 (unlimited money + diamond + unlocked skin)
    -mobile legends bang bang cheat apk (unlock all heroes, skins, and emotes)
    -how to unlock all skins in mobile legends bang bang for free
    -mobile legends bang bang free skin apk download 2023
    -mobile legends bang bang mod menu apk (unlimited coins, gems, and tickets)
    -mobile legends bang bang premium apk (no ads, no ban, unlocked skin)
    -mobile legends bang bang unlimited everything apk (money, diamond, skin, hero)
    -mobile legends bang bang modded apk offline (play without internet connection)
    -mobile legends bang bang hack tool apk (generate unlimited resources online)
    -mobile legends bang bang cracked apk (full version, unlocked features)
    -mobile legends bang bang vip mod apk (exclusive skins, heroes, and items)
    -mobile legends bang bang pro apk (advanced settings, custom controls, and modes)
    -mobile legends bang bang 5v5 moba mod apk (unlimited battle points, magic dust, and fragments)
    -mobile legends bang bang original apk (no mod, no hack, no cheat)
    -mobile legends bang bang update apk (latest version, new features, and events)
    -mobile legends bang bang old version apk (download previous versions of the game)
    -mobile legends bang bang lite apk (low size, fast performance, and smooth gameplay)
    -mobile legends mod apk unlock all skin 2023
    -mobile legends mod apk unlimited money and diamond 2023
    -mobile legends hack apk download 2023
    -mobile legends mod apk latest version 2023
    -mobile legends cheat apk 2023
    -mobile legends free skin apk download 2023
    -mobile legends mod menu apk 2023
    -mobile legends premium apk 2023
    -mobile legends unlimited everything apk 2023
    -mobile legends modded apk offline 2023
    -mobile legends hack tool apk 2023
    -mobile legends cracked apk 2023
    -mobile legends vip mod apk 2023
    -mobile legends pro apk 2023
    -mobile legends 5v5 moba mod apk 2023
    -mobile legends original apk 2023
    -mobile legends update apk 2023
    -mobile legends old version apk 2023
    -mobile legends lite apk 2023

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md b/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md deleted file mode 100644 index 2311335fbfb8c550cc97b423a891bc2d09e4f307..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md +++ /dev/null @@ -1,112 +0,0 @@ -
    -

    Crazy Taxi Game Free Download for Android APK

    -

    Do you love driving fast and furious cars in a chaotic city? Do you want to experience the thrill of picking up and dropping off passengers in a limited time? Do you want to enjoy a classic arcade game on your Android device? If you answered yes to any of these questions, then you should try Crazy Taxi Game, one of the most popular and fun racing games ever made. In this article, we will tell you everything you need to know about Crazy Taxi Game, how to download it for free as an APK file, and how to play it on your Android device.

    -

    crazy taxi game free download for android apk


    Download File ——— https://jinyurl.com/2uNTHI



    -

    What is Crazy Taxi Game?

    -

    A brief introduction to the game and its features

    -

    Crazy Taxi Game is a video game that was originally released by Sega in 1999 for arcade machines and later ported to various platforms, including Android. The game is set in a fictional city inspired by San Francisco, where you play as one of four taxi drivers who have to pick up and drop off customers as fast as possible. You can choose from three, five, or ten-minute gameplay modes, or play in the original arcade mode with unlimited time. You can also customize your car, driver, and music from a selection of rock songs by bands like The Offspring and Bad Religion.

    -

    The history and popularity of the game

    -

    Crazy Taxi Game was a huge hit when it was first released, thanks to its unique gameplay, colorful graphics, catchy soundtrack, and humorous voice acting. It received critical acclaim from reviewers and gamers alike, and won several awards, such as the Best Arcade Game of 1999 by IGN. It also spawned several sequels, spin-offs, and adaptations, such as Crazy Taxi 2, Crazy Taxi 3, Crazy Taxi City Rush, and even a live-action movie. The game has sold over five million copies worldwide, and has been downloaded over ten million times on Android alone.

    -

    How to Download Crazy Taxi Game for Android APK?

    -

    The requirements and compatibility of the game

    -

    To download and play Crazy Taxi Game on your Android device, you will need at least Android version 4.1 or higher, and about 250 MB of free storage space. The game is compatible with most Android devices, including smartphones and tablets. However, some older or low-end devices may experience performance issues or crashes. You can check the compatibility of your device on the Google Play Store page of the game.

    -

    The steps to download and install the game from different sources

    -

    There are two main ways to download Crazy Taxi Game for Android APK: from the official Google Play Store or from a third-party website. Here are the steps for each method:

    - -

    The advantages and disadvantages of downloading the game as an APK file

    -

    Downloading Crazy Taxi Game as an APK file has some pros and cons that you should be aware of before choosing this method. Here are some of them:

    - - - - - -
    AdvantagesDisadvantages
    You can download the game even if it is not available in your region or country.You may not get the latest updates and features of the game.
    You can download the game without using the Google Play Store or having a Google account.You may expose your device to malware or viruses from untrusted sources.
    You can download the game for free without any ads or in-app purchases.You may violate the terms and conditions of the game developer or publisher.
    -

    How to Play Crazy Taxi Game on Android?

    -

    The gameplay and controls of the game

    -

    Crazy Taxi Game is easy to play but hard to master. The gameplay is simple: you have to drive your taxi around the city and pick up customers who are waiting for you. You have to take them to their destinations as quickly as possible, while avoiding traffic, obstacles, and other hazards. You can earn extra money by performing stunts, such as jumps, drifts, and near misses. You can also earn tips by satisfying your customers' preferences, such as driving fast, slow, or crazy. The more money you make, the higher your score and rank will be.

    -

    The controls of the game are intuitive and responsive. You can use either touch or tilt controls to steer your taxi. You can also use buttons to accelerate, brake, reverse, and switch lanes. You can also use a horn button to honk at other vehicles or pedestrians. You can change the control settings from the options menu according to your preference.

    -

    The modes and challenges of the game

    -

    Crazy Taxi Game offers four different modes to choose from: Arcade, Original, Crazy Box, and Leaderboards. Here is a brief description of each mode:

    -

    The tips and tricks to master the game

    -

    Crazy Taxi Game is a game that requires skill, strategy, and luck. Here are some tips and tricks to help you master the game and become a crazy taxi driver:

    -

    Crazy Taxi Classic APK latest version for Android
    -How to install Crazy Taxi Classic APK on Android phone
    -Crazy Taxi Classic mobile game by SEGA
    -Play Crazy Taxi Classic online for free
    -Crazy Taxi Classic tips and tricks for Android
    -Crazy Taxi Classic mod APK unlimited money
    -Crazy Taxi Classic APK file size and requirements
    -Crazy Taxi Classic reviews and ratings on Google Play
    -Crazy Taxi Classic gameplay modes and features
    -Crazy Taxi Classic controller support for Android
    -Download Crazy Taxi Classic APK from APKCombo
    -Crazy Taxi Classic APK download link for Android
    -Crazy Taxi Classic update and patch notes for Android
    -Crazy Taxi Classic cheats and hacks for Android
    -Crazy Taxi Classic best drivers and cars for Android
    -Crazy Taxi Classic APK old versions for Android
    -Crazy Taxi Classic APK download for PC Windows 10
    -Crazy Taxi Classic alternatives and similar games for Android
    -Crazy Taxi Classic APK mirror and direct download for Android
    -Crazy Taxi Classic APK pure and safe download for Android
    -Crazy Taxi Classic offline mode and data usage for Android
    -Crazy Taxi Classic achievements and leaderboards for Android
    -Crazy Taxi Classic wallpapers and themes for Android
    -Crazy Taxi Classic soundtracks and music for Android
    -Crazy Taxi Classic bugs and issues for Android
    -Crazy Taxi Classic FAQ and guide for Android
    -Crazy Taxi Classic videos and screenshots for Android
    -Crazy Taxi Classic fan art and memes for Android
    -Crazy Taxi Classic forum and community for Android
    -Crazy Taxi Classic news and events for Android

    - -

    Conclusion

    -

    A summary of the main points and a call to action

    -

    Crazy Taxi Game is a classic arcade game that you can download for free as an APK file on your Android device. The game lets you drive a taxi in a crazy city and pick up customers in a limited time. The game has four modes, Arcade, Original, Crazy Box, and Leaderboards, that offer different challenges and fun. The game also has amazing graphics, sound, and music that make it more enjoyable. If you are looking for a fun and exciting racing game on your Android device, you should definitely try Crazy Taxi Game today.

    -

    FAQs

    -

    Q1. Is Crazy Taxi Game free to play on Android?

    -

    A1. Yes, Crazy Taxi Game is free to play on Android devices. You can download it from the Google Play Store or from a third-party website as an APK file.

    -

    Q2. Is Crazy Taxi Game safe to download as an APK file?

    -

    A2. Yes, Crazy Taxi Game is safe to download as an APK file if you download it from a reputable and trustworthy website. However, you should always be careful when downloading apps from unknown sources and scan them for malware or viruses before installing them.
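    If you download the APK to a computer first, one way to push it onto the phone is the Android Debug Bridge (adb). This is not part of the article's own instructions; it is only a sketch that assumes adb is installed, USB debugging is enabled on the device, and the file name below is a placeholder for whatever the downloaded APK is actually called.

```powershell
# Sideload a downloaded APK from a PC (assumes adb is on the PATH and the
# device is connected with USB debugging enabled; the file name is a placeholder).
adb devices                              # confirm the phone shows up as "device"
adb install .\crazy-taxi-classic.apk     # install the scanned APK onto the phone
```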

    -

    Q3. How can I update Crazy Taxi Game on Android?

    -

    A3. You can update Crazy Taxi Game on Android by following these steps:

    - -

    Q4. What are some alternatives to Crazy Taxi Game on Android?

    -

    A4. Some alternatives to Crazy Taxi Game on Android are:

    - -

    Q5. How can I contact the developers of Crazy Taxi Game?

    -

    A5. You can contact the developers of Crazy Taxi Game by sending them an email at help@sega.net or by visiting their website at https://www.sega.com/.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md b/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md deleted file mode 100644 index 259ea56f5471303e7ae6d52a0900302f16401566..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md +++ /dev/null @@ -1,170 +0,0 @@ -
    -

    How to Download and Install VLC Media Player on Windows RT 8.1 Devices

    -

    If you have a Windows RT 8.1 device, such as a Surface RT or Surface 2 tablet, you might be wondering how to play your favorite media files on it. Unfortunately, Windows RT 8.1 has some limitations that prevent you from running most desktop applications, including many popular media players. However, there is one media player that can run on Windows RT 8.1 devices: VLC Media Player.

    -

    VLC Media Player is a free and open source cross-platform multimedia player that can play most media files, as well as DVDs, audio CDs, VCDs, and various streaming protocols. It also has many features that make it a versatile and powerful tool for media playback and manipulation. In this article, we will show you how to download and install VLC Media Player on your Windows RT 8.1 device, and how to use it to play, convert, edit, and download media files.

    -

    vlc windows rt 8.1 download


    Download Filehttps://jinyurl.com/2uNRFM



    -

    What is Windows RT 8.1 and What are its Limitations?

    -

    Windows RT 8.1 is a version of Windows 8.1 optimized for thin, light devices with extended battery life, designed for life on the go. It runs on devices that use the ARM architecture, which is different from the x86 architecture used by most desktop PCs. Windows RT 8.1 only runs built-in apps or apps that you download from the Windows Store, and it automatically updates itself and protects itself from viruses and malware.

    -

    However, while Windows RT 8.1 inherits the appearance and functionality of Windows 8.1, it has some drawbacks that you should be aware of before buying a Windows RT 8.1 device or downloading VLC Media Player for it:

    - -

    If you want to learn more about Windows RT 8.1 and its limitations, you can check out this FAQ from Microsoft or this article from CNET.

    -

    What is VLC Media Player and What are its Features?

    -

    VLC Media Player is one of the most popular and widely used media players in the world. It was developed by VideoLAN, a non-profit organization that promotes free and open source software for multimedia. It was first released in 2001 and has since been updated regularly with new features and bug fixes.

    -

    VLC Media Player has many features that make it a great choice for media playback and manipulation on Windows RT 8.1 devices. Some of the features of VLC Media Player are:

    - -

    If you want to learn more about VLC Media Player and its features, you can check out this official website or this user guide.

    -


    -

    How to Download and Install VLC Media Player on Windows RT 8.1 Devices

    -

    There are two ways to download and install VLC Media Player on your Windows RT 8.1 device: from the Windows Store or from the official website. We will explain both methods below:

    -

    How to Download VLC Media Player for Windows RT 8.1 from the Windows Store

    -

    The easiest way to get VLC Media Player on your Windows RT 8.1 device is to download it from the Windows Store. Here are the steps to do so:

    -
      -
    1. Open the Windows Store app on your device and search for "VLC" in the search box.
    2. Select the app named "VLC for Windows Store" from the search results and tap on it.
    3. Tap on the "Install" button and wait for the app to download and install on your device.
    4. Once the installation is complete, you can launch VLC Media Player from the Start screen or the Apps list.
    -

    Note that this version of VLC Media Player is different from the desktop version that you can download from the official website. It has a different interface and some features may not be available or may work differently. However, it still supports most media file formats and has basic playback and conversion functions.

    -

    How to Download VLC Media Player for Windows RT 8.1 from the Official Website

    -

    If you want to get the desktop version of VLC Media Player on your Windows RT 8.1 device, you will need to download it from the official website and install it manually. However, this method requires some technical skill and involves some risk: you will need to enable developer mode on your device and run a PowerShell script that bypasses the digital signature requirement of Windows RT 8.1. This may void your warranty or damage your device if done incorrectly, so we do not recommend it unless you are confident in what you are doing and understand the consequences.

    -

    If you still want to proceed with this method, here are the steps to do so:

    -
      -
    1. Download the latest version of VLC Media Player for Windows RT 8.1 from this link. Make sure you choose the ARM version that matches your device's architecture.
    2. Extract the downloaded ZIP file to a folder on your device or a USB drive.
    3. Open the Settings app on your device and go to "Update & security" > "For developers".
    4. Select "Developer mode" and confirm by tapping "Yes". This will enable you to run unsigned apps on your device.
    5. Open File Explorer on your device and go to "C:\Windows\System32". Find the file named "WindowsPowerShell\v1.0\powershell.exe" and copy it to another folder (e.g. "C:\Temp"). This will create a copy of the PowerShell executable that you can run without restrictions.
    6. Open the folder where you copied the PowerShell executable, right-click on it, and select "Run as administrator". This will open a PowerShell window with elevated privileges.
    7. In the PowerShell window, type the following command and press Enter: Set-ExecutionPolicy Unrestricted. This will allow you to run any script on your device.
    8. Now, type the following command and press Enter: cd "C:\Users\YourUserName\Downloads\VLC-RT-3.0.16". Replace "YourUserName" with your actual user name and "VLC-RT-3.0.16" with the name of the folder where you extracted the VLC Media Player ZIP file. This will change the directory to the folder where the VLC Media Player files are located.
    9. Finally, type the following command and press Enter: .\Add-AppDevPackage.ps1. This will run the script that installs VLC Media Player on your device.
    10. Follow the instructions on the screen and wait for the installation to complete. You may need to enter your Microsoft account credentials and accept some terms and conditions.
    11. Once the installation is complete, you can close the PowerShell window and launch VLC Media Player from the Start screen or the Apps list. (The PowerShell commands from steps 7-9 are consolidated in the sketch after this list.)
    -
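    For convenience, here is the same PowerShell sequence from steps 7-9 in one place. This is only a sketch of what the article describes: the folder path and version number are the article's own examples and will differ on your device, and Add-AppDevPackage.ps1 is the installer script that ships inside the extracted VLC package.

```powershell
# Run these in the elevated PowerShell window from step 6.
# The path below is the article's example -- point it at wherever you
# actually extracted the VLC package.
Set-ExecutionPolicy Unrestricted                       # allow unsigned scripts to run
cd "C:\Users\YourUserName\Downloads\VLC-RT-3.0.16"     # folder with the extracted files
.\Add-AppDevPackage.ps1                                # installer script from the package
```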

    Note that this version of VLC Media Player is identical to the desktop version that you can download from the official website. It has the same interface and features as the desktop version, but it may not be as stable or as compatible on Windows RT 8.1 devices as it is on a regular PC. You may encounter errors or crashes while using it, so use it at your own risk.

    -

    How to Use VLC Media Player on Windows RT 8.1 Devices

    -

    Now that you have downloaded and installed VLC Media Player on your Windows RT 8.1 device, you can use it to play, convert, edit, and download media files. Here are some tips on how to use VLC Media Player on Windows RT 8.1 devices:

    -

    How to Play Various Media Files with VLC Media Player

    -

    VLC Media Player can play almost any media file format that you throw at it, without the need for additional codecs or plugins. Here are some ways to play various media files with VLC Media Player:

    - -

    How to Adjust Video and Audio Settings with VLC Media Player

    -

    VLC Media Player allows you to adjust various video and audio settings to enhance your media playback experience. Here are some ways to adjust video and audio settings with VLC Media Player:

    - -

    How to Add Subtitles and Synchronize Them with VLC Media Player

    -

    VLC Media Player can display subtitles for any video file that has a separate subtitle file in SRT, SSA, ASS, or VTT format. You can also synchronize the subtitles with the audio and video tracks if they are out of sync. Here are some ways to add subtitles and synchronize them with VLC Media Player:

    - -

    How to Convert Videos to Any Format with VLC Media Player

    -

    VLC Media Player can also convert videos to any format that you want, such as MP4, AVI, WMV, FLV, etc. You can also choose from various presets for different devices, such as iPhone, iPad, Android, etc. Here are some ways to convert videos to any format with VLC Media Player:

    - -

    After tapping on the "Convert" button, you will see a screen where you can choose the output format, destination, and options for your converted video file. Here are some tips on choosing them:

    - -

    Once you have chosen the output format, destination, and options for your converted video file, tap on the "Start" button and wait for VLC Media Player to convert your video file. You can see the progress of the conversion on the playback screen. You can also pause or cancel the conversion at any time by tapping on the "Pause" or "Stop" button.

    -

    Once the conversion is complete, you can find your converted video file in the destination folder that you chose. You can also play it with VLC Media Player or any other media player that supports the output format.
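    If you prefer to script a conversion instead of using the dialog described above, desktop builds of VLC also expose the same transcode chain on the command line. The following is only a sketch, not something the article covers: it assumes the sideloaded desktop build behaves like regular desktop VLC, that vlc.exe is reachable from your shell, and that the file names and codec settings are placeholders you would adjust.

```powershell
# Convert input.avi to an H.264/AAC MP4 with VLC's command-line transcode chain
# (assumes vlc.exe is on the PATH; file names and codec settings are placeholders).
& vlc.exe -I dummy "input.avi" `
    --sout "#transcode{vcodec=h264,acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=output.mp4}" `
    vlc://quit
```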

    -

    Conclusion

    -

    VLC Media Player is a powerful and versatile media player that can run on Windows RT 8.1 devices and play, convert, edit, and download media files. It can work around some of the limitations of Windows RT 8.1 and enhance your media playback experience. However, the sideloaded desktop build may not be as stable or as compatible on Windows RT 8.1 devices as the Windows Store version of VLC Media Player, so use it with caution and at your own risk.

    -

    We hope that this article has helped you learn how to download and install VLC Media Player on your Windows RT 8.1 device, and how to use it to play, convert, edit, and download media files. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about VLC Media Player and Windows RT 8.1:

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py deleted file mode 100644 index 22e8017b6d70c8399b3be6a2555485634c03e72d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py +++ /dev/null @@ -1,414 +0,0 @@ -# Copyright (c) 2022 NVIDIA CORPORATION. -# Licensed under the MIT license. - -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. - - -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -import numpy as np -from .activations import Snake,SnakeBeta -from .alias_free_torch import * -import os -from omegaconf import OmegaConf - -LRELU_SLOPE = 0.1 - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - -class AMPBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None): - super(AMPBlock1, self).__init__() - self.h = h - - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError("activation incorrectly specified. 
check the config file and look for 'activation'.") - - def forward(self, x): - acts1, acts2 = self.activations[::2], self.activations[1::2] - for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): - xt = a1(x) - xt = c1(xt) - xt = a2(xt) - xt = c2(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class AMPBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None): - super(AMPBlock2, self).__init__() - self.h = h - - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - self.num_layers = len(self.convs) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.") - - def forward(self, x): - for c, a in zip (self.convs, self.activations): - xt = a(x) - xt = c(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class BigVGAN(torch.nn.Module): - # this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks. - def __init__(self, h): - super(BigVGAN, self).__init__() - self.h = h - - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - - # pre conv - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - - # define which AMPBlock to use. BigVGAN uses AMPBlock1 as default - resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2 - - # transposed conv-based upsamplers. 
does not apply anti-aliasing - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append(nn.ModuleList([ - weight_norm(ConvTranspose1d(h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2)) - ])) - - # residual blocks using anti-aliased multi-periodicity composition modules (AMP) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d, activation=h.activation)) - - # post conv - if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing - activation_post = Snake(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing - activation_post = SnakeBeta(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - else: - raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.") - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - - # weight initialization - for i in range(len(self.ups)): - self.ups[i].apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - # pre conv - x = self.conv_pre(x) - - for i in range(self.num_upsamples): - # upsampling - for i_up in range(len(self.ups[i])): - x = self.ups[i][i_up](x) - # AMP blocks - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - - # post conv - x = self.activation_post(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - for l_i in l: - remove_weight_norm(l_i) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, h, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.d_mult = h.discriminator_channel_mult - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, int(32*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(32*self.d_mult), int(128*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(128*self.d_mult), int(512*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(512*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(1024*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(int(1024*self.d_mult), 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = 
F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, h): - super(MultiPeriodDiscriminator, self).__init__() - self.mpd_reshapes = h.mpd_reshapes - print("mpd_reshapes: {}".format(self.mpd_reshapes)) - discriminators = [DiscriminatorP(h, rs, use_spectral_norm=h.use_spectral_norm) for rs in self.mpd_reshapes] - self.discriminators = nn.ModuleList(discriminators) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorR(nn.Module): - def __init__(self, cfg, resolution): - super().__init__() - - self.resolution = resolution - assert len(self.resolution) == 3, \ - "MRD layer requires list with len=3, got {}".format(self.resolution) - self.lrelu_slope = LRELU_SLOPE - - norm_f = weight_norm if cfg.use_spectral_norm == False else spectral_norm - if hasattr(cfg, "mrd_use_spectral_norm"): - print("INFO: overriding MRD use_spectral_norm as {}".format(cfg.mrd_use_spectral_norm)) - norm_f = weight_norm if cfg.mrd_use_spectral_norm == False else spectral_norm - self.d_mult = cfg.discriminator_channel_mult - if hasattr(cfg, "mrd_channel_mult"): - print("INFO: overriding mrd channel multiplier as {}".format(cfg.mrd_channel_mult)) - self.d_mult = cfg.mrd_channel_mult - - self.convs = nn.ModuleList([ - norm_f(nn.Conv2d(1, int(32*self.d_mult), (3, 9), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 3), padding=(1, 1))), - ]) - self.conv_post = norm_f(nn.Conv2d(int(32 * self.d_mult), 1, (3, 3), padding=(1, 1))) - - def forward(self, x): - fmap = [] - - x = self.spectrogram(x) - x = x.unsqueeze(1) - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, self.lrelu_slope) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - def spectrogram(self, x): - n_fft, hop_length, win_length = self.resolution - x = F.pad(x, (int((n_fft - hop_length) / 2), int((n_fft - hop_length) / 2)), mode='reflect') - x = x.squeeze(1) - x = torch.stft(x, n_fft=n_fft, hop_length=hop_length, win_length=win_length, center=False, return_complex=True) - x = torch.view_as_real(x) # [B, F, TT, 2] - mag = torch.norm(x, p=2, dim =-1) #[B, F, TT] - - return mag - - -class MultiResolutionDiscriminator(nn.Module): - def __init__(self, cfg, debug=False): - super().__init__() - self.resolutions = cfg.resolutions - assert len(self.resolutions) == 3,\ - "MRD requires list of list with len=3, each element having a list with len=3. 
got {}".\ - format(self.resolutions) - self.discriminators = nn.ModuleList( - [DiscriminatorR(cfg, resolution) for resolution in self.resolutions] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(x=y) - y_d_g, fmap_g = d(x=y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss*2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - - -class VocoderBigVGAN(object): - def __init__(self, ckpt_vocoder,device='cuda'): - vocoder_sd = torch.load(os.path.join(ckpt_vocoder,'best_netG.pt'), map_location='cpu') - - vocoder_args = OmegaConf.load(os.path.join(ckpt_vocoder,'args.yml')) - - self.generator = BigVGAN(vocoder_args) - self.generator.load_state_dict(vocoder_sd['generator']) - self.generator.eval() - - self.device = device - self.generator.to(self.device) - - def vocode(self, spec): - with torch.no_grad(): - if isinstance(spec,np.ndarray): - spec = torch.from_numpy(spec).unsqueeze(0) - spec = spec.to(dtype=torch.float32,device=self.device) - return self.generator(spec).squeeze().cpu().numpy() - - def __call__(self, wav): - return self.vocode(wav) diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py b/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py deleted file mode 100644 index 8699fa0a2fbfd7d1855b04c65d62eb31da03c3e5..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py +++ /dev/null @@ -1,45 +0,0 @@ -import sys -from pathlib import Path - -import wandb - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from train import parse_opt, train -from utils.callbacks import Callbacks -from utils.general import increment_path -from utils.torch_utils import select_device - - -def sweep(): - wandb.init() - # Get hyp dict from sweep agent. Copy because train() modifies parameters which confused wandb. 
- hyp_dict = vars(wandb.config).get("_items").copy() - - # Workaround: get necessary opt args - opt = parse_opt(known=True) - opt.batch_size = hyp_dict.get("batch_size") - opt.save_dir = str( - increment_path( - Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve - ) - ) - opt.epochs = hyp_dict.get("epochs") - opt.nosave = True - opt.data = hyp_dict.get("data") - opt.weights = str(opt.weights) - opt.cfg = str(opt.cfg) - opt.data = str(opt.data) - opt.hyp = str(opt.hyp) - opt.project = str(opt.project) - device = select_device(opt.device, batch_size=opt.batch_size) - - # train - train(hyp_dict, opt, device, callbacks=Callbacks()) - - -if __name__ == "__main__": - sweep() diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/5.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/5.js deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py b/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py deleted file mode 100644 index 0b4aa4facb12be8534522c9240ca6e63ce4a68b5..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py +++ /dev/null @@ -1,36 +0,0 @@ -import json -import cv2 -import os -from basicsr.utils import img2tensor - - -class dataset_coco_mask_color(): - def __init__(self, path_json, root_path_im, root_path_mask, image_size): - super(dataset_coco_mask_color, self).__init__() - with open(path_json, 'r', encoding='utf-8') as fp: - data = json.load(fp) - data = data['annotations'] - self.files = [] - self.root_path_im = root_path_im - self.root_path_mask = root_path_mask - for file in data: - name = "%012d.png" % file['image_id'] - self.files.append({'name': name, 'sentence': file['caption']}) - - def __getitem__(self, idx): - file = self.files[idx] - name = file['name'] - # print(os.path.join(self.root_path_im, name)) - im = cv2.imread(os.path.join(self.root_path_im, name.replace('.png', '.jpg'))) - im = cv2.resize(im, (512, 512)) - im = img2tensor(im, bgr2rgb=True, float32=True) / 255. - - mask = cv2.imread(os.path.join(self.root_path_mask, name)) # [:,:,0] - mask = cv2.resize(mask, (512, 512)) - mask = img2tensor(mask, bgr2rgb=True, float32=True) / 255. # [0].unsqueeze(0)#/255. 
- - sentence = file['sentence'] - return {'im': im, 'mask': mask, 'sentence': sentence} - - def __len__(self): - return len(self.files) diff --git a/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md b/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md deleted file mode 100644 index 98b00b0487e2ab609b0b29eb82c55d9215ab3406..0000000000000000000000000000000000000000 --- a/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MagicPrompt Stable Diffusion -emoji: 😻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: Gustavosta/MagicPrompt-Stable-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp deleted file mode 100644 index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp +++ /dev/null @@ -1,87 +0,0 @@ -#include "libipc/buffer.h" -#include "libipc/utility/pimpl.h" - -#include - -namespace ipc { - -bool operator==(buffer const & b1, buffer const & b2) { - return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0); -} - -bool operator!=(buffer const & b1, buffer const & b2) { - return !(b1 == b2); -} - -class buffer::buffer_ : public pimpl { -public: - void* p_; - std::size_t s_; - void* a_; - buffer::destructor_t d_; - - buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a) - : p_(p), s_(s), a_(a), d_(d) { - } - - ~buffer_() { - if (d_ == nullptr) return; - d_((a_ == nullptr) ? 
p_ : a_, s_); - } -}; - -buffer::buffer() - : buffer(nullptr, 0, nullptr, nullptr) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d) - : p_(p_->make(p, s, d, nullptr)) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional) - : p_(p_->make(p, s, d, additional)) { -} - -buffer::buffer(void* p, std::size_t s) - : buffer(p, s, nullptr) { -} - -buffer::buffer(char const & c) - : buffer(const_cast(&c), 1) { -} - -buffer::buffer(buffer&& rhs) - : buffer() { - swap(rhs); -} - -buffer::~buffer() { - p_->clear(); -} - -void buffer::swap(buffer& rhs) { - std::swap(p_, rhs.p_); -} - -buffer& buffer::operator=(buffer rhs) { - swap(rhs); - return *this; -} - -bool buffer::empty() const noexcept { - return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0); -} - -void* buffer::data() noexcept { - return impl(p_)->p_; -} - -void const * buffer::data() const noexcept { - return impl(p_)->p_; -} - -std::size_t buffer::size() const noexcept { - return impl(p_)->s_; -} - -} // namespace ipc diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py b/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py deleted file mode 100644 index bf3a20e61b0baf5e85377570cdf0f235bade21bd..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py +++ /dev/null @@ -1,12 +0,0 @@ -# Device -cuda_visible_devices = '0' -device = 'cuda:0' - -# Logs -training_step = 1 -image_rec_result_log_snapshot = 100 -pivotal_training_steps = 0 -model_snapshot_interval = 400 - -# Run name to be updated during PTI -run_name = '' diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py deleted file mode 100644 index acbcd819b14b739a89d1d03550af0042cf6d698c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py +++ /dev/null @@ -1,1205 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 Harutatsu Akiyama and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import argparse -import logging -import math -import os -import shutil -import warnings -from pathlib import Path -from urllib.parse import urlparse - -import accelerate -import datasets -import numpy as np -import PIL -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from packaging import version -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl_instruct_pix2pix import ( - StableDiffusionXLInstructPix2PixPipeline, -) -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, deprecate, is_wandb_available, load_image -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.19.0") - -logger = get_logger(__name__, log_level="INFO") - -DATASET_NAME_MAPPING = { - "fusing/instructpix2pix-1000-samples": ("file_name", "edited_image", "edit_prompt"), -} -WANDB_TABLE_COL_NAMES = ["file_name", "edited_image", "edit_prompt"] - - -def import_model_class_from_model_name_or_path( - pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder" -): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, subfolder=subfolder, revision=revision - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "CLIPTextModelWithProjection": - from transformers import CLIPTextModelWithProjection - - return CLIPTextModelWithProjection - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(): - parser = argparse.ArgumentParser(description="Script to train Stable Diffusion XL for InstructPix2Pix.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_vae_model_name_or_path", - type=str, - default=None, - help="Path to an improved VAE to stabilize training. For more details check out: https://github.com/huggingface/diffusers/pull/4038.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." 
- ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--original_image_column", - type=str, - default="input_image", - help="The column of the dataset containing the original image on which edits where made.", - ) - parser.add_argument( - "--edited_image_column", - type=str, - default="edited_image", - help="The column of the dataset containing the edited image.", - ) - parser.add_argument( - "--edit_prompt_column", - type=str, - default="edit_prompt", - help="The column of the dataset containing the edit instruction.", - ) - parser.add_argument( - "--val_image_url_or_path", - type=str, - default=None, - help="URL to the original image that you would like to edit (used during inference for debugging purposes).", - ) - parser.add_argument( - "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_steps", - type=int, - default=100, - help=( - "Run fine-tuning validation every X steps. The validation process consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="instruct-pix2pix-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=256, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this resolution." - ), - ) - parser.add_argument( - "--crops_coords_top_left_h", - type=int, - default=0, - help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), - ) - parser.add_argument( - "--crops_coords_top_left_w", - type=int, - default=0, - help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." 
- ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--conditioning_dropout_prob", - type=float, - default=None, - help="Conditioning dropout probability. Drops out the conditionings (image and edit prompt) used in training InstructPix2Pix. See section 3.2.1 in the paper: https://arxiv.org/abs/2211.09800.", - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.") - parser.add_argument( - "--non_ema_revision", - type=str, - default=None, - required=False, - help=( - "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or" - " remote repository specified with --pretrained_model_name_or_path." - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - # default to using the same revision for the non-ema model if not specified - if args.non_ema_revision is None: - args.non_ema_revision = args.revision - - return args - - -def convert_to_np(image, resolution): - if isinstance(image, str): - image = PIL.Image.open(image) - image = image.convert("RGB").resize((resolution, resolution)) - return np.array(image).transpose(2, 0, 1) - - -def main(): - args = parse_args() - - if args.non_ema_revision is not None: - deprecate( - "non_ema_revision!=None", - "0.15.0", - message=( - "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to" - " use `--variant=non_ema` instead." - ), - ) - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - vae_path = ( - args.pretrained_model_name_or_path - if args.pretrained_vae_model_name_or_path is None - else args.pretrained_vae_model_name_or_path - ) - vae = AutoencoderKL.from_pretrained( - vae_path, - subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None, - revision=args.revision, - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # InstructPix2Pix uses an additional image for conditioning. To accommodate that, - # it uses 8 channels (instead of 4) in the first (conv) layer of the UNet. This UNet is - # then fine-tuned on the custom InstructPix2Pix dataset. This modified UNet is initialized - # from the pre-trained checkpoints. For the extra channels added to the first layer, they are - # initialized to zero. 
- logger.info("Initializing the XL InstructPix2Pix UNet from the pretrained UNet.") - in_channels = 8 - out_channels = unet.conv_in.out_channels - unet.register_to_config(in_channels=in_channels) - - with torch.no_grad(): - new_conv_in = nn.Conv2d( - in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding - ) - new_conv_in.weight.zero_() - new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) - unet.conv_in = new_conv_in - - # Create EMA for the unet. - if args.use_ema: - ema_unet = EMAModel(unet.parameters(), model_cls=UNet2DConditionModel, model_config=unet.config) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # `accelerate` 0.16.0 will have better support for customized saving - if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - if args.use_ema: - ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema")) - - for i, model in enumerate(models): - model.save_pretrained(os.path.join(output_dir, "unet")) - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - def load_model_hook(models, input_dir): - if args.use_ema: - load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel) - ema_unet.load_state_dict(load_model.state_dict()) - ema_unet.to(accelerator.device) - del load_model - - for i in range(len(models)): - # pop models so that they are not loaded again - model = models.pop() - - # load diffusers style into model - load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. 
You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - unet.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/main/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) - if args.original_image_column is None: - original_image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - original_image_column = args.original_image_column - if original_image_column not in column_names: - raise ValueError( - f"--original_image_column' value '{args.original_image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.edit_prompt_column is None: - edit_prompt_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - edit_prompt_column = args.edit_prompt_column - if edit_prompt_column not in column_names: - raise ValueError( - f"--edit_prompt_column' value '{args.edit_prompt_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.edited_image_column is None: - edited_image_column = dataset_columns[2] if dataset_columns is not None else column_names[2] - else: - edited_image_column = args.edited_image_column - if edited_image_column not in column_names: - raise ValueError( - f"--edited_image_column' value '{args.edited_image_column}' needs to be one of: {', '.join(column_names)}" - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - warnings.warn(f"weight_dtype {weight_dtype} may cause nan during vae encoding", UserWarning) - - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - warnings.warn(f"weight_dtype {weight_dtype} may cause nan during vae encoding", UserWarning) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(captions, tokenizer): - inputs = tokenizer( - captions, - max_length=tokenizer.model_max_length, - padding="max_length", - truncation=True, - return_tensors="pt", - ) - return inputs.input_ids - - # Preprocessing the datasets. 
- train_transforms = transforms.Compose( - [ - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - ] - ) - - def preprocess_images(examples): - original_images = np.concatenate( - [convert_to_np(image, args.resolution) for image in examples[original_image_column]] - ) - edited_images = np.concatenate( - [convert_to_np(image, args.resolution) for image in examples[edited_image_column]] - ) - # We need to ensure that the original and the edited images undergo the same - # augmentation transforms. - images = np.concatenate([original_images, edited_images]) - images = torch.tensor(images) - images = 2 * (images / 255) - 1 - return train_transforms(images) - - # Load scheduler, tokenizer and models. - tokenizer_1 = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False - ) - tokenizer_2 = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False - ) - text_encoder_cls_1 = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - text_encoder_cls_2 = import_model_class_from_model_name_or_path( - args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" - ) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder_1 = text_encoder_cls_1.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - text_encoder_2 = text_encoder_cls_2.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision - ) - - # We ALWAYS pre-compute the additional condition embeddings needed for SDXL - # UNet as the model is already big and it uses two text encoders. 
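
The `preprocess_images` helper above concatenates the original and edited images into a single tensor before the random crop and flip, so both images receive exactly the same augmentation and stay pixel-aligned. Below is a minimal, self-contained sketch of that pattern, using small dummy tensors in place of dataset images; the 2x stacking and the later `chunk(2)` split are the only parts carried over from the script.

```python
import torch
from torchvision import transforms

# Two "images" that must stay aligned after random augmentation.
original = torch.arange(3 * 8 * 8, dtype=torch.float32).reshape(1, 3, 8, 8)
edited = original + 1000.0  # easy to tell apart, same spatial layout

# Concatenate along the batch dimension so a single call to the random
# transforms crops/flips both tensors in exactly the same way.
both = torch.cat([original, edited], dim=0)

augment = transforms.Compose(
    [
        transforms.RandomCrop(4),
        transforms.RandomHorizontalFlip(p=1.0),  # always flip, to make the check obvious
    ]
)
both_aug = augment(both)

# Split back into the original/edited halves, mirroring `chunk(2)` in the script.
original_aug, edited_aug = both_aug.chunk(2)

# The two halves were cropped and flipped identically, so their difference is
# still the constant offset added above.
assert torch.allclose(edited_aug - original_aug, torch.full_like(original_aug, 1000.0))
print(original_aug.shape, edited_aug.shape)  # torch.Size([1, 3, 4, 4]) twice
```

If the two images were augmented separately, the random crop offsets would differ and the pairs would no longer line up, which is why the script augments them as one batch.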
- text_encoder_1.to(accelerator.device, dtype=weight_dtype) - text_encoder_2.to(accelerator.device, dtype=weight_dtype) - tokenizers = [tokenizer_1, tokenizer_2] - text_encoders = [text_encoder_1, text_encoder_2] - - # Freeze vae and text_encoders - vae.requires_grad_(False) - text_encoder_1.requires_grad_(False) - text_encoder_2.requires_grad_(False) - - # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt - def encode_prompt(text_encoders, tokenizers, prompt): - prompt_embeds_list = [] - - for tokenizer, text_encoder in zip(tokenizers, text_encoders): - text_inputs = tokenizer( - prompt, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {tokenizer.model_max_length} tokens: {removed_text}" - ) - - prompt_embeds = text_encoder( - text_input_ids.to(text_encoder.device), - output_hidden_states=True, - ) - - # We are only ALWAYS interested in the pooled output of the final text encoder - pooled_prompt_embeds = prompt_embeds[0] - prompt_embeds = prompt_embeds.hidden_states[-2] - bs_embed, seq_len, _ = prompt_embeds.shape - prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1) - prompt_embeds_list.append(prompt_embeds) - - prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) - pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1) - return prompt_embeds, pooled_prompt_embeds - - # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt - def encode_prompts(text_encoders, tokenizers, prompts): - prompt_embeds_all = [] - pooled_prompt_embeds_all = [] - - for prompt in prompts: - prompt_embeds, pooled_prompt_embeds = encode_prompt(text_encoders, tokenizers, prompt) - prompt_embeds_all.append(prompt_embeds) - pooled_prompt_embeds_all.append(pooled_prompt_embeds) - - return torch.stack(prompt_embeds_all), torch.stack(pooled_prompt_embeds_all) - - # Adapted from examples.dreambooth.train_dreambooth_lora_sdxl - # Here, we compute not just the text embeddings but also the additional embeddings - # needed for the SD XL UNet to operate. 
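
`encode_prompt` above runs the prompt through both SDXL text encoders, keeps the penultimate hidden state of each, concatenates them along the feature axis, and retains only the pooled output of the second encoder. Here is a shape-only sketch with random stand-in tensors; the widths 768 and 1280 are assumed as the typical CLIP ViT-L and OpenCLIP ViT-bigG sizes and are not values read from the script.

```python
import torch

# Stand-ins for the penultimate hidden states of the two text encoders for a
# single prompt (batch 1, 77 tokens). 768 and 1280 are assumed widths.
hidden_1 = torch.randn(1, 77, 768)
hidden_2 = torch.randn(1, 77, 1280)

# Stand-in for the pooled output of the *second* encoder, which is the one the
# loop above keeps as `pooled_prompt_embeds`.
pooled_2 = torch.randn(1, 1280)

# Per-token embeddings from both encoders are concatenated on the feature
# axis, exactly like `torch.concat(prompt_embeds_list, dim=-1)` above.
prompt_embeds = torch.concat([hidden_1, hidden_2], dim=-1)

print(prompt_embeds.shape)  # torch.Size([1, 77, 2048]) -> UNet cross-attention context
print(pooled_2.shape)       # torch.Size([1, 1280])     -> becomes `add_text_embeds`
```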
- def compute_embeddings_for_prompts(prompts, text_encoders, tokenizers): - with torch.no_grad(): - prompt_embeds_all, pooled_prompt_embeds_all = encode_prompts(text_encoders, tokenizers, prompts) - add_text_embeds_all = pooled_prompt_embeds_all - - prompt_embeds_all = prompt_embeds_all.to(accelerator.device) - add_text_embeds_all = add_text_embeds_all.to(accelerator.device) - return prompt_embeds_all, add_text_embeds_all - - # Get null conditioning - def compute_null_conditioning(): - null_conditioning_list = [] - for a_tokenizer, a_text_encoder in zip(tokenizers, text_encoders): - null_conditioning_list.append( - a_text_encoder( - tokenize_captions([""], tokenizer=a_tokenizer).to(accelerator.device), - output_hidden_states=True, - ).hidden_states[-2] - ) - return torch.concat(null_conditioning_list, dim=-1) - - null_conditioning = compute_null_conditioning() - - def compute_time_ids(): - crops_coords_top_left = (args.crops_coords_top_left_h, args.crops_coords_top_left_w) - original_size = target_size = (args.resolution, args.resolution) - add_time_ids = list(original_size + crops_coords_top_left + target_size) - add_time_ids = torch.tensor([add_time_ids], dtype=weight_dtype) - return add_time_ids.to(accelerator.device).repeat(args.train_batch_size, 1) - - add_time_ids = compute_time_ids() - - def preprocess_train(examples): - # Preprocess images. - preprocessed_images = preprocess_images(examples) - # Since the original and edited images were concatenated before - # applying the transformations, we need to separate them and reshape - # them accordingly. - original_images, edited_images = preprocessed_images.chunk(2) - original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) - edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) - - # Collate the preprocessed images into the `examples`. - examples["original_pixel_values"] = original_images - examples["edited_pixel_values"] = edited_images - - # Preprocess the captions. 
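
`compute_time_ids` above packs the SDXL size/crop micro-conditioning (original size, crop top-left, target size) into one six-element row that is repeated across the batch. Here is a small sketch of that packing with example values (512x512, no crop) standing in for the script's command-line arguments.

```python
import torch

def make_time_ids(original_size, crop_top_left, target_size, batch_size):
    """Pack SDXL's size/crop micro-conditioning into a (batch, 6) tensor."""
    add_time_ids = list(original_size + crop_top_left + target_size)
    add_time_ids = torch.tensor([add_time_ids], dtype=torch.float32)
    # One identical row per sample in the batch, as in the training script.
    return add_time_ids.repeat(batch_size, 1)

time_ids = make_time_ids(
    original_size=(512, 512),  # example resolution, standing in for args.resolution
    crop_top_left=(0, 0),      # standing in for args.crops_coords_top_left_h/w
    target_size=(512, 512),
    batch_size=4,              # standing in for args.train_batch_size
)
print(time_ids.shape)  # torch.Size([4, 6])
print(time_ids[0])     # tensor([512., 512., 0., 0., 512., 512.])
```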
- captions = list(examples[edit_prompt_column]) - prompt_embeds_all, add_text_embeds_all = compute_embeddings_for_prompts(captions, text_encoders, tokenizers) - examples["prompt_embeds"] = prompt_embeds_all - examples["add_text_embeds"] = add_text_embeds_all - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - original_pixel_values = torch.stack([example["original_pixel_values"] for example in examples]) - original_pixel_values = original_pixel_values.to(memory_format=torch.contiguous_format).float() - edited_pixel_values = torch.stack([example["edited_pixel_values"] for example in examples]) - edited_pixel_values = edited_pixel_values.to(memory_format=torch.contiguous_format).float() - prompt_embeds = torch.concat([example["prompt_embeds"] for example in examples], dim=0) - add_text_embeds = torch.concat([example["add_text_embeds"] for example in examples], dim=0) - return { - "original_pixel_values": original_pixel_values, - "edited_pixel_values": edited_pixel_values, - "prompt_embeds": prompt_embeds, - "add_text_embeds": add_text_embeds, - } - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - if args.use_ema: - ema_unet.to(accelerator.device) - - # Move vae, unet and text_encoder to device and cast to weight_dtype - # The VAE is in float32 to avoid NaN losses. - if args.pretrained_vae_model_name_or_path is not None: - vae.to(accelerator.device, dtype=weight_dtype) - else: - vae.to(accelerator.device, dtype=torch.float32) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("instruct-pix2pix-xl", config=vars(args)) - - # Train! 
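
The bookkeeping above derives the optimizer-step counts from the dataloader length and the gradient-accumulation factor, then recomputes them after `accelerator.prepare`, since the prepared dataloader may have been re-sharded. A worked numeric sketch of that arithmetic with made-up sizes:

```python
import math

# Made-up sizes, standing in for the values the script reads from its arguments.
num_batches_per_process = 1000   # len(train_dataloader) on one process
gradient_accumulation_steps = 4
train_batch_size = 8             # per device
num_processes = 2
num_train_epochs = 3

# One optimizer update happens every `gradient_accumulation_steps` batches.
num_update_steps_per_epoch = math.ceil(num_batches_per_process / gradient_accumulation_steps)

# If --max_train_steps is not given, it is derived from the epoch count.
max_train_steps = num_train_epochs * num_update_steps_per_epoch

# The "total train batch size" that gets logged before training starts.
total_batch_size = train_batch_size * num_processes * gradient_accumulation_steps

print(num_update_steps_per_epoch)  # 250
print(max_train_steps)             # 750
print(total_batch_size)            # 64
```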
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # We want to learn the denoising process w.r.t the edited images which - # are conditioned on the original image (which was edited) and the edit instruction. - # So, first, convert images to latent space. - if args.pretrained_vae_model_name_or_path is not None: - edited_pixel_values = batch["edited_pixel_values"].to(dtype=weight_dtype) - else: - edited_pixel_values = batch["edited_pixel_values"] - latents = vae.encode(edited_pixel_values).latent_dist.sample() - latents = latents * vae.config.scaling_factor - if args.pretrained_vae_model_name_or_path is None: - latents = latents.to(weight_dtype) - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # SDXL additional inputs - encoder_hidden_states = batch["prompt_embeds"] - add_text_embeds = batch["add_text_embeds"] - - # Get the additional image embedding for conditioning. 
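
The training loop above samples one random timestep per image and calls `noise_scheduler.add_noise` to produce the noisy latents; for a standard DDPM schedule that corresponds to `sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise`. Below is a small sketch with a default `DDPMScheduler` and dummy latents, assuming only that `diffusers` is installed (no pretrained weights are needed).

```python
import torch
from diffusers import DDPMScheduler

# A default 1000-step DDPM schedule; the training script instead loads the
# scheduler config from the pretrained pipeline.
scheduler = DDPMScheduler()

latents = torch.randn(2, 4, 64, 64)   # dummy VAE latents (batch of 2)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,)).long()

noisy = scheduler.add_noise(latents, noise, timesteps)

# add_noise applies sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, so
# the result can be reproduced by hand from alphas_cumprod.
alpha_bar = scheduler.alphas_cumprod[timesteps].view(-1, 1, 1, 1)
manual = alpha_bar.sqrt() * latents + (1 - alpha_bar).sqrt() * noise
print(torch.allclose(noisy, manual, atol=1e-5))  # True
```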
- # Instead of getting a diagonal Gaussian here, we simply take the mode. - if args.pretrained_vae_model_name_or_path is not None: - original_pixel_values = batch["original_pixel_values"].to(dtype=weight_dtype) - else: - original_pixel_values = batch["original_pixel_values"] - original_image_embeds = vae.encode(original_pixel_values).latent_dist.sample() - if args.pretrained_vae_model_name_or_path is None: - original_image_embeds = original_image_embeds.to(weight_dtype) - - # Conditioning dropout to support classifier-free guidance during inference. For more details - # check out the section 3.2.1 of the original paper https://arxiv.org/abs/2211.09800. - if args.conditioning_dropout_prob is not None: - random_p = torch.rand(bsz, device=latents.device, generator=generator) - # Sample masks for the edit prompts. - prompt_mask = random_p < 2 * args.conditioning_dropout_prob - prompt_mask = prompt_mask.reshape(bsz, 1, 1) - # Final text conditioning. - encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) - - # Sample masks for the original images. - image_mask_dtype = original_image_embeds.dtype - image_mask = 1 - ( - (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) - * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) - ) - image_mask = image_mask.reshape(bsz, 1, 1, 1) - # Final image conditioning. - original_image_embeds = image_mask * original_image_embeds - - # Concatenate the `original_image_embeds` with the `noisy_latents`. - concatenated_noisy_latents = torch.cat([noisy_latents, original_image_embeds], dim=1) - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - # Predict the noise residual and compute loss - added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} - - model_pred = unet( - concatenated_noisy_latents, timesteps, encoder_hidden_states, added_cond_kwargs=added_cond_kwargs - ).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). 
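
The conditioning-dropout block above uses a single uniform draw `random_p` per sample to decide what to drop for classifier-free guidance: the edit prompt is replaced by the null conditioning when `random_p < 2p`, and the original-image latents are zeroed when `p <= random_p < 3p`, so "text only", "both", and "image only" are each dropped with probability `p`. Here is a short sketch that evaluates those masks on a few fixed draws to make the intervals visible; `p = 0.05` is an example value, not read from the script.

```python
import torch

p = 0.05  # example value for conditioning_dropout_prob
random_p = torch.tensor([0.02, 0.07, 0.12, 0.80])  # one uniform draw per sample

# Same interval logic as the loop above: the prompt embedding is swapped for
# the null (empty-prompt) conditioning when random_p < 2p, and the image
# latents are multiplied by zero when p <= random_p < 3p.
prompt_dropped = random_p < 2 * p
image_dropped = (random_p >= p) & (random_p < 3 * p)

for rp, pd, idrop in zip(random_p.tolist(), prompt_dropped.tolist(), image_dropped.tolist()):
    print(f"random_p={rp:.2f}  prompt_dropped={pd}  image_dropped={idrop}")

# [0, p)    -> only the prompt is dropped
# [p, 2p)   -> both prompt and image are dropped
# [2p, 3p)  -> only the image is dropped
# [3p, 1)   -> fully conditioned sample
```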
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_unet.step(unet.parameters()) - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - ### BEGIN: Perform validation every `validation_epochs` steps - if global_step % args.validation_steps == 0 or global_step == 1: - if (args.val_image_url_or_path is not None) and (args.validation_prompt is not None): - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - - # create pipeline - if args.use_ema: - # Store the UNet parameters temporarily and load the EMA parameters to perform inference. - ema_unet.store(unet.parameters()) - ema_unet.copy_to(unet.parameters()) - - # The models need unwrapping because for compatibility in distributed training mode. 
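
The checkpointing branch above keeps at most `checkpoints_total_limit` saved states by sorting the `checkpoint-<step>` directories numerically and deleting the oldest ones before writing a new checkpoint. A filesystem-free sketch of that rotation, using plain directory names instead of `os.listdir` and `shutil.rmtree`:

```python
def rotate_checkpoints(existing, total_limit):
    """Return (to_remove, to_keep) so that, after one more checkpoint is saved,
    at most `total_limit` checkpoint directories remain, oldest removed first."""
    checkpoints = [d for d in existing if d.startswith("checkpoint")]
    checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
    if len(checkpoints) < total_limit:
        return [], checkpoints
    num_to_remove = len(checkpoints) - total_limit + 1
    return checkpoints[:num_to_remove], checkpoints[num_to_remove:]

existing = ["checkpoint-500", "checkpoint-1500", "checkpoint-1000", "logs"]
to_remove, to_keep = rotate_checkpoints(existing, total_limit=2)
print(to_remove)  # ['checkpoint-500', 'checkpoint-1000']
print(to_keep)    # ['checkpoint-1500']
```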
- pipeline = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=text_encoder_1, - text_encoder_2=text_encoder_2, - tokenizer=tokenizer_1, - tokenizer_2=tokenizer_2, - vae=vae, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - # Save validation images - val_save_dir = os.path.join(args.output_dir, "validation_images") - if not os.path.exists(val_save_dir): - os.makedirs(val_save_dir) - - original_image = ( - lambda image_url_or_path: load_image(image_url_or_path) - if urlparse(image_url_or_path).scheme - else Image.open(image_url_or_path).convert("RGB") - )(args.val_image_url_or_path) - with torch.autocast( - str(accelerator.device).replace(":0", ""), enabled=accelerator.mixed_precision == "fp16" - ): - edited_images = [] - for val_img_idx in range(args.num_validation_images): - a_val_img = pipeline( - args.validation_prompt, - image=original_image, - num_inference_steps=20, - image_guidance_scale=1.5, - guidance_scale=7, - generator=generator, - ).images[0] - edited_images.append(a_val_img) - a_val_img.save(os.path.join(val_save_dir, f"step_{global_step}_val_img_{val_img_idx}.png")) - - for tracker in accelerator.trackers: - if tracker.name == "wandb": - wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES) - for edited_image in edited_images: - wandb_table.add_data( - wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt - ) - tracker.log({"validation": wandb_table}) - if args.use_ema: - # Switch back to the original UNet parameters. - ema_unet.restore(unet.parameters()) - - del pipeline - torch.cuda.empty_cache() - ### END: Perform validation every `validation_epochs` steps - - if global_step >= args.max_train_steps: - break - - # Create the pipeline using the trained modules and save it. 
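
The validation block above resolves `--val_image_url_or_path` through an immediately invoked lambda: if `urlparse` finds a URL scheme it defers to `load_image`, otherwise it opens a local file with PIL. The same dispatch is written below as a named helper, purely as a readability sketch on top of the dependencies the script already uses.

```python
from urllib.parse import urlparse

from diffusers.utils import load_image
from PIL import Image


def open_validation_image(image_url_or_path):
    """Fetch a remote image via diffusers' load_image, or open a local file with PIL."""
    if urlparse(image_url_or_path).scheme:  # "http", "https", ... -> treat as remote
        return load_image(image_url_or_path)
    return Image.open(image_url_or_path).convert("RGB")

# open_validation_image("https://example.com/room.png")  # hypothetical URL, downloaded
# open_validation_image("./room.png")                    # hypothetical path, read from disk
```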
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - if args.use_ema: - ema_unet.copy_to(unet.parameters()) - - pipeline = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=text_encoder_1, - text_encoder_2=text_encoder_2, - tokenizer=tokenizer_1, - tokenizer_2=tokenizer_2, - vae=vae, - unet=unet, - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - if args.validation_prompt is not None: - edited_images = [] - pipeline = pipeline.to(accelerator.device) - with torch.autocast(str(accelerator.device).replace(":0", "")): - for _ in range(args.num_validation_images): - edited_images.append( - pipeline( - args.validation_prompt, - image=original_image, - num_inference_steps=20, - image_guidance_scale=1.5, - guidance_scale=7, - generator=generator, - ).images[0] - ) - - for tracker in accelerator.trackers: - if tracker.name == "wandb": - wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES) - for edited_image in edited_images: - wandb_table.add_data( - wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt - ) - tracker.log({"test": wandb_table}) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py deleted file mode 100644 index 87709d5f616cdfb195ed4527e4b630a86136c29c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -# limitations under the License. - -# NOTE: This file is deprecated and will be removed in a future version. -# It only exists so that temporarely `from diffusers.pipelines import DiffusionPipeline` works - -from .pipelines import DiffusionPipeline, ImagePipelineOutput # noqa: F401 -from .utils import deprecate - - -deprecate( - "pipelines_utils", - "0.22.0", - "Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. 
Please import from diffusers.pipelines.pipeline_utils instead.", - standard_warn=False, - stacklevel=3, -) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py deleted file mode 100644 index 55097c5b242da66c9735c0b45cd84beefab487b1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py +++ /dev/null @@ -1,16 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -custom_hooks = [dict(type='NumClassCheckHook')] - -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md deleted file mode 100644 index ffdc42dcdfddbaa946f81cba00e73b5573aa19fc..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md +++ /dev/null @@ -1,20 +0,0 @@ -# Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training - -## Introduction - -[ALGORITHM] - -``` -@article{DynamicRCNN, - author = {Hongkai Zhang and Hong Chang and Bingpeng Ma and Naiyan Wang and Xilin Chen}, - title = {Dynamic {R-CNN}: Towards High Quality Object Detection via Dynamic Training}, - journal = {arXiv preprint arXiv:2004.06002}, - year = {2020} -} -``` - -## Results and Models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | pytorch | 1x | 3.8 | | 38.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x/dynamic_rcnn_r50_fpn_1x-62a3f276.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x/dynamic_rcnn_r50_fpn_1x_20200618_095048.log.json) | diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py deleted file mode 100644 index 3c9ea27617c85c54309ac454fff253a6d0462735..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = 'mask_rcnn_r50_fpn_crop640_50e_coco.py' - -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - neck=dict( - type='FPG', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - inter_channels=256, - num_outs=5, - stack_times=9, - paths=['bu'] * 9, - same_down_trans=None, - same_up_trans=dict( - type='conv', - kernel_size=3, - stride=2, - padding=1, - norm_cfg=norm_cfg, - inplace=False, - order=('act', 'conv', 'norm')), - across_lateral_trans=dict( - type='conv', - kernel_size=1, - norm_cfg=norm_cfg, - inplace=False, - order=('act', 'conv', 'norm')), - across_down_trans=dict( - type='interpolation_conv', - mode='nearest', - kernel_size=3, - norm_cfg=norm_cfg, - order=('act', 'conv', 'norm'), - inplace=False), - across_up_trans=None, - across_skip_trans=dict( - type='conv', - kernel_size=1, - 
norm_cfg=norm_cfg, - inplace=False, - order=('act', 'conv', 'norm')), - output_trans=dict( - type='last_conv', - kernel_size=3, - order=('act', 'conv', 'norm'), - inplace=False), - norm_cfg=norm_cfg, - skip_inds=[(0, 1, 2, 3), (0, 1, 2), (0, 1), (0, ), ()])) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py deleted file mode 100644 index 497267b6b50b3c160a4f8807230d4f986cf8eb3f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnet50_gn_ws', - backbone=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg), - neck=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py deleted file mode 100644 index 86c5b13343b637ce218eed231240195a6768c5d1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py +++ /dev/null @@ -1,41 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe')) -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py deleted file mode 100644 index 636f3f67c7c246a60512e2b70d333320fbb85feb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py +++ /dev/null @@ -1,22 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - 
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained=None, - backbone=dict( - frozen_stages=-1, zero_init_residual=False, norm_cfg=norm_cfg), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg))) -# optimizer -optimizer = dict(paramwise_cfg=dict(norm_decay_mult=0)) -optimizer_config = dict(_delete_=True, grad_clip=None) -# learning policy -lr_config = dict(warmup_ratio=0.1, step=[65, 71]) -runner = dict(type='EpochBasedRunner', max_epochs=73) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py b/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py deleted file mode 100644 index 6095eeb6767e005a155ee72057b3537021b09f31..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py +++ /dev/null @@ -1,100 +0,0 @@ -from share import * -import config - -import cv2 -import einops -import gradio as gr -import numpy as np -import torch -import random - -from pytorch_lightning import seed_everything -from annotator.util import resize_image, HWC3 -from annotator.mlsd import MLSDdetector -from cldm.model import create_model, load_state_dict -from cldm.ddim_hacked import DDIMSampler - - -apply_mlsd = MLSDdetector() - -model = create_model('./models/cldm_v15.yaml').cpu() -model.load_state_dict(load_state_dict('./models/control_sd15_mlsd.pth', location='cuda')) -model = model.cuda() -ddim_sampler = DDIMSampler(model) - - -def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, value_threshold, distance_threshold): - with torch.no_grad(): - input_image = HWC3(input_image) - detected_map = apply_mlsd(resize_image(input_image, detect_resolution), value_threshold, distance_threshold) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - model.low_vram_shift(is_diffusing=False) - - cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]} - un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]} - shape = (4, H // 8, W // 8) - - if config.save_memory: - model.low_vram_shift(is_diffusing=True) - - model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13) # Magic number. IDK why. 
Perhaps because 0.825**12<0.01 but 0.826**12>0.01 - samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples, - shape, cond, verbose=False, eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - model.low_vram_shift(is_diffusing=False) - - x_samples = model.decode_first_stage(samples) - x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - cv2.dilate(detected_map, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)] + results - - -block = gr.Blocks().queue() -with block: - with gr.Row(): - gr.Markdown("## Control Stable Diffusion with Hough Line Maps") - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type="numpy") - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1) - image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64) - strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01) - guess_mode = gr.Checkbox(label='Guess Mode', value=False) - detect_resolution = gr.Slider(label="Hough Resolution", minimum=128, maximum=1024, value=512, step=1) - value_threshold = gr.Slider(label="Hough value threshold (MLSD)", minimum=0.01, maximum=2.0, value=0.1, step=0.01) - distance_threshold = gr.Slider(label="Hough distance threshold (MLSD)", minimum=0.01, maximum=20.0, value=0.1, step=0.01) - ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1) - scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True) - eta = gr.Number(label="eta (DDIM)", value=0.0) - a_prompt = gr.Textbox(label="Added Prompt", value='best quality, extremely detailed') - n_prompt = gr.Textbox(label="Negative Prompt", - value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality') - with gr.Column(): - result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto') - ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, value_threshold, distance_threshold] - run_button.click(fn=process, inputs=ips, outputs=[result_gallery]) - - -block.launch(server_name='0.0.0.0') diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh b/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! 
pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/Antoine245/bot/app.py b/spaces/Antoine245/bot/app.py deleted file mode 100644 index 678304d94eb1851ecdf757cc051041701df11e44..0000000000000000000000000000000000000000 --- a/spaces/Antoine245/bot/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import gradio as gr -import os -import time -import google.generativeai as palm - -palm.configure(api_key=os.environ.get("palm_key")) - -defaults = { - 'model': 'models/chat-bison-001', - 'temperature': 0.25, - 'candidate_count': 1, - 'top_k': 40, - 'top_p': 0.95, -} - -context = "Your IT assistant" - -examples = [ - [ - "Hey my computer is broken", - "Hey, what is the issue with your computer?" - ] -] - -history = [''] - -with gr.Blocks(theme=gr.themes.Soft()) as demo: - chatbot = gr.Chatbot() - msg = gr.Textbox() - btn = gr.Button("Submit", variant="primary") - clear = gr.Button("Clear") - - def user(user_message, history): - history.append([user_message, None]) - return gr.update(value=""), history - - def bot(history): - try: - bot_message = palm.chat( - context=context, - examples=examples, - messages=[h[0] for h in history] - ) - - history[-1][1] = "" - for character in bot_message.last: - history[-1][1] += character - time.sleep(0.005) - except Exception as e: - # Handle the exception here - print("Error occurred:", str(e)) - # You can customize the error handling as per your requirements - # For example, return an error message to the user - - history[-1][1] = "Incorrect input please retry with a longer sentence in english" - - return history - - response = msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - response = btn.click(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - response.then(lambda: gr.update(interactive=True), None, [msg], queue=False) - clear.click(lambda: None, None, chatbot, queue=False) - -demo.queue() -demo.launch(debug=True) diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/BAAI/AltDiffusion/share_btn.py b/spaces/BAAI/AltDiffusion/share_btn.py deleted file mode 100644 index e97a8ec6139e96ce03f018ba9a39670a948c76a7..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
    -${htmlImgs.join(`\n`)} -
    `; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/BAAI/bilingual_stable_diffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/BLACKHOST/timer/tm.py b/spaces/BLACKHOST/timer/tm.py deleted file mode 100644 index b6af202932a051df32c729297244559a373904bb..0000000000000000000000000000000000000000 --- a/spaces/BLACKHOST/timer/tm.py +++ /dev/null @@ -1,6 +0,0 @@ -from time import sleep -time = 1000 #can change -while time != 0: - print(time) - time -= 1 #can change - sleep(0.1) #can change \ No newline at end of file diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py deleted file mode 100644 index c2febd445728479d4cd9aacdb2572cb1f1af04db..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py +++ /dev/null @@ -1,136 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from taming.modules.losses.lpips import LPIPS -from taming.modules.discriminator.model import NLayerDiscriminator, weights_init - - -class DummyLoss(nn.Module): - def __init__(self): - super().__init__() - - -def adopt_weight(weight, global_step, threshold=0, value=0.): - if global_step < threshold: - weight = value - return weight - - -def hinge_d_loss(logits_real, logits_fake): - loss_real = torch.mean(F.relu(1. - logits_real)) - loss_fake = torch.mean(F.relu(1. 
+ logits_fake)) - d_loss = 0.5 * (loss_real + loss_fake) - return d_loss - - -def vanilla_d_loss(logits_real, logits_fake): - d_loss = 0.5 * ( - torch.mean(torch.nn.functional.softplus(-logits_real)) + - torch.mean(torch.nn.functional.softplus(logits_fake))) - return d_loss - - -class VQLPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_ndf=64, disc_loss="hinge"): - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.codebook_weight = codebook_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm, - ndf=disc_ndf - ).apply(weights_init) - self.discriminator_iter_start = disc_start - if disc_loss == "hinge": - self.disc_loss = hinge_d_loss - elif disc_loss == "vanilla": - self.disc_loss = vanilla_d_loss - else: - raise ValueError(f"Unknown GAN loss '{disc_loss}'.") - print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.") - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx, - global_step, last_layer=None, cond=None, split="train"): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - else: - p_loss = torch.tensor([0.0]) - - nll_loss = rec_loss - #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - nll_loss = torch.mean(nll_loss) - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean() - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/quant_loss".format(split): codebook_loss.detach().mean(), - "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - 
"{}/p_loss".format(split): p_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py deleted file mode 100644 index 667fae352f5833218d620a963a0ced3f8fbef7b9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py +++ /dev/null @@ -1,1112 +0,0 @@ -# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -"""Response parsers for the various protocol types. - -The module contains classes that can take an HTTP response, and given -an output shape, parse the response into a dict according to the -rules in the output shape. - -There are many similarities amongst the different protocols with regard -to response parsing, and the code is structured in a way to avoid -code duplication when possible. The diagram below is a diagram -showing the inheritance hierarchy of the response classes. - -:: - - - - +--------------+ - |ResponseParser| - +--------------+ - ^ ^ ^ - +--------------------+ | +-------------------+ - | | | - +----------+----------+ +------+-------+ +-------+------+ - |BaseXMLResponseParser| |BaseRestParser| |BaseJSONParser| - +---------------------+ +--------------+ +--------------+ - ^ ^ ^ ^ ^ ^ - | | | | | | - | | | | | | - | ++----------+-+ +-+-----------++ | - | |RestXMLParser| |RestJSONParser| | - +-----+-----+ +-------------+ +--------------+ +----+-----+ - |QueryParser| |JSONParser| - +-----------+ +----------+ - - -The diagram above shows that there is a base class, ``ResponseParser`` that -contains logic that is similar amongst all the different protocols (``query``, -``json``, ``rest-json``, ``rest-xml``). Amongst the various services there -is shared logic that can be grouped several ways: - -* The ``query`` and ``rest-xml`` both have XML bodies that are parsed in the - same way. -* The ``json`` and ``rest-json`` protocols both have JSON bodies that are - parsed in the same way. 
-* The ``rest-json`` and ``rest-xml`` protocols have additional attributes - besides body parameters that are parsed the same (headers, query string, - status code). - -This is reflected in the class diagram above. The ``BaseXMLResponseParser`` -and the BaseJSONParser contain logic for parsing the XML/JSON body, -and the BaseRestParser contains logic for parsing out attributes that -come from other parts of the HTTP response. Classes like the -``RestXMLParser`` inherit from the ``BaseXMLResponseParser`` to get the -XML body parsing logic and the ``BaseRestParser`` to get the HTTP -header/status code/query string parsing. - -Additionally, there are event stream parsers that are used by the other parsers -to wrap streaming bodies that represent a stream of events. The -BaseEventStreamParser extends from ResponseParser and defines the logic for -parsing values from the headers and payload of a message from the underlying -binary encoding protocol. Currently, event streams support parsing bodies -encoded as JSON and XML through the following hierarchy. - - - +--------------+ - |ResponseParser| - +--------------+ - ^ ^ ^ - +--------------------+ | +------------------+ - | | | - +----------+----------+ +----------+----------+ +-------+------+ - |BaseXMLResponseParser| |BaseEventStreamParser| |BaseJSONParser| - +---------------------+ +---------------------+ +--------------+ - ^ ^ ^ ^ - | | | | - | | | | - +-+----------------+-+ +-+-----------------+-+ - |EventStreamXMLParser| |EventStreamJSONParser| - +--------------------+ +---------------------+ - -Return Values -============= - -Each call to ``parse()`` returns a dict has this form:: - - Standard Response - - { - "ResponseMetadata": {"RequestId": } - - } - - Error response - - { - "ResponseMetadata": {"RequestId": } - "Error": { - "Code": , - "Message": , - "Type": , - - } - } - -""" -import base64 -import http.client -import json -import logging -import re - -from botocore.compat import ETree, XMLParseError -from botocore.eventstream import EventStream, NoInitialResponseError -from botocore.utils import ( - is_json_value_header, - lowercase_dict, - merge_dicts, - parse_timestamp, -) - -LOG = logging.getLogger(__name__) - -DEFAULT_TIMESTAMP_PARSER = parse_timestamp - - -class ResponseParserFactory: - def __init__(self): - self._defaults = {} - - def set_parser_defaults(self, **kwargs): - """Set default arguments when a parser instance is created. - - You can specify any kwargs that are allowed by a ResponseParser - class. There are currently two arguments: - - * timestamp_parser - A callable that can parse a timestamp string - * blob_parser - A callable that can parse a blob type - - """ - self._defaults.update(kwargs) - - def create_parser(self, protocol_name): - parser_cls = PROTOCOL_PARSERS[protocol_name] - return parser_cls(**self._defaults) - - -def create_parser(protocol): - return ResponseParserFactory().create_parser(protocol) - - -def _text_content(func): - # This decorator hides the difference between - # an XML node with text or a plain string. It's used - # to ensure that scalar processing operates only on text - # strings, which allows the same scalar handlers to be used - # for XML nodes from the body and HTTP headers. - def _get_text_content(self, shape, node_or_string): - if hasattr(node_or_string, 'text'): - text = node_or_string.text - if text is None: - # If an XML node is empty , - # we want to parse that as an empty string, - # not as a null/None value. 
- text = '' - else: - text = node_or_string - return func(self, shape, text) - - return _get_text_content - - -class ResponseParserError(Exception): - pass - - -class ResponseParser: - """Base class for response parsing. - - This class represents the interface that all ResponseParsers for the - various protocols must implement. - - This class will take an HTTP response and a model shape and parse the - HTTP response into a dictionary. - - There is a single public method exposed: ``parse``. See the ``parse`` - docstring for more info. - - """ - - DEFAULT_ENCODING = 'utf-8' - EVENT_STREAM_PARSER_CLS = None - - def __init__(self, timestamp_parser=None, blob_parser=None): - if timestamp_parser is None: - timestamp_parser = DEFAULT_TIMESTAMP_PARSER - self._timestamp_parser = timestamp_parser - if blob_parser is None: - blob_parser = self._default_blob_parser - self._blob_parser = blob_parser - self._event_stream_parser = None - if self.EVENT_STREAM_PARSER_CLS is not None: - self._event_stream_parser = self.EVENT_STREAM_PARSER_CLS( - timestamp_parser, blob_parser - ) - - def _default_blob_parser(self, value): - # Blobs are always returned as bytes type (this matters on python3). - # We don't decode this to a str because it's entirely possible that the - # blob contains binary data that actually can't be decoded. - return base64.b64decode(value) - - def parse(self, response, shape): - """Parse the HTTP response given a shape. - - :param response: The HTTP response dictionary. This is a dictionary - that represents the HTTP request. The dictionary must have the - following keys, ``body``, ``headers``, and ``status_code``. - - :param shape: The model shape describing the expected output. - :return: Returns a dictionary representing the parsed response - described by the model. In addition to the shape described from - the model, each response will also have a ``ResponseMetadata`` - which contains metadata about the response, which contains at least - two keys containing ``RequestId`` and ``HTTPStatusCode``. Some - responses may populate additional keys, but ``RequestId`` will - always be present. - - """ - LOG.debug('Response headers: %r', response['headers']) - LOG.debug('Response body:\n%r', response['body']) - if response['status_code'] >= 301: - if self._is_generic_error_response(response): - parsed = self._do_generic_error_parse(response) - elif self._is_modeled_error_shape(shape): - parsed = self._do_modeled_error_parse(response, shape) - # We don't want to decorate the modeled fields with metadata - return parsed - else: - parsed = self._do_error_parse(response, shape) - else: - parsed = self._do_parse(response, shape) - - # We don't want to decorate event stream responses with metadata - if shape and shape.serialization.get('eventstream'): - return parsed - - # Add ResponseMetadata if it doesn't exist and inject the HTTP - # status code and headers from the response. - if isinstance(parsed, dict): - response_metadata = parsed.get('ResponseMetadata', {}) - response_metadata['HTTPStatusCode'] = response['status_code'] - # Ensure that the http header keys are all lower cased. Older - # versions of urllib3 (< 1.11) would unintentionally do this for us - # (see urllib3#633). We need to do this conversion manually now. 
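
The metadata handling here copies the HTTP headers into `ResponseMetadata['HTTPHeaders']` with every key lowercased, so callers can look headers up case-insensitively regardless of the urllib3 version. A small sketch of what that normalization produces, using the same `lowercase_dict` helper this module imports from `botocore.utils`:

```python
from botocore.utils import lowercase_dict

headers = {
    "x-amzn-RequestId": "abcd-1234",
    "Content-Type": "application/x-amz-json-1.1",
}

# Keys are lowercased, values are left untouched; this is what ends up under
# ResponseMetadata['HTTPHeaders'].
print(lowercase_dict(headers))
# {'x-amzn-requestid': 'abcd-1234', 'content-type': 'application/x-amz-json-1.1'}
```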
- headers = response['headers'] - response_metadata['HTTPHeaders'] = lowercase_dict(headers) - parsed['ResponseMetadata'] = response_metadata - self._add_checksum_response_metadata(response, response_metadata) - return parsed - - def _add_checksum_response_metadata(self, response, response_metadata): - checksum_context = response.get('context', {}).get('checksum', {}) - algorithm = checksum_context.get('response_algorithm') - if algorithm: - response_metadata['ChecksumAlgorithm'] = algorithm - - def _is_modeled_error_shape(self, shape): - return shape is not None and shape.metadata.get('exception', False) - - def _is_generic_error_response(self, response): - # There are times when a service will respond with a generic - # error response such as: - # 'Http/1.1 Service Unavailable' - # - # This can also happen if you're going through a proxy. - # In this case the protocol specific _do_error_parse will either - # fail to parse the response (in the best case) or silently succeed - # and treat the HTML above as an XML response and return - # non sensical parsed data. - # To prevent this case from happening we first need to check - # whether or not this response looks like the generic response. - if response['status_code'] >= 500: - if 'body' not in response or response['body'] is None: - return True - - body = response['body'].strip() - return body.startswith(b'') or not body - - def _do_generic_error_parse(self, response): - # There's not really much we can do when we get a generic - # html response. - LOG.debug( - "Received a non protocol specific error response from the " - "service, unable to populate error code and message." - ) - return { - 'Error': { - 'Code': str(response['status_code']), - 'Message': http.client.responses.get( - response['status_code'], '' - ), - }, - 'ResponseMetadata': {}, - } - - def _do_parse(self, response, shape): - raise NotImplementedError("%s._do_parse" % self.__class__.__name__) - - def _do_error_parse(self, response, shape): - raise NotImplementedError(f"{self.__class__.__name__}._do_error_parse") - - def _do_modeled_error_parse(self, response, shape, parsed): - raise NotImplementedError( - f"{self.__class__.__name__}._do_modeled_error_parse" - ) - - def _parse_shape(self, shape, node): - handler = getattr( - self, f'_handle_{shape.type_name}', self._default_handle - ) - return handler(shape, node) - - def _handle_list(self, shape, node): - # Enough implementations share list serialization that it's moved - # up here in the base class. - parsed = [] - member_shape = shape.member - for item in node: - parsed.append(self._parse_shape(member_shape, item)) - return parsed - - def _default_handle(self, shape, value): - return value - - def _create_event_stream(self, response, shape): - parser = self._event_stream_parser - name = response['context'].get('operation_name') - return EventStream(response['body'], shape, parser, name) - - def _get_first_key(self, value): - return list(value)[0] - - def _has_unknown_tagged_union_member(self, shape, value): - if shape.is_tagged_union: - if len(value) != 1: - error_msg = ( - "Invalid service response: %s must have one and only " - "one member set." - ) - raise ResponseParserError(error_msg % shape.name) - tag = self._get_first_key(value) - if tag not in shape.members: - msg = ( - "Received a tagged union response with member " - "unknown to client: %s. Please upgrade SDK for full " - "response support." 
- ) - LOG.info(msg % tag) - return True - return False - - def _handle_unknown_tagged_union_member(self, tag): - return {'SDK_UNKNOWN_MEMBER': {'name': tag}} - - -class BaseXMLResponseParser(ResponseParser): - def __init__(self, timestamp_parser=None, blob_parser=None): - super().__init__(timestamp_parser, blob_parser) - self._namespace_re = re.compile('{.*}') - - def _handle_map(self, shape, node): - parsed = {} - key_shape = shape.key - value_shape = shape.value - key_location_name = key_shape.serialization.get('name') or 'key' - value_location_name = value_shape.serialization.get('name') or 'value' - if shape.serialization.get('flattened') and not isinstance(node, list): - node = [node] - for keyval_node in node: - for single_pair in keyval_node: - # Within each there's a and a - tag_name = self._node_tag(single_pair) - if tag_name == key_location_name: - key_name = self._parse_shape(key_shape, single_pair) - elif tag_name == value_location_name: - val_name = self._parse_shape(value_shape, single_pair) - else: - raise ResponseParserError("Unknown tag: %s" % tag_name) - parsed[key_name] = val_name - return parsed - - def _node_tag(self, node): - return self._namespace_re.sub('', node.tag) - - def _handle_list(self, shape, node): - # When we use _build_name_to_xml_node, repeated elements are aggregated - # into a list. However, we can't tell the difference between a scalar - # value and a single element flattened list. So before calling the - # real _handle_list, we know that "node" should actually be a list if - # it's flattened, and if it's not, then we make it a one element list. - if shape.serialization.get('flattened') and not isinstance(node, list): - node = [node] - return super()._handle_list(shape, node) - - def _handle_structure(self, shape, node): - parsed = {} - members = shape.members - if shape.metadata.get('exception', False): - node = self._get_error_root(node) - xml_dict = self._build_name_to_xml_node(node) - if self._has_unknown_tagged_union_member(shape, xml_dict): - tag = self._get_first_key(xml_dict) - return self._handle_unknown_tagged_union_member(tag) - for member_name in members: - member_shape = members[member_name] - if ( - 'location' in member_shape.serialization - or member_shape.serialization.get('eventheader') - ): - # All members with locations have already been handled, - # so we don't need to parse these members. - continue - xml_name = self._member_key_name(member_shape, member_name) - member_node = xml_dict.get(xml_name) - if member_node is not None: - parsed[member_name] = self._parse_shape( - member_shape, member_node - ) - elif member_shape.serialization.get('xmlAttribute'): - attribs = {} - location_name = member_shape.serialization['name'] - for key, value in node.attrib.items(): - new_key = self._namespace_re.sub( - location_name.split(':')[0] + ':', key - ) - attribs[new_key] = value - if location_name in attribs: - parsed[member_name] = attribs[location_name] - return parsed - - def _get_error_root(self, original_root): - if self._node_tag(original_root) == 'ErrorResponse': - for child in original_root: - if self._node_tag(child) == 'Error': - return child - return original_root - - def _member_key_name(self, shape, member_name): - # This method is needed because we have to special case flattened list - # with a serialization name. If this is the case we use the - # locationName from the list's member shape as the key name for the - # surrounding structure. 
- if shape.type_name == 'list' and shape.serialization.get('flattened'): - list_member_serialized_name = shape.member.serialization.get( - 'name' - ) - if list_member_serialized_name is not None: - return list_member_serialized_name - serialized_name = shape.serialization.get('name') - if serialized_name is not None: - return serialized_name - return member_name - - def _build_name_to_xml_node(self, parent_node): - # If the parent node is actually a list. We should not be trying - # to serialize it to a dictionary. Instead, return the first element - # in the list. - if isinstance(parent_node, list): - return self._build_name_to_xml_node(parent_node[0]) - xml_dict = {} - for item in parent_node: - key = self._node_tag(item) - if key in xml_dict: - # If the key already exists, the most natural - # way to handle this is to aggregate repeated - # keys into a single list. - # 12 -> {'foo': [Node(1), Node(2)]} - if isinstance(xml_dict[key], list): - xml_dict[key].append(item) - else: - # Convert from a scalar to a list. - xml_dict[key] = [xml_dict[key], item] - else: - xml_dict[key] = item - return xml_dict - - def _parse_xml_string_to_dom(self, xml_string): - try: - parser = ETree.XMLParser( - target=ETree.TreeBuilder(), encoding=self.DEFAULT_ENCODING - ) - parser.feed(xml_string) - root = parser.close() - except XMLParseError as e: - raise ResponseParserError( - "Unable to parse response (%s), " - "invalid XML received. Further retries may succeed:\n%s" - % (e, xml_string) - ) - return root - - def _replace_nodes(self, parsed): - for key, value in parsed.items(): - if list(value): - sub_dict = self._build_name_to_xml_node(value) - parsed[key] = self._replace_nodes(sub_dict) - else: - parsed[key] = value.text - return parsed - - @_text_content - def _handle_boolean(self, shape, text): - if text == 'true': - return True - else: - return False - - @_text_content - def _handle_float(self, shape, text): - return float(text) - - @_text_content - def _handle_timestamp(self, shape, text): - return self._timestamp_parser(text) - - @_text_content - def _handle_integer(self, shape, text): - return int(text) - - @_text_content - def _handle_string(self, shape, text): - return text - - @_text_content - def _handle_blob(self, shape, text): - return self._blob_parser(text) - - _handle_character = _handle_string - _handle_double = _handle_float - _handle_long = _handle_integer - - -class QueryParser(BaseXMLResponseParser): - def _do_error_parse(self, response, shape): - xml_contents = response['body'] - root = self._parse_xml_string_to_dom(xml_contents) - parsed = self._build_name_to_xml_node(root) - self._replace_nodes(parsed) - # Once we've converted xml->dict, we need to make one or two - # more adjustments to extract nested errors and to be consistent - # with ResponseMetadata for non-error responses: - # 1. {"Errors": {"Error": {...}}} -> {"Error": {...}} - # 2. 
{"RequestId": "id"} -> {"ResponseMetadata": {"RequestId": "id"}} - if 'Errors' in parsed: - parsed.update(parsed.pop('Errors')) - if 'RequestId' in parsed: - parsed['ResponseMetadata'] = {'RequestId': parsed.pop('RequestId')} - return parsed - - def _do_modeled_error_parse(self, response, shape): - return self._parse_body_as_xml(response, shape, inject_metadata=False) - - def _do_parse(self, response, shape): - return self._parse_body_as_xml(response, shape, inject_metadata=True) - - def _parse_body_as_xml(self, response, shape, inject_metadata=True): - xml_contents = response['body'] - root = self._parse_xml_string_to_dom(xml_contents) - parsed = {} - if shape is not None: - start = root - if 'resultWrapper' in shape.serialization: - start = self._find_result_wrapped_shape( - shape.serialization['resultWrapper'], root - ) - parsed = self._parse_shape(shape, start) - if inject_metadata: - self._inject_response_metadata(root, parsed) - return parsed - - def _find_result_wrapped_shape(self, element_name, xml_root_node): - mapping = self._build_name_to_xml_node(xml_root_node) - return mapping[element_name] - - def _inject_response_metadata(self, node, inject_into): - mapping = self._build_name_to_xml_node(node) - child_node = mapping.get('ResponseMetadata') - if child_node is not None: - sub_mapping = self._build_name_to_xml_node(child_node) - for key, value in sub_mapping.items(): - sub_mapping[key] = value.text - inject_into['ResponseMetadata'] = sub_mapping - - -class EC2QueryParser(QueryParser): - def _inject_response_metadata(self, node, inject_into): - mapping = self._build_name_to_xml_node(node) - child_node = mapping.get('requestId') - if child_node is not None: - inject_into['ResponseMetadata'] = {'RequestId': child_node.text} - - def _do_error_parse(self, response, shape): - # EC2 errors look like: - # - # - # - # InvalidInstanceID.Malformed - # Invalid id: "1343124" - # - # - # 12345 - # - # This is different from QueryParser in that it's RequestID, - # not RequestId - original = super()._do_error_parse(response, shape) - if 'RequestID' in original: - original['ResponseMetadata'] = { - 'RequestId': original.pop('RequestID') - } - return original - - def _get_error_root(self, original_root): - for child in original_root: - if self._node_tag(child) == 'Errors': - for errors_child in child: - if self._node_tag(errors_child) == 'Error': - return errors_child - return original_root - - -class BaseJSONParser(ResponseParser): - def _handle_structure(self, shape, value): - final_parsed = {} - if shape.is_document_type: - final_parsed = value - else: - member_shapes = shape.members - if value is None: - # If the comes across the wire as "null" (None in python), - # we should be returning this unchanged, instead of as an - # empty dict. 
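# Editorial sketch (plain dictionaries, made-up values) of the two adjustments
# that QueryParser._do_error_parse applies above after the xml -> dict
# conversion.
parsed = {
    'Errors': {'Error': {'Code': 'InvalidInput', 'Message': 'Bad value'}},
    'RequestId': 'req-123',
}
if 'Errors' in parsed:
    parsed.update(parsed.pop('Errors'))  # {"Errors": {"Error": ...}} -> {"Error": ...}
if 'RequestId' in parsed:
    parsed['ResponseMetadata'] = {'RequestId': parsed.pop('RequestId')}
assert parsed == {
    'Error': {'Code': 'InvalidInput', 'Message': 'Bad value'},
    'ResponseMetadata': {'RequestId': 'req-123'},
}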
- return None - final_parsed = {} - if self._has_unknown_tagged_union_member(shape, value): - tag = self._get_first_key(value) - return self._handle_unknown_tagged_union_member(tag) - for member_name in member_shapes: - member_shape = member_shapes[member_name] - json_name = member_shape.serialization.get('name', member_name) - raw_value = value.get(json_name) - if raw_value is not None: - final_parsed[member_name] = self._parse_shape( - member_shapes[member_name], raw_value - ) - return final_parsed - - def _handle_map(self, shape, value): - parsed = {} - key_shape = shape.key - value_shape = shape.value - for key, value in value.items(): - actual_key = self._parse_shape(key_shape, key) - actual_value = self._parse_shape(value_shape, value) - parsed[actual_key] = actual_value - return parsed - - def _handle_blob(self, shape, value): - return self._blob_parser(value) - - def _handle_timestamp(self, shape, value): - return self._timestamp_parser(value) - - def _do_error_parse(self, response, shape): - body = self._parse_body_as_json(response['body']) - error = {"Error": {"Message": '', "Code": ''}, "ResponseMetadata": {}} - headers = response['headers'] - # Error responses can have slightly different structures for json. - # The basic structure is: - # - # {"__type":"ConnectClientException", - # "message":"The error message."} - - # The error message can either come in the 'message' or 'Message' key - # so we need to check for both. - error['Error']['Message'] = body.get( - 'message', body.get('Message', '') - ) - # if the message did not contain an error code - # include the response status code - response_code = response.get('status_code') - # Error response may contain an x-amzn-query-error header for json - # we need to fetch the error code from this header in that case - query_error = headers.get('x-amzn-query-error', '') - query_error_components = query_error.split(';') - code = None - if len(query_error_components) == 2 and query_error_components[0]: - code = query_error_components[0] - error['Error']['Type'] = query_error_components[1] - if code is None: - code = body.get('__type', response_code and str(response_code)) - if code is not None: - # code has a couple forms as well: - # * "com.aws.dynamodb.vAPI#ProvisionedThroughputExceededException" - # * "ResourceNotFoundException" - if '#' in code: - code = code.rsplit('#', 1)[1] - error['Error']['Code'] = code - self._inject_response_metadata(error, response['headers']) - return error - - def _inject_response_metadata(self, parsed, headers): - if 'x-amzn-requestid' in headers: - parsed.setdefault('ResponseMetadata', {})['RequestId'] = headers[ - 'x-amzn-requestid' - ] - - def _parse_body_as_json(self, body_contents): - if not body_contents: - return {} - body = body_contents.decode(self.DEFAULT_ENCODING) - try: - original_parsed = json.loads(body) - return original_parsed - except ValueError: - # if the body cannot be parsed, include - # the literal string as the message - return {'message': body} - - -class BaseEventStreamParser(ResponseParser): - def _do_parse(self, response, shape): - final_parsed = {} - if shape.serialization.get('eventstream'): - event_type = response['headers'].get(':event-type') - event_shape = shape.members.get(event_type) - if event_shape: - final_parsed[event_type] = self._do_parse( - response, event_shape - ) - else: - self._parse_non_payload_attrs( - response, shape, shape.members, final_parsed - ) - self._parse_payload(response, shape, shape.members, final_parsed) - return final_parsed - - def 
_do_error_parse(self, response, shape): - exception_type = response['headers'].get(':exception-type') - exception_shape = shape.members.get(exception_type) - if exception_shape is not None: - original_parsed = self._initial_body_parse(response['body']) - body = self._parse_shape(exception_shape, original_parsed) - error = { - 'Error': { - 'Code': exception_type, - 'Message': body.get('Message', body.get('message', '')), - } - } - else: - error = { - 'Error': { - 'Code': response['headers'].get(':error-code', ''), - 'Message': response['headers'].get(':error-message', ''), - } - } - return error - - def _parse_payload(self, response, shape, member_shapes, final_parsed): - if shape.serialization.get('event'): - for name in member_shapes: - member_shape = member_shapes[name] - if member_shape.serialization.get('eventpayload'): - body = response['body'] - if member_shape.type_name == 'blob': - parsed_body = body - elif member_shape.type_name == 'string': - parsed_body = body.decode(self.DEFAULT_ENCODING) - else: - raw_parse = self._initial_body_parse(body) - parsed_body = self._parse_shape( - member_shape, raw_parse - ) - final_parsed[name] = parsed_body - return - # If we didn't find an explicit payload, use the current shape - original_parsed = self._initial_body_parse(response['body']) - body_parsed = self._parse_shape(shape, original_parsed) - final_parsed.update(body_parsed) - - def _parse_non_payload_attrs( - self, response, shape, member_shapes, final_parsed - ): - headers = response['headers'] - for name in member_shapes: - member_shape = member_shapes[name] - if member_shape.serialization.get('eventheader'): - if name in headers: - value = headers[name] - if member_shape.type_name == 'timestamp': - # Event stream timestamps are an in milleseconds so we - # divide by 1000 to convert to seconds. - value = self._timestamp_parser(value / 1000.0) - final_parsed[name] = value - - def _initial_body_parse(self, body_contents): - # This method should do the initial xml/json parsing of the - # body. We we still need to walk the parsed body in order - # to convert types, but this method will do the first round - # of parsing. 
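# Editorial sketch of the "__type" normalization used by the JSON error
# parsing above; the namespace-qualified value is taken from the example in
# the source comment.
code = 'com.aws.dynamodb.vAPI#ProvisionedThroughputExceededException'
if '#' in code:
    code = code.rsplit('#', 1)[1]
assert code == 'ProvisionedThroughputExceededException'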
- raise NotImplementedError("_initial_body_parse") - - -class EventStreamJSONParser(BaseEventStreamParser, BaseJSONParser): - def _initial_body_parse(self, body_contents): - return self._parse_body_as_json(body_contents) - - -class EventStreamXMLParser(BaseEventStreamParser, BaseXMLResponseParser): - def _initial_body_parse(self, xml_string): - if not xml_string: - return ETree.Element('') - return self._parse_xml_string_to_dom(xml_string) - - -class JSONParser(BaseJSONParser): - - EVENT_STREAM_PARSER_CLS = EventStreamJSONParser - - """Response parser for the "json" protocol.""" - - def _do_parse(self, response, shape): - parsed = {} - if shape is not None: - event_name = shape.event_stream_name - if event_name: - parsed = self._handle_event_stream(response, shape, event_name) - else: - parsed = self._handle_json_body(response['body'], shape) - self._inject_response_metadata(parsed, response['headers']) - return parsed - - def _do_modeled_error_parse(self, response, shape): - return self._handle_json_body(response['body'], shape) - - def _handle_event_stream(self, response, shape, event_name): - event_stream_shape = shape.members[event_name] - event_stream = self._create_event_stream(response, event_stream_shape) - try: - event = event_stream.get_initial_response() - except NoInitialResponseError: - error_msg = 'First event was not of type initial-response' - raise ResponseParserError(error_msg) - parsed = self._handle_json_body(event.payload, shape) - parsed[event_name] = event_stream - return parsed - - def _handle_json_body(self, raw_body, shape): - # The json.loads() gives us the primitive JSON types, - # but we need to traverse the parsed JSON data to convert - # to richer types (blobs, timestamps, etc. - parsed_json = self._parse_body_as_json(raw_body) - return self._parse_shape(shape, parsed_json) - - -class BaseRestParser(ResponseParser): - def _do_parse(self, response, shape): - final_parsed = {} - final_parsed['ResponseMetadata'] = self._populate_response_metadata( - response - ) - self._add_modeled_parse(response, shape, final_parsed) - return final_parsed - - def _add_modeled_parse(self, response, shape, final_parsed): - if shape is None: - return final_parsed - member_shapes = shape.members - self._parse_non_payload_attrs( - response, shape, member_shapes, final_parsed - ) - self._parse_payload(response, shape, member_shapes, final_parsed) - - def _do_modeled_error_parse(self, response, shape): - final_parsed = {} - self._add_modeled_parse(response, shape, final_parsed) - return final_parsed - - def _populate_response_metadata(self, response): - metadata = {} - headers = response['headers'] - if 'x-amzn-requestid' in headers: - metadata['RequestId'] = headers['x-amzn-requestid'] - elif 'x-amz-request-id' in headers: - metadata['RequestId'] = headers['x-amz-request-id'] - # HostId is what it's called whenever this value is returned - # in an XML response body, so to be consistent, we'll always - # call is HostId. - metadata['HostId'] = headers.get('x-amz-id-2', '') - return metadata - - def _parse_payload(self, response, shape, member_shapes, final_parsed): - if 'payload' in shape.serialization: - # If a payload is specified in the output shape, then only that - # shape is used for the body payload. 
- payload_member_name = shape.serialization['payload'] - body_shape = member_shapes[payload_member_name] - if body_shape.serialization.get('eventstream'): - body = self._create_event_stream(response, body_shape) - final_parsed[payload_member_name] = body - elif body_shape.type_name in ['string', 'blob']: - # This is a stream - body = response['body'] - if isinstance(body, bytes): - body = body.decode(self.DEFAULT_ENCODING) - final_parsed[payload_member_name] = body - else: - original_parsed = self._initial_body_parse(response['body']) - final_parsed[payload_member_name] = self._parse_shape( - body_shape, original_parsed - ) - else: - original_parsed = self._initial_body_parse(response['body']) - body_parsed = self._parse_shape(shape, original_parsed) - final_parsed.update(body_parsed) - - def _parse_non_payload_attrs( - self, response, shape, member_shapes, final_parsed - ): - headers = response['headers'] - for name in member_shapes: - member_shape = member_shapes[name] - location = member_shape.serialization.get('location') - if location is None: - continue - elif location == 'statusCode': - final_parsed[name] = self._parse_shape( - member_shape, response['status_code'] - ) - elif location == 'headers': - final_parsed[name] = self._parse_header_map( - member_shape, headers - ) - elif location == 'header': - header_name = member_shape.serialization.get('name', name) - if header_name in headers: - final_parsed[name] = self._parse_shape( - member_shape, headers[header_name] - ) - - def _parse_header_map(self, shape, headers): - # Note that headers are case insensitive, so we .lower() - # all header names and header prefixes. - parsed = {} - prefix = shape.serialization.get('name', '').lower() - for header_name in headers: - if header_name.lower().startswith(prefix): - # The key name inserted into the parsed hash - # strips off the prefix. - name = header_name[len(prefix) :] - parsed[name] = headers[header_name] - return parsed - - def _initial_body_parse(self, body_contents): - # This method should do the initial xml/json parsing of the - # body. We we still need to walk the parsed body in order - # to convert types, but this method will do the first round - # of parsing. - raise NotImplementedError("_initial_body_parse") - - def _handle_string(self, shape, value): - parsed = value - if is_json_value_header(shape): - decoded = base64.b64decode(value).decode(self.DEFAULT_ENCODING) - parsed = json.loads(decoded) - return parsed - - def _handle_list(self, shape, node): - location = shape.serialization.get('location') - if location == 'header' and not isinstance(node, list): - # List in headers may be a comma separated string as per RFC7230 - node = [e.strip() for e in node.split(',')] - return super()._handle_list(shape, node) - - -class RestJSONParser(BaseRestParser, BaseJSONParser): - - EVENT_STREAM_PARSER_CLS = EventStreamJSONParser - - def _initial_body_parse(self, body_contents): - return self._parse_body_as_json(body_contents) - - def _do_error_parse(self, response, shape): - error = super()._do_error_parse(response, shape) - self._inject_error_code(error, response) - return error - - def _inject_error_code(self, error, response): - # The "Code" value can come from either a response - # header or a value in the JSON body. 
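# Editorial sketch of the case-insensitive prefix matching performed by
# _parse_header_map above, with a hypothetical metadata prefix and header
# names.
headers = {
    'x-amz-meta-color': 'blue',
    'X-Amz-Meta-Size': '42',
    'content-type': 'text/plain',
}
prefix = 'x-amz-meta-'
parsed = {
    header_name[len(prefix):]: value
    for header_name, value in headers.items()
    if header_name.lower().startswith(prefix)
}
assert parsed == {'color': 'blue', 'Size': '42'}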
- body = self._initial_body_parse(response['body']) - if 'x-amzn-errortype' in response['headers']: - code = response['headers']['x-amzn-errortype'] - # Could be: - # x-amzn-errortype: ValidationException: - code = code.split(':')[0] - error['Error']['Code'] = code - elif 'code' in body or 'Code' in body: - error['Error']['Code'] = body.get('code', body.get('Code', '')) - - def _handle_integer(self, shape, value): - return int(value) - - _handle_long = _handle_integer - - -class RestXMLParser(BaseRestParser, BaseXMLResponseParser): - - EVENT_STREAM_PARSER_CLS = EventStreamXMLParser - - def _initial_body_parse(self, xml_string): - if not xml_string: - return ETree.Element('') - return self._parse_xml_string_to_dom(xml_string) - - def _do_error_parse(self, response, shape): - # We're trying to be service agnostic here, but S3 does have a slightly - # different response structure for its errors compared to other - # rest-xml serivces (route53/cloudfront). We handle this by just - # trying to parse both forms. - # First: - # - # - # Sender - # InvalidInput - # Invalid resource type: foo - # - # request-id - # - if response['body']: - # If the body ends up being invalid xml, the xml parser should not - # blow up. It should at least try to pull information about the - # the error response from other sources like the HTTP status code. - try: - return self._parse_error_from_body(response) - except ResponseParserError: - LOG.debug( - 'Exception caught when parsing error response body:', - exc_info=True, - ) - return self._parse_error_from_http_status(response) - - def _parse_error_from_http_status(self, response): - return { - 'Error': { - 'Code': str(response['status_code']), - 'Message': http.client.responses.get( - response['status_code'], '' - ), - }, - 'ResponseMetadata': { - 'RequestId': response['headers'].get('x-amz-request-id', ''), - 'HostId': response['headers'].get('x-amz-id-2', ''), - }, - } - - def _parse_error_from_body(self, response): - xml_contents = response['body'] - root = self._parse_xml_string_to_dom(xml_contents) - parsed = self._build_name_to_xml_node(root) - self._replace_nodes(parsed) - if root.tag == 'Error': - # This is an S3 error response. First we'll populate the - # response metadata. - metadata = self._populate_response_metadata(response) - # The RequestId and the HostId are already in the - # ResponseMetadata, but are also duplicated in the XML - # body. We don't need these values in both places, - # we'll just remove them from the parsed XML body. 
- parsed.pop('RequestId', '') - parsed.pop('HostId', '') - return {'Error': parsed, 'ResponseMetadata': metadata} - elif 'RequestId' in parsed: - # Other rest-xml serivces: - parsed['ResponseMetadata'] = {'RequestId': parsed.pop('RequestId')} - default = {'Error': {'Message': '', 'Code': ''}} - merge_dicts(default, parsed) - return default - - @_text_content - def _handle_string(self, shape, text): - text = super()._handle_string(shape, text) - return text - - -PROTOCOL_PARSERS = { - 'ec2': EC2QueryParser, - 'query': QueryParser, - 'json': JSONParser, - 'rest-json': RestJSONParser, - 'rest-xml': RestXMLParser, -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py deleted file mode 100644 index 4c6ec97ec6961bcf184b6e0b2437b9924db0b9de..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -__all__ = ("loads", "load", "TOMLDecodeError") -__version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT - -from ._parser import TOMLDecodeError, load, loads - -# Pretend this exception was created here. -TOMLDecodeError.__module__ = __name__ diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md deleted file mode 100644 index a278199b8557a1e2fb341fe6757786a6cecb82b3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md +++ /dev/null @@ -1 +0,0 @@ -These are quick configs for performance or accuracy regression tracking purposes. diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py deleted file mode 100644 index 34dca0cbb931effcb4a60c979ad2e32bab2eb8bf..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
- -import unittest - -from densepose.structures import normalized_coords_transform - - -class TestStructures(unittest.TestCase): - def test_normalized_coords_transform(self): - bbox = (32, 24, 288, 216) - x0, y0, w, h = bbox - xmin, ymin, xmax, ymax = x0, y0, x0 + w, y0 + h - f = normalized_coords_transform(*bbox) - # Top-left - expected_p, actual_p = (-1, -1), f((xmin, ymin)) - self.assertEqual(expected_p, actual_p) - # Top-right - expected_p, actual_p = (1, -1), f((xmax, ymin)) - self.assertEqual(expected_p, actual_p) - # Bottom-left - expected_p, actual_p = (-1, 1), f((xmin, ymax)) - self.assertEqual(expected_p, actual_p) - # Bottom-right - expected_p, actual_p = (1, 1), f((xmax, ymax)) - self.assertEqual(expected_p, actual_p) diff --git a/spaces/CVPR/GFPGAN-example/README.md b/spaces/CVPR/GFPGAN-example/README.md deleted file mode 100644 index c3eb25964826586eab4d1173abc151434eefd1ef..0000000000000000000000000000000000000000 --- a/spaces/CVPR/GFPGAN-example/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: GFPGAN Example -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h deleted file mode 100644 index 6665a264873f6a0a775de0aa670ee7567d899ad9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system inherits fill -#include - diff --git a/spaces/CVPR/MonoScene/monoscene/config.py b/spaces/CVPR/MonoScene/monoscene/config.py deleted file mode 100644 index e03e806ad5e0c7ea4c439e3e82d955e3c0b3038f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/config.py +++ /dev/null @@ -1,26 +0,0 @@ -from transformers import PretrainedConfig -from typing import List - - -class MonoSceneConfig(PretrainedConfig): - - def __init__( - self, - dataset="kitti", - n_classes=20, - feature=64, - project_scale=2, - full_scene_size=(256, 256, 32), - **kwargs, - ): - self.dataset = dataset - self.n_classes = n_classes - self.feature = feature - self.project_scale = project_scale - self.full_scene_size = full_scene_size - super().__init__(**kwargs) - - - - - diff --git a/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py b/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py deleted file mode 100644 index e19adaec18a96cac4dbe1d8c2c9193e9901be1fb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py +++ /dev/null @@ -1,890 +0,0 @@ -import copy - -import cv2 -import mmcv -import numpy as np - -from ..builder import PIPELINES -from .compose import Compose - -_MAX_LEVEL = 10 - - -def level_to_value(level, max_value): - """Map from level to values based on max_value.""" - return (level / _MAX_LEVEL) * max_value - - -def enhance_level_to_value(level, a=1.8, b=0.1): - """Map from level to values.""" - return (level / _MAX_LEVEL) * a + b - - -def random_negative(value, random_negative_prob): - """Randomly negate value based on random_negative_prob.""" - return -value if np.random.rand() < random_negative_prob else value - - -def bbox2fields(): - """The key correspondence from bboxes to labels, masks and - segmentations.""" - bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - bbox2seg = { - 'gt_bboxes': 'gt_semantic_seg', - } - return bbox2label, bbox2mask, bbox2seg - - -@PIPELINES.register_module() -class AutoAugment(object): - """Auto augmentation. - - This data augmentation is proposed in `Learning Data Augmentation - Strategies for Object Detection `_. - - TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms - - Args: - policies (list[list[dict]]): The policies of auto augmentation. Each - policy in ``policies`` is a specific augmentation policy, and is - composed by several augmentations (dict). When AutoAugment is - called, a random policy in ``policies`` will be selected to - augment images. - - Examples: - >>> replace = (104, 116, 124) - >>> policies = [ - >>> [ - >>> dict(type='Sharpness', prob=0.0, level=8), - >>> dict( - >>> type='Shear', - >>> prob=0.4, - >>> level=0, - >>> replace=replace, - >>> axis='x') - >>> ], - >>> [ - >>> dict( - >>> type='Rotate', - >>> prob=0.6, - >>> level=10, - >>> replace=replace), - >>> dict(type='Color', prob=1.0, level=6) - >>> ] - >>> ] - >>> augmentation = AutoAugment(policies) - >>> img = np.ones(100, 100, 3) - >>> gt_bboxes = np.ones(10, 4) - >>> results = dict(img=img, gt_bboxes=gt_bboxes) - >>> results = augmentation(results) - """ - - def __init__(self, policies): - assert isinstance(policies, list) and len(policies) > 0, \ - 'Policies must be a non-empty list.' - for policy in policies: - assert isinstance(policy, list) and len(policy) > 0, \ - 'Each policy in policies must be a non-empty list.' 
- for augment in policy: - assert isinstance(augment, dict) and 'type' in augment, \ - 'Each specific augmentation must be a dict with key' \ - ' "type".' - - self.policies = copy.deepcopy(policies) - self.transforms = [Compose(policy) for policy in self.policies] - - def __call__(self, results): - transform = np.random.choice(self.transforms) - return transform(results) - - def __repr__(self): - return f'{self.__class__.__name__}(policies={self.policies})' - - -@PIPELINES.register_module() -class Shear(object): - """Apply Shear Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range [0,_MAX_LEVEL]. - img_fill_val (int | float | tuple): The filled values for image border. - If float, the same fill value will be used for all the three - channels of image. If tuple, the should be 3 elements. - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for performing Shear and should be in - range [0, 1]. - direction (str): The direction for shear, either "horizontal" - or "vertical". - max_shear_magnitude (float): The maximum magnitude for Shear - transformation. - random_negative_prob (float): The probability that turns the - offset negative. Should be in range [0,1] - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - - def __init__(self, - level, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - direction='horizontal', - max_shear_magnitude=0.3, - random_negative_prob=0.5, - interpolation='bilinear'): - assert isinstance(level, (int, float)), 'The level must be type ' \ - f'int or float, got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \ - f'[0,{_MAX_LEVEL}], got {level}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \ - f'have 3 elements. got {len(img_fill_val)}.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \ - 'elements of img_fill_val should between range [0,255].' \ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \ - f'range [0,1]. got {prob}.' - assert direction in ('horizontal', 'vertical'), 'direction must ' \ - f'in be either "horizontal" or "vertical". got {direction}.' - assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \ - f'should be type float. got {type(max_shear_magnitude)}.' - assert 0. <= max_shear_magnitude <= 1., 'Defaultly ' \ - 'max_shear_magnitude should be in range [0,1]. ' \ - f'got {max_shear_magnitude}.' - self.level = level - self.magnitude = level_to_value(level, max_shear_magnitude) - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.direction = direction - self.max_shear_magnitude = max_shear_magnitude - self.random_negative_prob = random_negative_prob - self.interpolation = interpolation - - def _shear_img(self, - results, - magnitude, - direction='horizontal', - interpolation='bilinear'): - """Shear the image. - - Args: - results (dict): Result dict from loading pipeline. - magnitude (int | float): The magnitude used for shear. 
- direction (str): The direction for shear, either "horizontal" - or "vertical". - interpolation (str): Same as in :func:`mmcv.imshear`. - """ - for key in results.get('img_fields', ['img']): - img = results[key] - img_sheared = mmcv.imshear( - img, - magnitude, - direction, - border_value=self.img_fill_val, - interpolation=interpolation) - results[key] = img_sheared.astype(img.dtype) - - def _shear_bboxes(self, results, magnitude): - """Shear the bboxes.""" - h, w, c = results['img_shape'] - if self.direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) # [2, 2] - else: - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_box, 1] - coordinates = coordinates[..., 0].transpose( - (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4] - new_coords = np.matmul(shear_matrix[None, :, :], - coordinates) # [nb_box, 2, 4] - min_x = np.min(new_coords[:, 0, :], axis=-1) - min_y = np.min(new_coords[:, 1, :], axis=-1) - max_x = np.max(new_coords[:, 0, :], axis=-1) - max_y = np.max(new_coords[:, 1, :], axis=-1) - min_x = np.clip(min_x, a_min=0, a_max=w) - min_y = np.clip(min_y, a_min=0, a_max=h) - max_x = np.clip(max_x, a_min=min_x, a_max=w) - max_y = np.clip(max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _shear_masks(self, - results, - magnitude, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Shear the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.shear((h, w), - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation) - - def _shear_seg(self, - results, - magnitude, - direction='horizontal', - fill_val=255, - interpolation='bilinear'): - """Shear the segmentation maps.""" - for key in results.get('seg_fields', []): - seg = results[key] - results[key] = mmcv.imshear( - seg, - magnitude, - direction, - border_value=fill_val, - interpolation=interpolation).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes and corresponding masks too small after shear - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to shear images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Sheared results. 
- """ - if np.random.rand() > self.prob: - return results - magnitude = random_negative(self.magnitude, self.random_negative_prob) - self._shear_img(results, magnitude, self.direction, self.interpolation) - self._shear_bboxes(results, magnitude) - # fill_val set to 0 for background of mask. - self._shear_masks( - results, - magnitude, - self.direction, - fill_val=0, - interpolation=self.interpolation) - self._shear_seg( - results, - magnitude, - self.direction, - fill_val=self.seg_ignore_label, - interpolation=self.interpolation) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'direction={self.direction}, ' - repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, ' - repr_str += f'random_negative_prob={self.random_negative_prob}, ' - repr_str += f'interpolation={self.interpolation})' - return repr_str - - -@PIPELINES.register_module() -class Rotate(object): - """Apply Rotate Transformation to image (and its corresponding bbox, mask, - segmentation). - - Args: - level (int | float): The level should be in range (0,_MAX_LEVEL]. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - center (int | float | tuple[float]): Center point (w, h) of the - rotation in the source image. If None, the center of the - image will be used. Same in ``mmcv.imrotate``. - img_fill_val (int | float | tuple): The fill value for image border. - If float, the same value will be used for all the three - channels of image. If tuple, the should be 3 elements (e.g. - equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - prob (float): The probability for perform transformation and - should be in range 0 to 1. - max_rotate_angle (int | float): The maximum angles for rotate - transformation. - random_negative_prob (float): The probability that turns the - offset negative. - """ - - def __init__(self, - level, - scale=1, - center=None, - img_fill_val=128, - seg_ignore_label=255, - prob=0.5, - max_rotate_angle=30, - random_negative_prob=0.5): - assert isinstance(level, (int, float)), \ - f'The level must be type int or float. got {type(level)}.' - assert 0 <= level <= _MAX_LEVEL, \ - f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.' - assert isinstance(scale, (int, float)), \ - f'The scale must be type int or float. got type {type(scale)}.' - if isinstance(center, (int, float)): - center = (center, center) - elif isinstance(center, tuple): - assert len(center) == 2, 'center with type tuple must have '\ - f'2 elements. got {len(center)} elements.' - else: - assert center is None, 'center must be None or type int, '\ - f'float or tuple, got type {type(center)}.' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\ - f'have 3 elements. got {len(img_fill_val)}.' 
- img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError( - 'img_fill_val must be float or tuple with 3 elements.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255]. '\ - f'got {img_fill_val}.' - assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\ - 'got {prob}.' - assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\ - f'should be type int or float. got type {type(max_rotate_angle)}.' - self.level = level - self.scale = scale - # Rotation angle in degrees. Positive values mean - # clockwise rotation. - self.angle = level_to_value(level, max_rotate_angle) - self.center = center - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.prob = prob - self.max_rotate_angle = max_rotate_angle - self.random_negative_prob = random_negative_prob - - def _rotate_img(self, results, angle, center=None, scale=1.0): - """Rotate the image. - - Args: - results (dict): Result dict from loading pipeline. - angle (float): Rotation angle in degrees, positive values - mean clockwise rotation. Same in ``mmcv.imrotate``. - center (tuple[float], optional): Center point (w, h) of the - rotation. Same in ``mmcv.imrotate``. - scale (int | float): Isotropic scale factor. Same in - ``mmcv.imrotate``. - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - img_rotated = mmcv.imrotate( - img, angle, center, scale, border_value=self.img_fill_val) - results[key] = img_rotated.astype(img.dtype) - - def _rotate_bboxes(self, results, rotate_matrix): - """Rotate the bboxes.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - coordinates = np.stack([[min_x, min_y], [max_x, min_y], - [min_x, max_y], - [max_x, max_y]]) # [4, 2, nb_bbox, 1] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coordinates = np.concatenate( - (coordinates, - np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)), - axis=1) # [4, 3, nb_bbox, 1] - coordinates = coordinates.transpose( - (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1] - rotated_coords = np.matmul(rotate_matrix, - coordinates) # [nb_bbox, 4, 2, 1] - rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2] - min_x, min_y = np.min( - rotated_coords[:, :, 0], axis=1), np.min( - rotated_coords[:, :, 1], axis=1) - max_x, max_y = np.max( - rotated_coords[:, :, 0], axis=1), np.max( - rotated_coords[:, :, 1], axis=1) - min_x, min_y = np.clip( - min_x, a_min=0, a_max=w), np.clip( - min_y, a_min=0, a_max=h) - max_x, max_y = np.clip( - max_x, a_min=min_x, a_max=w), np.clip( - max_y, a_min=min_y, a_max=h) - results[key] = np.stack([min_x, min_y, max_x, max_y], - axis=-1).astype(results[key].dtype) - - def _rotate_masks(self, - results, - angle, - center=None, - scale=1.0, - fill_val=0): - """Rotate the masks.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.rotate((h, w), angle, center, scale, fill_val) - - def _rotate_seg(self, - results, - angle, - center=None, - scale=1.0, - fill_val=255): - """Rotate the segmentation map.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imrotate( - seg, angle, center, scale, - border_value=fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_bbox_size=0): - """Filter bboxes 
and corresponding masks too small after rotate - augmentation.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - - def __call__(self, results): - """Call function to rotate images, bounding boxes, masks and semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Rotated results. - """ - if np.random.rand() > self.prob: - return results - h, w = results['img'].shape[:2] - center = self.center - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - angle = random_negative(self.angle, self.random_negative_prob) - self._rotate_img(results, angle, center, self.scale) - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale) - self._rotate_bboxes(results, rotate_matrix) - self._rotate_masks(results, angle, center, self.scale, fill_val=0) - self._rotate_seg( - results, angle, center, self.scale, fill_val=self.seg_ignore_label) - self._filter_invalid(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'scale={self.scale}, ' - repr_str += f'center={self.center}, ' - repr_str += f'img_fill_val={self.img_fill_val}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label}, ' - repr_str += f'prob={self.prob}, ' - repr_str += f'max_rotate_angle={self.max_rotate_angle}, ' - repr_str += f'random_negative_prob={self.random_negative_prob})' - return repr_str - - -@PIPELINES.register_module() -class Translate(object): - """Translate the images, bboxes, masks and segmentation maps horizontally - or vertically. - - Args: - level (int | float): The level for Translate and should be in - range [0,_MAX_LEVEL]. - prob (float): The probability for performing translation and - should be in range [0, 1]. - img_fill_val (int | float | tuple): The filled value for image - border. If float, the same fill value will be used for all - the three channels of image. If tuple, the should be 3 - elements (e.g. equals the number of channels for image). - seg_ignore_label (int): The fill value used for segmentation map. - Note this value must equals ``ignore_label`` in ``semantic_head`` - of the corresponding config. Default 255. - direction (str): The translate direction, either "horizontal" - or "vertical". - max_translate_offset (int | float): The maximum pixel's offset for - Translate. - random_negative_prob (float): The probability that turns the - offset negative. - min_size (int | float): The minimum pixel for filtering - invalid bboxes after the translation. - """ - - def __init__(self, - level, - prob=0.5, - img_fill_val=128, - seg_ignore_label=255, - direction='horizontal', - max_translate_offset=250., - random_negative_prob=0.5, - min_size=0): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' 
- assert 0 <= level <= _MAX_LEVEL, \ - 'The level used for calculating Translate\'s offset should be ' \ - 'in range [0,_MAX_LEVEL]' - assert 0 <= prob <= 1.0, \ - 'The probability of translation should be in range [0, 1].' - if isinstance(img_fill_val, (float, int)): - img_fill_val = tuple([float(img_fill_val)] * 3) - elif isinstance(img_fill_val, tuple): - assert len(img_fill_val) == 3, \ - 'img_fill_val as tuple must have 3 elements.' - img_fill_val = tuple([float(val) for val in img_fill_val]) - else: - raise ValueError('img_fill_val must be type float or tuple.') - assert np.all([0 <= val <= 255 for val in img_fill_val]), \ - 'all elements of img_fill_val should between range [0,255].' - assert direction in ('horizontal', 'vertical'), \ - 'direction should be "horizontal" or "vertical".' - assert isinstance(max_translate_offset, (int, float)), \ - 'The max_translate_offset must be type int or float.' - # the offset used for translation - self.offset = int(level_to_value(level, max_translate_offset)) - self.level = level - self.prob = prob - self.img_fill_val = img_fill_val - self.seg_ignore_label = seg_ignore_label - self.direction = direction - self.max_translate_offset = max_translate_offset - self.random_negative_prob = random_negative_prob - self.min_size = min_size - - def _translate_img(self, results, offset, direction='horizontal'): - """Translate the image. - - Args: - results (dict): Result dict from loading pipeline. - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - """ - for key in results.get('img_fields', ['img']): - img = results[key].copy() - results[key] = mmcv.imtranslate( - img, offset, direction, self.img_fill_val).astype(img.dtype) - - def _translate_bboxes(self, results, offset): - """Shift bboxes horizontally or vertically, according to offset.""" - h, w, c = results['img_shape'] - for key in results.get('bbox_fields', []): - min_x, min_y, max_x, max_y = np.split( - results[key], results[key].shape[-1], axis=-1) - if self.direction == 'horizontal': - min_x = np.maximum(0, min_x + offset) - max_x = np.minimum(w, max_x + offset) - elif self.direction == 'vertical': - min_y = np.maximum(0, min_y + offset) - max_y = np.minimum(h, max_y + offset) - - # the boxes translated outside of image will be filtered along with - # the corresponding masks, by invoking ``_filter_invalid``. 
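# Editorial sketch of the level-to-magnitude mappings shared by these
# transforms; the maximums below follow the defaults defined earlier in this
# file (max_translate_offset, max_shear_magnitude, enhance factors).
_MAX_LEVEL = 10

def level_to_value(level, max_value):
    return (level / _MAX_LEVEL) * max_value

def enhance_level_to_value(level, a=1.8, b=0.1):
    return (level / _MAX_LEVEL) * a + b

assert level_to_value(5, max_value=250.) == 125.0             # Translate: offset in pixels
assert abs(level_to_value(5, max_value=0.3) - 0.15) < 1e-9    # Shear: magnitude
assert abs(enhance_level_to_value(5) - 1.0) < 1e-9            # Color/Brightness/Contrast factor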
- results[key] = np.concatenate([min_x, min_y, max_x, max_y], - axis=-1) - - def _translate_masks(self, - results, - offset, - direction='horizontal', - fill_val=0): - """Translate masks horizontally or vertically.""" - h, w, c = results['img_shape'] - for key in results.get('mask_fields', []): - masks = results[key] - results[key] = masks.translate((h, w), offset, direction, fill_val) - - def _translate_seg(self, - results, - offset, - direction='horizontal', - fill_val=255): - """Translate segmentation maps horizontally or vertically.""" - for key in results.get('seg_fields', []): - seg = results[key].copy() - results[key] = mmcv.imtranslate(seg, offset, direction, - fill_val).astype(seg.dtype) - - def _filter_invalid(self, results, min_size=0): - """Filter bboxes and masks too small or translated out of image.""" - bbox2label, bbox2mask, _ = bbox2fields() - for key in results.get('bbox_fields', []): - bbox_w = results[key][:, 2] - results[key][:, 0] - bbox_h = results[key][:, 3] - results[key][:, 1] - valid_inds = (bbox_w > min_size) & (bbox_h > min_size) - valid_inds = np.nonzero(valid_inds)[0] - results[key] = results[key][valid_inds] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][valid_inds] - return results - - def __call__(self, results): - """Call function to translate images, bounding boxes, masks and - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Translated results. - """ - if np.random.rand() > self.prob: - return results - offset = random_negative(self.offset, self.random_negative_prob) - self._translate_img(results, offset, self.direction) - self._translate_bboxes(results, offset) - # fill_val defaultly 0 for BitmapMasks and None for PolygonMasks. - self._translate_masks(results, offset, self.direction) - # fill_val set to ``seg_ignore_label`` for the ignored value - # of segmentation map. - self._translate_seg( - results, offset, self.direction, fill_val=self.seg_ignore_label) - self._filter_invalid(results, min_size=self.min_size) - return results - - -@PIPELINES.register_module() -class ColorTransform(object): - """Apply Color transformation to image. The bboxes, masks, and - segmentations are not modified. - - Args: - level (int | float): Should be in range [0,_MAX_LEVEL]. - prob (float): The probability for performing Color transformation. - """ - - def __init__(self, level, prob=0.5): - assert isinstance(level, (int, float)), \ - 'The level must be type int or float.' - assert 0 <= level <= _MAX_LEVEL, \ - 'The level should be in range [0,_MAX_LEVEL].' - assert 0 <= prob <= 1.0, \ - 'The probability should be in range [0,1].' - self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_color_img(self, results, factor=1.0): - """Apply Color transformation to image.""" - for key in results.get('img_fields', ['img']): - # NOTE defaultly the image should be BGR format - img = results[key] - results[key] = mmcv.adjust_color(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Color transformation. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Colored results. 
-        """
-        if np.random.rand() > self.prob:
-            return results
-        self._adjust_color_img(results, self.factor)
-        return results
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(level={self.level}, '
-        repr_str += f'prob={self.prob})'
-        return repr_str
-
-
-@PIPELINES.register_module()
-class EqualizeTransform(object):
-    """Apply Equalize transformation to image. The bboxes, masks and
-    segmentations are not modified.
-
-    Args:
-        prob (float): The probability for performing Equalize transformation.
-    """
-
-    def __init__(self, prob=0.5):
-        assert 0 <= prob <= 1.0, \
-            'The probability should be in range [0,1].'
-        self.prob = prob
-
-    def _imequalize(self, results):
-        """Equalizes the histogram of one image."""
-        for key in results.get('img_fields', ['img']):
-            img = results[key]
-            results[key] = mmcv.imequalize(img).astype(img.dtype)
-
-    def __call__(self, results):
-        """Call function for Equalize transformation.
-
-        Args:
-            results (dict): Results dict from loading pipeline.
-
-        Returns:
-            dict: Results after the transformation.
-        """
-        if np.random.rand() > self.prob:
-            return results
-        self._imequalize(results)
-        return results
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(prob={self.prob})'
-        return repr_str
-
-
-@PIPELINES.register_module()
-class BrightnessTransform(object):
-    """Apply Brightness transformation to image. The bboxes, masks and
-    segmentations are not modified.
-
-    Args:
-        level (int | float): Should be in range [0,_MAX_LEVEL].
-        prob (float): The probability for performing Brightness transformation.
-    """
-
-    def __init__(self, level, prob=0.5):
-        assert isinstance(level, (int, float)), \
-            'The level must be type int or float.'
-        assert 0 <= level <= _MAX_LEVEL, \
-            'The level should be in range [0,_MAX_LEVEL].'
-        assert 0 <= prob <= 1.0, \
-            'The probability should be in range [0,1].'
-        self.level = level
-        self.prob = prob
-        self.factor = enhance_level_to_value(level)
-
-    def _adjust_brightness_img(self, results, factor=1.0):
-        """Adjust the brightness of image."""
-        for key in results.get('img_fields', ['img']):
-            img = results[key]
-            results[key] = mmcv.adjust_brightness(img,
-                                                  factor).astype(img.dtype)
-
-    def __call__(self, results):
-        """Call function for Brightness transformation.
-
-        Args:
-            results (dict): Results dict from loading pipeline.
-
-        Returns:
-            dict: Results after the transformation.
-        """
-        if np.random.rand() > self.prob:
-            return results
-        self._adjust_brightness_img(results, self.factor)
-        return results
-
-    def __repr__(self):
-        repr_str = self.__class__.__name__
-        repr_str += f'(level={self.level}, '
-        repr_str += f'prob={self.prob})'
-        return repr_str
-
-
-@PIPELINES.register_module()
-class ContrastTransform(object):
-    """Apply Contrast transformation to image. The bboxes, masks and
-    segmentations are not modified.
-
-    Args:
-        level (int | float): Should be in range [0,_MAX_LEVEL].
-        prob (float): The probability for performing Contrast transformation.
-    """
-
-    def __init__(self, level, prob=0.5):
-        assert isinstance(level, (int, float)), \
-            'The level must be type int or float.'
-        assert 0 <= level <= _MAX_LEVEL, \
-            'The level should be in range [0,_MAX_LEVEL].'
-        assert 0 <= prob <= 1.0, \
-            'The probability should be in range [0,1].'
- self.level = level - self.prob = prob - self.factor = enhance_level_to_value(level) - - def _adjust_contrast_img(self, results, factor=1.0): - """Adjust the image contrast.""" - for key in results.get('img_fields', ['img']): - img = results[key] - results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype) - - def __call__(self, results): - """Call function for Contrast transformation. - - Args: - results (dict): Results dict from loading pipeline. - - Returns: - dict: Results after the transformation. - """ - if np.random.rand() > self.prob: - return results - self._adjust_contrast_img(results, self.factor) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(level={self.level}, ' - repr_str += f'prob={self.prob})' - return repr_str diff --git a/spaces/CVPR/lama-example/app.py b/spaces/CVPR/lama-example/app.py deleted file mode 100644 index 1e08c18901bb85d211d1da175995642af361b519..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -os.system("gdown https://drive.google.com/uc?id=1-95IOJ-2y9BtmABiffIwndPqNZD_gLnV") -os.system("unzip big-lama.zip") -import cv2 -import paddlehub as hub -import gradio as gr -import torch -from PIL import Image, ImageOps -import numpy as np -os.mkdir("data") -os.mkdir("dataout") -model = hub.Module(name='U2Net') -def infer(img,mask,option): - img = ImageOps.contain(img, (700,700)) - width, height = img.size - img.save("./data/data.png") - if option == "automatic (U2net)": - result = model.Segmentation( - images=[cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)], - paths=None, - batch_size=1, - input_size=320, - output_dir='output', - visualization=True) - im = Image.fromarray(result[0]['mask']) - else: - mask = mask.resize((width,height)) - im = mask - im.save("./data/data_mask.png") - os.system('python predict.py model.path=/home/user/app/big-lama/ indir=/home/user/app/data/ outdir=/home/user/app/dataout/ device=cpu') - return "./dataout/data_mask.png",im - -inputs = [gr.inputs.Image(type='pil', label="Original Image"),gr.inputs.Image(type='pil',source="canvas", label="Mask",invert_colors=True),gr.inputs.Radio(choices=["automatic (U2net)","manual"], type="value", default="manual", label="Masking option")] -outputs = [gr.outputs.Image(type="file",label="output"),gr.outputs.Image(type="pil",label="Mask")] -title = "LaMa Image Inpainting Example" -description = "Gradio demo for LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Masks are generated by U^2net" -article = "

    Resolution-robust Large Mask Inpainting with Fourier Convolutions | Github Repo

    " -examples = [ - ['person512.png',"canvas.png","automatic (U2net)"], - ['person512.png',"maskexam.png","manual"] -] -gr.Interface(infer, inputs, outputs, title=title, description=description, article=article, examples=examples).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/CVPR/lama-example/models/ade20k/utils.py b/spaces/CVPR/lama-example/models/ade20k/utils.py deleted file mode 100644 index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/models/ade20k/utils.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import os -import sys - -import numpy as np -import torch - -try: - from urllib import urlretrieve -except ImportError: - from urllib.request import urlretrieve - - -def load_url(url, model_dir='./pretrained', map_location=None): - if not os.path.exists(model_dir): - os.makedirs(model_dir) - filename = url.split('/')[-1] - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file): - sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file)) - urlretrieve(url, cached_file) - return torch.load(cached_file, map_location=map_location) - - -def color_encode(labelmap, colors, mode='RGB'): - labelmap = labelmap.astype('int') - labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3), - dtype=np.uint8) - for label in np.unique(labelmap): - if label < 0: - continue - labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \ - np.tile(colors[label], - (labelmap.shape[0], labelmap.shape[1], 1)) - - if mode == 'BGR': - return labelmap_rgb[:, :, ::-1] - else: - return labelmap_rgb diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py deleted file mode 100644 index 5711280bc1c3fd1efed76725ff6698e7813067c1..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py +++ /dev/null @@ -1,59 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.utils import save_gif - -img_dir = Path(__file__).parent / "images" - - -def capoo_rip(images: List[BuildImage], texts, args): - img = images[0].convert("RGBA").resize((150, 100), keep_ratio=True) - img_left = img.crop((0, 0, 75, 100)) - img_right = img.crop((75, 0, 150, 100)) - params1 = [ - [(61, 196), ((140, 68), (0, 59), (33, 0), (165, 8))], - [(63, 196), ((136, 68), (0, 59), (29, 0), (158, 13))], - [(62, 195), ((137, 72), (0, 58), (27, 0), (167, 11))], - [(95, 152), ((0, 8), (155, 0), (163, 107), (13, 112))], - [(108, 129), ((0, 6), (128, 0), (136, 113), (10, 117))], - [(84, 160), ((0, 6), (184, 0), (190, 90), (10, 97))], - ] - params2 = [ - ( - [(78, 158), ((0, 3), (86, 0), (97, 106), (16, 106))], - [(195, 156), ((0, 4), (82, 0), (85, 106), (15, 110))], - ), - ( - [(89, 156), ((0, 0), (80, 0), (94, 100), (14, 100))], - [(192, 151), ((0, 7), (79, 3), (82, 107), (11, 112))], - ), - ] - raw_frames = [BuildImage.open(img_dir / f"{i}.png") for i in range(8)] - for i in range(6): - pos, points = params1[i] - raw_frames[i].paste(img.perspective(points), pos, below=True) - for i in range(2): - (pos1, points1), (pos2, points2) = params2[i] - raw_frames[i + 6].paste(img_left.perspective(points1), pos1, below=True) - raw_frames[i + 6].paste(img_right.perspective(points2), pos2, below=True) - 
- new_frames: List[BuildImage] = [] - for i in range(3): - new_frames += raw_frames[0:3] - new_frames += raw_frames[3:] - new_frames.append(raw_frames[-1]) - - frames = [frame.image for frame in new_frames] - return save_gif(frames, 0.1) - - -add_meme( - "capoo_rip", - capoo_rip, - min_images=1, - max_images=1, - keywords=["咖波撕"], -) diff --git a/spaces/CofAI/chat.b4/g4f/Provider/__init__.py b/spaces/CofAI/chat.b4/g4f/Provider/__init__.py deleted file mode 100644 index 65f8cb1da5a0279a6639f1427e4bbc0664b6e1bb..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -from . import Provider -from .Providers import ( - Aichat, - Ails, - Bard, - Better, - Bing, - ChatgptAi, - ChatgptLogin, - ChatgptLogin, - DeepAi, - Easychat, - Ezcht, - Fakeopen, - Forefront, - GetGpt, - Gravityengine, - H2o, - hteyun, - Liaobots, - Lockchat, - Mishalsgpt, - Phind, - Theb, - Vercel, - Weuseing, - Xiaor, - Yqcloud, - You, - Zeabur -) - -Palm = Bard diff --git a/spaces/CofAI/picscore1/style.css b/spaces/CofAI/picscore1/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/CofAI/picscore1/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Cong723/gpt-academic-public/docs/self_analysis.md b/spaces/Cong723/gpt-academic-public/docs/self_analysis.md deleted file mode 100644 index c88e1e41217eb13a30269f933586f6c241fab38d..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/docs/self_analysis.md +++ /dev/null @@ -1,256 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - -## 对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -整体概括: - -该程序是一个基于自然语言处理和机器学习的科学论文辅助工具,主要功能包括聊天机器人、批量总结PDF文档、批量翻译PDF文档、生成函数注释、解析项目源代码等。程序基于 Gradio 构建 Web 服务,并集成了代理和自动更新功能,提高了用户的使用体验。 - -文件功能表格: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| 
crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - - - -## [0/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\check_proxy.py - -该文件主要包括四个函数:check_proxy、backup_and_download、patch_and_restart 和 auto_update。其中,check_proxy 函数用于检查代理是否可用;backup_and_download 用于进行一键更新备份和下载;patch_and_restart 是一键更新协议的重要函数,用于覆盖和重启;auto_update 函数用于查询版本和用户意见,并自动进行一键更新。该文件主要使用了 requests、json、shutil、zipfile、distutils、subprocess 等 Python 标准库和 toolbox 和 colorful 两个第三方库。 - -## [1/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\colorful.py - -该程序文件实现了一些打印文本的函数,使其具有不同的颜色输出。当系统为Linux时直接跳过,否则使用colorama库来实现颜色输出。程序提供了深色和亮色两种颜色输出方式,同时也提供了对打印函数的别名。对于不是终端输出的情况,对所有的打印函数进行重复定义,以便在重定向时能够避免打印错误日志。 - -## [2/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config.py - -该程序文件是一个配置文件,其主要功能是提供使用API密钥等信息,以及对程序的体验进行优化,例如定义对话框高度、布局等。还包含一些其他的设置,例如设置并行使用的线程数、重试次数限制等等。 - -## [3/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config_private.py - -这是一个名为config_private.py的Python文件,它用于配置API_KEY和代理信息。API_KEY是一个私密密钥,用于访问某些受保护的API。USE_PROXY变量设置为True以应用代理,proxies变量配置了代理网络的地址和协议。在使用该文件时,需要填写正确的API_KEY和代理信息。 - -## [4/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\core_functional.py - -该文件是一个Python模块,名为"core_functional.py"。模块中定义了一个字典,包含了各种核心功能的配置信息,如英语学术润色、中文学术润色、查找语法错误等。每个功能都包含一些前言和后语,在前言中描述了该功能的任务和要求,在后语中提供一些附加信息。此外,有些功能还定义了一些特定的处理函数和按钮颜色。 - -## [5/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functional.py - -这是一个Python程序文件,文件名是crazy_functional.py。它导入了一个名为HotReload的工具箱,并定义了一个名为get_crazy_functions()的函数。这个函数包括三个部分的插件组,分别是已经编写完成的第一组插件、已经测试但距离完美状态还差一点点的第二组插件和尚未充分测试的第三组插件。每个插件都有一个名称、一个按钮颜色、一个函数和一个是否加入下拉菜单中的标志位。这些插件提供了多种功能,包括生成函数注释、解析项目源代码、批量翻译PDF文档、谷歌检索、PDF文档内容理解和Latex文档的全文润色、翻译等功能。其中第三组插件可能还存在一定的bug。 - -## [6/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\main.py - -该Python脚本代码实现了一个用于交互式对话的Chatbot机器人。它使用了Gradio框架来构建一个Web界面,并在此基础之上嵌入了一个文本输入框和与Chatbot进行交互的其他控件,包括提交、重置、停止和清除按钮、选择框和滑块等。此外,它还包括了一些类和函数和一些用于编程分析的工具和方法。整个程序文件的结构清晰,注释丰富,并提供了很多技术细节,使得开发者可以很容易地在其基础上进行二次开发、修改、扩展和集成。 - -## [7/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\theme.py - -该程序文件名为theme.py,主要功能为调节Gradio的全局样式。在该文件中,调节了Gradio的主题颜色、字体、阴影、边框、渐变等等样式。同时,该文件还添加了一些高级CSS样式,比如调整表格单元格的背景和边框,设定聊天气泡的圆角、最大宽度和阴影等等。如果CODE_HIGHLIGHT为True,则还进行了代码高亮显示。 - -## [8/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\toolbox.py - -这是一个名为`toolbox.py`的源代码文件。该文件包含了一系列工具函数和装饰器,用于聊天Bot的开发和调试。其中有一些功能包括将输入参数进行重组、捕捉函数中的异常并记录到历史记录中、生成Markdown格式的聊天记录报告等。该文件中还包含了一些与转换Markdown文本相关的函数。 - -## [9/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\crazy_utils.py - -这是一个Python程序文件 `crazy_utils.py`,它包含了两个函数: - -- `input_clipping(inputs, history, max_token_limit)`:这个函数接收三个参数,inputs 是一个字符串,history 是一个列表,max_token_limit 是一个整数。它使用 `tiktoken` 、`numpy` 和 `toolbox` 模块,处理输入文本和历史记录,将其裁剪到指定的最大标记数,避免输入过长导致的性能问题。如果 inputs 长度不超过 max_token_limit 的一半,则只裁剪历史;否则,同时裁剪输入和历史。 -- `request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, llm_kwargs, chatbot, history, sys_prompt, refresh_interval=0.2, handle_token_exceed=True, retry_times_at_unknown_error=2)`:这个函数接收八个参数,其中后三个是列表类型,其他为标量或句柄等。它提供对话窗口和刷新控制,执行 `predict_no_ui_long_connection` 方法,将输入数据发送至 GPT 模型并获取结果,如果子任务出错,返回相应的错误信息,否则返回结果。 - -## [10/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文润色.py - 
-这是一个名为"crazy_functions\Latex全文润色.py"的程序文件,其中包含了两个函数"Latex英文润色"和"Latex中文润色",以及其他辅助函数。这些函数能够对 Latex 项目进行润色处理,其中 "多文件润色" 函数是一个主要函数,它调用了其他辅助函数用于读取和处理 Latex 项目中的文件。函数使用了多线程和机器学习模型进行自然语言处理,对文件进行简化和排版来满足学术标准。注释已删除并可以在函数内部查找。 - -## [11/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文翻译.py - -这个程序文件包括一个用于对整个Latex项目进行翻译的函数 `Latex英译中` 和一个用于将中文翻译为英文的函数 `Latex中译英`。这两个函数都会尝试导入依赖库 tiktoken, 若无法导入则会提示用户安装。`Latex英译中` 函数会对 Latex 项目中的文件进行分离并去除注释,然后运行多线程翻译。`Latex中译英` 也做同样的事情,只不过是将中文翻译为英文。这个程序文件还包括其他一些帮助函数。 - -## [12/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\__init__.py - -这是一个 Python 包,包名为 `crazy_functions`,在 `__init__.py` 文件中定义了一些函数,包含以下函数: - -- `crazy_addition(a, b)`:对两个数进行加法运算,并将结果返回。 -- `crazy_multiplication(a, b)`:对两个数进行乘法运算,并将结果返回。 -- `crazy_subtraction(a, b)`:对两个数进行减法运算,并将结果返回。 -- `crazy_division(a, b)`:对两个数进行除法运算,并将结果返回。 -- `crazy_factorial(n)`:计算 `n` 的阶乘并返回结果。 - -这些函数可能会有一些奇怪或者不符合常规的实现方式(由函数名可以看出来),所以这个包的名称为 `crazy_functions`,可能是暗示这些函数会有一些“疯狂”的实现方式。 - -## [13/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\下载arxiv论文翻译摘要.py - -该程序实现了一个名为“下载arxiv论文并翻译摘要”的函数插件,作者是“binary-husky”。该函数的功能是,在输入一篇arxiv论文的链接后,提取摘要、下载PDF文档、翻译摘要为中文,并将翻译结果保存到文件中。程序使用了一些Python库,如requests、pdfminer和beautifulsoup4等。程序入口是名为“下载arxiv论文并翻译摘要”的函数,其中使用了自定义的辅助函数download_arxiv_和get_name。程序中还使用了其他非函数的辅助函数和变量,如update_ui、CatchException、report_exception和get_conf等。 - -## [14/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\代码重写为全英文_多线程.py - -该文件是一个多线程Python脚本,包含多个函数和利用第三方库进行的API请求。主要功能是将给定文件夹内的Python代码文件中所有中文转化为英文,然后输出转化后的英文代码。重要的功能和步骤包括: - -1. 清空历史,以免输入溢出 -2. 尝试导入依赖,如果缺少依赖,则给出安装建议 -3. 集合文件 -4. 显示随意内容以防卡顿的感觉 -5. Token限制下的截断与处理 -6. 多线程操作请求转换中文变为英文的代码 -7. 所有线程同时开始执行任务函数 -8. 循环轮询各个线程是否执行完毕 -9. 把结果写入文件 -10. 备份一个文件 - -## [15/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\总结word文档.py - -这是一个名为"总结word文档.py"的程序文件,使用python编写。该文件导入了"toolbox"和"crazy_utils"模块,实现了解析docx格式和doc格式的文件的功能。该文件包含了一个名为"解析docx"的函数,通过对文件内容应用自然语言处理技术,生成文章片段的中英文概述。具体实现过程中,该函数使用了"docx"模块和"win32com.client"模块来实现对docx和doc格式文件的解析,同时使用了"request_gpt_model_in_new_thread_with_ui_alive"函数来向GPT模型发起请求。最后,该文件还实现了一个名为"总结word文档"的函数来批量总结Word文档。 - -## [16/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量Markdown翻译.py - -这个程序文件实现了一个批量Markdown翻译功能,可以将一个源代码项目中的Markdown文本翻译成指定语言(目前支持中<-英和英<-中)。程序主要分为三个函数,`PaperFileGroup`类用于处理长文本的拆分,`多文件翻译`是主要函数调用了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`函数进行多线程翻译并输出结果,`Markdown英译中`和`Markdown中译外`分别是英译中和中译英的入口函数,用于解析项目路径和调用翻译函数。程序依赖于tiktoken等库实现。 - -## [17/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档.py - -这是一个名为“批量总结PDF文档”的Python脚本,包含了多个函数。其中有一个函数名为“clean_text”,可以对PDF提取出的原始文本进行清洗和格式化处理,将连字转换为其基本形式,并根据heuristic规则判断换行符是否是段落分隔,并相应地进行替换。另一个函数名为“解析PDF”,可以接收一个PDF文件清单,并对清单中的每一个PDF进行解析,提取出文本并调用“clean_text”函数进行清洗和格式化处理,然后向用户发送一个包含文章简介信息的问题并等待用户回答。最后,该脚本也包含一个名为“批量总结PDF文档”的主函数,其中调用了“解析PDF”函数来完成对PDF文件的批量处理。 - -## [18/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档pdfminer.py - -这个文件是一个Python模块,文件名为pdfminer.py,它定义了一个函数批量总结PDF文档。该函数接受一些参数,然后尝试导入pdfminer和beautifulsoup4库。该函数将读取pdf文件或tex文件中的内容,对其进行分析,并使用GPT模型进行自然语言摘要。文件中还有一个辅助函数readPdf,用于读取pdf文件中的内容。 - -## [19/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量翻译PDF文档_多线程.py - 
-这是一个Python脚本,文件名是crazy_functions\批量翻译PDF文档_多线程.py。该脚本提供了一个名为“批量翻译PDF文档”的函数,可以批量翻译PDF文件并生成报告文件。该函数使用了多个模块和函数(如toolbox、crazy_utils、update_ui等),使用了Python的异常处理和多线程功能,还使用了一些文本处理函数和第三方库(如fitz和tiktoken)。在函数执行过程中,它会进行一些参数检查、读取和清理PDF文本、递归地切割PDF文件、获取文章meta信息、多线程翻译、整理报告格式等操作,并更新UI界面和生成报告文件。 - -## [20/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\理解PDF文档内容.py - -这是一个解析PDF文件内容的Python程序,程序文件名为"理解PDF文档内容.py",程序主要由5个步骤组成:第0步是切割PDF文件;第1步是从摘要中提取高价值信息,放到history中;第2步是迭代地历遍整个文章,提取精炼信息;第3步是整理history;第4步是设置一个token上限,防止回答时Token溢出。程序主要用到了Python中的各种模块和函数库,如:toolbox, tiktoken, pymupdf等。 - -## [21/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\生成函数注释.py - -这是一个名为"生成函数注释"的函数,带有一个装饰器"@CatchException",可以捕获异常。该函数接受文件路径、参数和聊天机器人等参数,用于对多个Python或C++文件进行函数注释,使用了"toolbox"和"crazy_utils"模块中的函数。该函数会逐个读取指定文件中的内容,并使用聊天机器人进行交互,向用户请求注释信息,然后将生成的注释与原文件内容一起输出到一个markdown表格中。最后,该函数返回一个字符串,指示任务是否已完成。另外还包含一个名为"批量生成函数注释"的函数,它与"生成函数注释"函数一起用于批量处理多个文件。 - -## [22/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\解析项目源代码.py - -这个程序文件实现了对一个源代码项目进行分析的功能。其中,函数`解析项目本身`、`解析一个Python项目`、`解析一个C项目的头文件`、`解析一个C项目`、`解析一个Java项目`和`解析前端项目`分别用于解析不同类型的项目。函数`解析源代码新`实现了对每一个源代码文件的分析,并将分析结果汇总,同时还实现了分组和迭代处理,提高了效率。最后,函数`write_results_to_file`将所有分析结果写入文件。中间,还用到了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`和`request_gpt_model_in_new_thread_with_ui_alive`来完成请求和响应,并用`update_ui`实时更新界面。 - -## [23/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\询问多个大语言模型.py - -这是一个Python程序,文件名为"crazy_functions\询问多个大语言模型.py"。该程序实现了一个同时向多个大语言模型询问的功能,接收用户输入文本以及模型参数,向ChatGPT和ChatGLM模型发出请求,并将对话记录显示在聊天框中,同时刷新界面。 - -## [24/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\读文章写摘要.py - -该程序文件是一个Python模块,文件名为"读文章写摘要.py",主要包含两个函数:"解析Paper"和"读文章写摘要"。其中,"解析Paper"函数接受文件路径、参数等参数,逐个打印文件内容并使用GPT模型生成对该文件的摘要;"读文章写摘要"函数则接受一段文本内容和参数,将该文本内容及其所有.tex文件逐个传递给"解析Paper"函数进行处理,并使用GPT模型生成文章的中英文摘要。文件还导入了一些工具函数,如异常处理、信息上报和文件写入等。 - -## [25/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\谷歌检索小助手.py - -该文件代码包含了一个名为`get_meta_information`的函数和一个名为`谷歌检索小助手`的装饰器函数,用于从谷歌学术中抓取文章元信息,并从用户提供的搜索页面中分析所有文章的相关信息。该文件使用了许多第三方库,如requests、arxiv、BeautifulSoup等。其中`get_meta_information`函数中还定义了一个名为`string_similar`的辅助函数,用于比较字符串相似度。 - -## [26/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\高级功能函数模板.py - -该程序文件是一个 Python 模块,包含一个名为“高阶功能模板函数”的函数。该函数接受多个参数,其中包括输入文本、GPT 模型参数、插件模型参数、聊天显示框、聊天历史等。 该函数的主要功能是根据输入文本,使用 GPT 模型生成一些问题,并等待用户回答这些问题(使用 Markdown 格式),然后将用户回答加入到聊天历史中,并更新聊天显示框。该函数还包含了一些异常处理和多线程的相关操作。该程序文件还引用了另一个 Python 模块中的两个函数,分别为“CatchException”和“update_ui”,并且还引用了一个名为“request_gpt_model_in_new_thread_with_ui_alive”的自定义函数。 - -## [27/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_all.py - -这个文件是用来处理与LLM的交互的。包含两个函数,一个是 predict_no_ui_long_connection 用来处理长文本的输出,可以多线程调用;另一个是 predict 用来处理基础的对话功能。这个文件会导入其他文件中定义的方法进行调用,具体调用哪个方法取决于传入的参数。函数中还有一些装饰器和管理多线程的逻辑。 - -## [28/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatglm.py - -这个程序文件实现了一个使用ChatGLM模型进行聊天的功能。具体实现过程是:首先进行初始化,然后使用GetGLMHandle类进行ChatGLM模型的加载和运行。predict_no_ui_long_connection函数用于多线程聊天,而predict函数用于单线程聊天,它们的不同之处在于前者不会更新UI界面,后者会。这个文件还导入了其他模块和库,例如transformers、time、importlib等,并使用了多进程Pipe。 - -## [29/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatgpt.py - 
-这个程序文件是用于对话生成的,主要包含三个函数:predict、predict_no_ui、predict_no_ui_long_connection。其中,predict是用于普通对话的函数,具备完备的交互功能,但不具备多线程能力;predict_no_ui是高级实验性功能模块调用的函数,参数简单,可以多线程并行,方便实现复杂的功能逻辑;predict_no_ui_long_connection解决了predict_no_ui在处理长文档时容易断开连接的问题,同样支持多线程。程序中还包含一些常量和工具函数,用于整合信息,选择LLM模型,生成http请求,发送请求,接收响应等。它需要配置一个config文件,包含代理网址、API等敏感信息。 - -## [30/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_tgui.py - -该程序文件实现了一个基于Websockets的文本生成服务和对话功能。其中,有三个函数:`run()`、`predict()`和`predict_no_ui_long_connection()`。`run()`函数用于连接到Websocket服务并生成文本结果;`predict()`函数用于将用户输入作为文本生成的输入,同时在UI上显示对话历史记录,并在不断更新UI的过程中不断更新生成的文本输出;`predict_no_ui_long_connection()`函数与`predict()`函数类似,但没有UI,并在一段时间内返回单个生成的文本。整个程序还引入了多个Python模块来完成相关功能,例如`asyncio`、`websockets`、`json`等等。 - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py)。 - -程序功能概括:该程序是一个聊天机器人,可以通过 Web 界面与用户进行交互。它包含了丰富的功能,如文本润色、翻译、代码重写、在线查找等,并且支持多线程处理。用户可以通过 Gradio 框架提供的 Web 界面进行交互,程序还提供了一些调试工具,如toolbox 模块,方便程序开发和调试。 - -下表概述了每个文件的功能: - -| 文件名 | 功能 | -| ----------------------------------------------------------- | ------------------------------------------------------------ | -| check_proxy.py | 检查代理是否可用 | -| colorful.py | 用于打印文本的字体颜色输出模块 | -| config.py | 用于程序中的各种设置,如并行线程数量和重试次数的限制等 | -| config_private.py | 配置API_KEY和代理信息的文件 | -| core_functional.py | 包含具体的文本处理功能的模块 | -| crazy_functional.py | 包括各种插件函数的模块,提供了多种文本处理功能 | -| main.py | 包含 Chatbot 机器人主程序的模块 | -| theme.py | 用于调节全局样式的模块 | -| toolbox.py | 包含工具函数和装饰器,用于聊天Bot的开发和调试 | -| crazy_functions\crazy_utils.py | 包含一些辅助函数,如文本裁剪和消息捕捉等 | -| crazy_functions\Latex全文润色.py | 对 Latex 项目进行润色处理的功能模块 | -| crazy_functions\Latex全文翻译.py | 对 Latex 项目进行翻译的功能模块 | -| crazy_functions\__init__.py | 定义一些奇特的数学函数等 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 Arxiv 论文并翻译摘要的功能模块 | -| crazy_functions\代码重写为全英文_多线程.py | 将Python程序中所有中文转化为英文的功能模块 | -| crazy_functions\总结word文档.py | 解析 docx 和 doc 格式的文件,生成文章片段的中英文概述的功能模块 | - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py, crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_tgui.py)。 - -根据以上分析,整个程序是一个集成了多个有用工具和功能的文本处理和生成工具,提供了多种在不同场景下使用的功能,包括但不限于对话生成、文本摘要、PDF文件批量处理、代码翻译和实用工具等。主要的Python模块包括"toolbox.py"、"config.py"、"core_functional.py"和"crazy_functional.py"等,并且还使用了许多第三方库和模块实现相关功能。以下是每个程序文件的功能: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 
包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. - It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. 
-    """
-
-    def __init__(self, opt):
-        super(RealESRGANDataset, self).__init__()
-        self.opt = opt
-        self.file_client = None
-        self.io_backend_opt = opt['io_backend']
-        self.gt_folder = opt['dataroot_gt']
-
-        # file client (lmdb io backend)
-        if self.io_backend_opt['type'] == 'lmdb':
-            self.io_backend_opt['db_paths'] = [self.gt_folder]
-            self.io_backend_opt['client_keys'] = ['gt']
-            if not self.gt_folder.endswith('.lmdb'):
-                raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
-            with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
-                self.paths = [line.split('.')[0] for line in fin]
-        else:
-            # disk backend with meta_info
-            # Each line in the meta_info describes the relative path to an image
-            with open(self.opt['meta_info']) as fin:
-                paths = [line.strip().split(' ')[0] for line in fin]
-            self.paths = [os.path.join(self.gt_folder, v) for v in paths]
-
-        # blur settings for the first degradation
-        self.blur_kernel_size = opt['blur_kernel_size']
-        self.kernel_list = opt['kernel_list']
-        self.kernel_prob = opt['kernel_prob']  # a list for each kernel probability
-        self.blur_sigma = opt['blur_sigma']
-        self.betag_range = opt['betag_range']  # betag used in generalized Gaussian blur kernels
-        self.betap_range = opt['betap_range']  # betap used in plateau blur kernels
-        self.sinc_prob = opt['sinc_prob']  # the probability for sinc filters
-
-        # blur settings for the second degradation
-        self.blur_kernel_size2 = opt['blur_kernel_size2']
-        self.kernel_list2 = opt['kernel_list2']
-        self.kernel_prob2 = opt['kernel_prob2']
-        self.blur_sigma2 = opt['blur_sigma2']
-        self.betag_range2 = opt['betag_range2']
-        self.betap_range2 = opt['betap_range2']
-        self.sinc_prob2 = opt['sinc_prob2']
-
-        # a final sinc filter
-        self.final_sinc_prob = opt['final_sinc_prob']
-
-        self.kernel_range = [2 * v + 1 for v in range(3, 11)]  # kernel size ranges from 7 to 21
-        # TODO: kernel range is now hard-coded, should be in the configure file
-        self.pulse_tensor = torch.zeros(21, 21).float()  # convolving with pulse tensor brings no blurry effect
-        self.pulse_tensor[10, 10] = 1
-
-    def __getitem__(self, index):
-        if self.file_client is None:
-            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
-        # -------------------------------- Load gt images -------------------------------- #
-        # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
-        gt_path = self.paths[index]
-        # avoid errors caused by high latency in reading files
-        retry = 3
-        while retry > 0:
-            try:
-                img_bytes = self.file_client.get(gt_path, 'gt')
-            except (IOError, OSError) as e:
-                logger = get_root_logger()
-                logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}')
-                # pick another file to read instead
-                index = random.randint(0, self.__len__() - 1)
-                gt_path = self.paths[index]
-                time.sleep(1)  # sleep 1s for occasional server congestion
-            else:
-                break
-            finally:
-                retry -= 1
-        img_gt = imfrombytes(img_bytes, float32=True)
-
-        # -------------------- Do augmentation for training: flip, rotation -------------------- #
-        img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
-        # crop or pad to 400
-        # TODO: 400 is hard-coded.
You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...] - - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob']: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob2']: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt['final_sinc_prob']: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py deleted file mode 100644 index c41f8aee0044799050dbcd2d7a01a7726511fae4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py +++ /dev/null @@ -1,284 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# PDF (Acrobat) file handling -# -# History: -# 1996-07-16 fl Created -# 1997-01-18 fl Fixed header -# 2004-02-21 fl Fixes for 1/L/CMYK images, etc. -# 2004-02-24 fl Fixes for 1 and P images. -# -# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved. -# Copyright (c) 1996-1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## -# Image plugin for PDF images (output only). -## - -import io -import math -import os -import time - -from . import Image, ImageFile, ImageSequence, PdfParser, __version__, features - -# -# -------------------------------------------------------------------- - -# object ids: -# 1. catalogue -# 2. pages -# 3. image -# 4. page -# 5. page contents - - -def _save_all(im, fp, filename): - _save(im, fp, filename, save_all=True) - - -## -# (Internal) Image save plugin for the PDF format. - - -def _save(im, fp, filename, save_all=False): - is_appending = im.encoderinfo.get("append", False) - if is_appending: - existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="r+b") - else: - existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="w+b") - - dpi = im.encoderinfo.get("dpi") - if dpi: - x_resolution = dpi[0] - y_resolution = dpi[1] - else: - x_resolution = y_resolution = im.encoderinfo.get("resolution", 72.0) - - info = { - "title": None - if is_appending - else os.path.splitext(os.path.basename(filename))[0], - "author": None, - "subject": None, - "keywords": None, - "creator": None, - "producer": None, - "creationDate": None if is_appending else time.gmtime(), - "modDate": None if is_appending else time.gmtime(), - } - for k, default in info.items(): - v = im.encoderinfo.get(k) if k in im.encoderinfo else default - if v: - existing_pdf.info[k[0].upper() + k[1:]] = v - - # - # make sure image data is available - im.load() - - existing_pdf.start_writing() - existing_pdf.write_header() - existing_pdf.write_comment(f"created by Pillow {__version__} PDF driver") - - # - # pages - ims = [im] - if save_all: - append_images = im.encoderinfo.get("append_images", []) - for append_im in append_images: - append_im.encoderinfo = im.encoderinfo.copy() - ims.append(append_im) - number_of_pages = 0 - image_refs = [] - page_refs = [] - contents_refs = [] - for im in ims: - im_number_of_pages = 1 - if save_all: - try: - im_number_of_pages = im.n_frames - except AttributeError: - # Image format does not have n_frames. - # It is a single frame image - pass - number_of_pages += im_number_of_pages - for i in range(im_number_of_pages): - image_refs.append(existing_pdf.next_object_id(0)) - page_refs.append(existing_pdf.next_object_id(0)) - contents_refs.append(existing_pdf.next_object_id(0)) - existing_pdf.pages.append(page_refs[-1]) - - # - # catalog and list of pages - existing_pdf.write_catalog() - - page_number = 0 - for im_sequence in ims: - im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence] - for im in im_pages: - # FIXME: Should replace ASCIIHexDecode with RunLengthDecode - # (packbits) or LZWDecode (tiff/lzw compression). Note that - # PDF 1.2 also supports Flatedecode (zip compression). 
- - bits = 8 - params = None - decode = None - - # - # Get image characteristics - - width, height = im.size - - if im.mode == "1": - if features.check("libtiff"): - filter = "CCITTFaxDecode" - bits = 1 - params = PdfParser.PdfArray( - [ - PdfParser.PdfDict( - { - "K": -1, - "BlackIs1": True, - "Columns": width, - "Rows": height, - } - ) - ] - ) - else: - filter = "DCTDecode" - colorspace = PdfParser.PdfName("DeviceGray") - procset = "ImageB" # grayscale - elif im.mode == "L": - filter = "DCTDecode" - # params = f"<< /Predictor 15 /Columns {width-2} >>" - colorspace = PdfParser.PdfName("DeviceGray") - procset = "ImageB" # grayscale - elif im.mode == "P": - filter = "ASCIIHexDecode" - palette = im.getpalette() - colorspace = [ - PdfParser.PdfName("Indexed"), - PdfParser.PdfName("DeviceRGB"), - 255, - PdfParser.PdfBinary(palette), - ] - procset = "ImageI" # indexed color - elif im.mode == "RGB": - filter = "DCTDecode" - colorspace = PdfParser.PdfName("DeviceRGB") - procset = "ImageC" # color images - elif im.mode == "RGBA": - filter = "JPXDecode" - colorspace = PdfParser.PdfName("DeviceRGB") - procset = "ImageC" # color images - elif im.mode == "CMYK": - filter = "DCTDecode" - colorspace = PdfParser.PdfName("DeviceCMYK") - procset = "ImageC" # color images - decode = [1, 0, 1, 0, 1, 0, 1, 0] - else: - msg = f"cannot save mode {im.mode}" - raise ValueError(msg) - - # - # image - - op = io.BytesIO() - - if filter == "ASCIIHexDecode": - ImageFile._save(im, op, [("hex", (0, 0) + im.size, 0, im.mode)]) - elif filter == "CCITTFaxDecode": - im.save( - op, - "TIFF", - compression="group4", - # use a single strip - strip_size=math.ceil(im.width / 8) * im.height, - ) - elif filter == "DCTDecode": - Image.SAVE["JPEG"](im, op, filename) - elif filter == "JPXDecode": - Image.SAVE["JPEG2000"](im, op, filename) - elif filter == "FlateDecode": - ImageFile._save(im, op, [("zip", (0, 0) + im.size, 0, im.mode)]) - elif filter == "RunLengthDecode": - ImageFile._save(im, op, [("packbits", (0, 0) + im.size, 0, im.mode)]) - else: - msg = f"unsupported PDF filter ({filter})" - raise ValueError(msg) - - stream = op.getvalue() - if filter == "CCITTFaxDecode": - stream = stream[8:] - filter = PdfParser.PdfArray([PdfParser.PdfName(filter)]) - else: - filter = PdfParser.PdfName(filter) - - existing_pdf.write_obj( - image_refs[page_number], - stream=stream, - Type=PdfParser.PdfName("XObject"), - Subtype=PdfParser.PdfName("Image"), - Width=width, # * 72.0 / x_resolution, - Height=height, # * 72.0 / y_resolution, - Filter=filter, - BitsPerComponent=bits, - Decode=decode, - DecodeParms=params, - ColorSpace=colorspace, - ) - - # - # page - - existing_pdf.write_page( - page_refs[page_number], - Resources=PdfParser.PdfDict( - ProcSet=[PdfParser.PdfName("PDF"), PdfParser.PdfName(procset)], - XObject=PdfParser.PdfDict(image=image_refs[page_number]), - ), - MediaBox=[ - 0, - 0, - width * 72.0 / x_resolution, - height * 72.0 / y_resolution, - ], - Contents=contents_refs[page_number], - ) - - # - # page contents - - page_contents = b"q %f 0 0 %f 0 0 cm /image Do Q\n" % ( - width * 72.0 / x_resolution, - height * 72.0 / y_resolution, - ) - - existing_pdf.write_obj(contents_refs[page_number], stream=page_contents) - - page_number += 1 - - # - # trailer - existing_pdf.write_xref_and_trailer() - if hasattr(fp, "flush"): - fp.flush() - existing_pdf.close() - - -# -# -------------------------------------------------------------------- - - -Image.register_save("PDF", _save) -Image.register_save_all("PDF", _save_all) - 
-Image.register_extension("PDF", ".pdf") - -Image.register_mime("PDF", "application/pdf") diff --git a/spaces/Datasculptor/MusicGen/CONTRIBUTING.md b/spaces/Datasculptor/MusicGen/CONTRIBUTING.md deleted file mode 100644 index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to Audiocraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -Audiocraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts b/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts deleted file mode 100644 index 7c0739bf8bebdaf16aa4acf610eb6bdad9c15fd2..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { RenderedScene } from "@/types" - -export const getInitialRenderedScene = (): RenderedScene => ({ - renderId: "", - status: "pending", - assetUrl: "", - alt: "", - error: "", - maskUrl: "", - segments: [] -}) \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts b/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts deleted file mode 100644 index cee490fc1a0b19b2192ce86d6c8f9867a3a6a6d9..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts +++ /dev/null @@ -1,37 +0,0 @@ -export function replaceWhiteWithTransparent(imageBase64: string): Promise { - return new Promise((resolve, reject) => { - const img = new Image(); - img.onload = () => { - const canvas = document.createElement('canvas'); - canvas.width = img.width; - canvas.height = img.height; - - const ctx = canvas.getContext('2d'); - if (!ctx) { - reject('Unable to get canvas 2D context'); - return; - } - - ctx.drawImage(img, 0, 0); - - const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height); - const data = imageData.data; - - for (let i = 0; i < data.length; i += 4) { - if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) { - data[i + 3] = 0; - } - } - - ctx.putImageData(imageData, 0, 0); - - resolve(canvas.toDataURL()); - }; - - img.onerror = (err) => 
{ - reject(err); - }; - - img.src = imageBase64; - }); -} \ No newline at end of file diff --git a/spaces/Deva123d/WaveFormBot/README.md b/spaces/Deva123d/WaveFormBot/README.md deleted file mode 100644 index 0687f20c106f7d1473f100054be9770266f92c43..0000000000000000000000000000000000000000 --- a/spaces/Deva123d/WaveFormBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WaveFormBot -emoji: 👁 -colorFrom: yellow -colorTo: gray -sdk: streamlit -sdk_version: 1.24.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DinoPiteko/youtube-whisper-04/app.py b/spaces/DinoPiteko/youtube-whisper-04/app.py deleted file mode 100644 index 4a61dc561a016c53ad93a3c556b0ef7bafa964eb..0000000000000000000000000000000000000000 --- a/spaces/DinoPiteko/youtube-whisper-04/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube - -def get_audio(url): - yt = YouTube(url) - return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - -def get_transcript(url, model_size, lang, format): - - model = whisper.load_model(model_size) - - if lang == "None": - lang = None - - result = model.transcribe(get_audio(url), fp16=False, language=lang) - - if format == "None": - return result["text"] - elif format == ".srt": - return format_to_srt(result["segments"]) - -def format_to_srt(segments): - output = "" - for i, segment in enumerate(segments): - output += f"{i + 1}\n" - output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - output += f"{segment['text']}\n\n" - return output - -def format_timestamp(t): - hh = t//3600 - mm = (t - hh*3600)//60 - ss = t - hh*3600 - mm*60 - mi = (t - int(t))*1000 - return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}" - - -langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values())) -model_size = list(whisper._MODELS.keys()) - -with gr.Blocks() as demo: - - with gr.Row(): - - with gr.Column(): - - with gr.Row(): - url = gr.Textbox(placeholder='Youtube video URL', label='URL') - - with gr.Row(): - - model_size = gr.Dropdown(choices=model_size, value='tiny', label="Model") - lang = gr.Dropdown(choices=langs, value="None", label="Language (Optional)") - format = gr.Dropdown(choices=["None", ".srt"], value="None", label="Timestamps? (Optional)") - - with gr.Row(): - gr.Markdown("Larger models are more accurate, but slower. 
For 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.") - transcribe_btn = gr.Button('Transcribe') - - with gr.Column(): - outputs = gr.Textbox(placeholder='Transcription of the video', label='Transcription') - - transcribe_btn.click(get_transcript, inputs=[url, model_size, lang, format], outputs=outputs) - -demo.launch(debug=True) diff --git a/spaces/Dogge/bigscience-bloomz-7b1/README.md b/spaces/Dogge/bigscience-bloomz-7b1/README.md deleted file mode 100644 index 39845e739f40bb253d328974b8440fccea7ccb0e..0000000000000000000000000000000000000000 --- a/spaces/Dogge/bigscience-bloomz-7b1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bigscience Bloomz 7b1 -emoji: 🏢 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false -license: bigscience-bloom-rail-1.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py b/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py deleted file mode 100644 index 76725336d01e75e1c68daa88be47f4fde0bbc63b..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => 
crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# ------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - 
return False - - -# Functionality to import modules/objects by name, and call functions by name -# ------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. - Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? - for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if module == '__main__': - module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0] - return module + "." 
+ obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. - Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. - Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. 
- if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. - if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." % url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError("Google Drive download quota exceeded -- please try again later") - - match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. - if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. - assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md b/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md deleted file mode 100644 index ef6c56cedab89f3ab09306826240b075af244899..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md +++ /dev/null @@ -1,74 +0,0 @@ -# SHHQ Dataset - - -## Overview -SHHQ is a dataset with high-quality full-body human images in a resolution of 1024 × 512. -Since we need to follow a rigorous legal review in our institute, we can not release all of the data at once. - -For now, SHHQ-1.0 with 40K images is released! More data will be released in the later versions. - - -## Data Sources -Images are collected in two main ways: -1) From the Internet. -We developed a crawler tool with an official API, mainly downloading images from Flickr, Pixabay and Pexels. So you need to meet all the following licenses when using the dataset: CC0, [Pixabay License](https://pixabay.com/service/license/), and [Pexels Licenses](https://www.pexels.com/license/). -2) From the data providers. -We purchased images from databases of individual photographers, modeling agencies and other suppliers. 
-Images were reviewed by our legal team prior to purchase to ensure permission for use in research.
-
-### Note:
-The composition of SHHQ-1.0:
-
-1) Images obtained from the above sources.
-2) 9991 processed DeepFashion [[1]](#1) images (only full-body images are retained).
-3) 1940 African images from the InFashAI [[2]](#2) dataset, added to increase data diversity.
-
-## Data License
-We are aware of privacy concerns and take licensing and privacy issues seriously. All released data is provided under the CC0 license and is free for research use. Persons in the dataset are anonymised, with no additional private or sensitive metadata.
-
-## Agreement
-SHHQ is available for non-commercial research purposes only.
-
-You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit any portion of the images, or any portion of the derived data, for commercial purposes.
-
-You agree NOT to further copy, publish or distribute any portion of SHHQ to any third party for any purpose. As an exception, copies of the dataset may be made for internal use at a single site within the same organization.
-
-Shanghai AI Lab reserves the right to terminate your access to SHHQ at any time.
-
-## Dataset Preview
-For those interested in our dataset, we provide a preview version with 100 images randomly sampled from SHHQ-1.0: [SHHQ-1.0_samples](https://drive.google.com/file/d/1tnNFfmFtzRbYL3qEnNXQ_ShaN9YV5tI5/view?usp=sharing).
-
-SHHQ-1.0 contains aligned raw images along with machine-calculated segmentation masks. We plan to release a manually annotated human-parsing version of these 40,000 images later. Please stay tuned.
-
-> We also provide the script [bg_white.py](../bg_white.py), which whitens the background of a raw image using its segmentation mask.
-
-If you want to access the full SHHQ-1.0, please read the following instructions.
-
-## Models trained using SHHQ-1.0
-
-| Structure | 1024x512 | Metric | Scores | 512x256 | Metric | Scores |
-| --------- | :----------: | :----------: | :----------: | :-----: | :-----: | :-----: |
-| StyleGAN1 | to be released | - | - | to be released | - | - |
-| StyleGAN2 | [SHHQ-1.0_sg2_1024.pkl](https://drive.google.com/file/d/1PuvE72xpc69Zq4y58dohuKbG9dFnnjEX/view?usp=sharing) | fid50k_full | 3.56 | [SHHQ-1.0_sg2_512.pkl](https://drive.google.com/file/d/170t2FRWxR8_TG3_y0nVtDBogLPOClnyf/view?usp=sharing) | fid50k_full | 3.68 |
-| StyleGAN3 | to be released | - | - | to be released | - | - |
-
-## Download Instructions
-Please download the SHHQ Dataset Release Agreement from [link](./SHHQ_Dataset_Release_Agreement.pdf).
-Read it carefully, then complete and sign it.
-
-Please send the completed form to Jianglin Fu (arlenefu@outlook.com) and Shikai Li (lishikai@pjlab.org.cn), and cc Wayne Wu (wuwenyan0503@gmail.com), using an institutional email address. The email subject should be "SHHQ Dataset Release Agreement". We will verify your request and contact you with the dataset link and the password to unzip the image data.
-
-Note:
-
-1. We are currently receiving a large number of applications and need to verify every applicant carefully. Please be patient; we will reply as soon as possible.
-
-2. The signature in the agreement should be hand-written.
-
-## References
-[1]
-Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. CVPR (2016)
-
-[2]
-Hacheme, Gilles and Sayouti, Noureini.
Neural fashion image captioning: Accounting for data diversity. arXiv preprint arXiv:2106.12154 (2021) - diff --git a/spaces/DragGan/DragGan/gui_utils/imgui_window.py b/spaces/DragGan/DragGan/gui_utils/imgui_window.py deleted file mode 100644 index 30d539a1382def526050c83978d1118348ac77ad..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/gui_utils/imgui_window.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import imgui -import imgui.integrations.glfw - -from . import glfw_window -from . import imgui_utils -from . import text_utils - -#---------------------------------------------------------------------------- - -class ImguiWindow(glfw_window.GlfwWindow): - def __init__(self, *, title='ImguiWindow', font=None, font_sizes=range(14,24), **glfw_kwargs): - if font is None: - font = text_utils.get_default_font() - font_sizes = {int(size) for size in font_sizes} - super().__init__(title=title, **glfw_kwargs) - - # Init fields. - self._imgui_context = None - self._imgui_renderer = None - self._imgui_fonts = None - self._cur_font_size = max(font_sizes) - - # Delete leftover imgui.ini to avoid unexpected behavior. - if os.path.isfile('imgui.ini'): - os.remove('imgui.ini') - - # Init ImGui. - self._imgui_context = imgui.create_context() - self._imgui_renderer = _GlfwRenderer(self._glfw_window) - self._attach_glfw_callbacks() - imgui.get_io().ini_saving_rate = 0 # Disable creating imgui.ini at runtime. - imgui.get_io().mouse_drag_threshold = 0 # Improve behavior with imgui_utils.drag_custom(). - self._imgui_fonts = {size: imgui.get_io().fonts.add_font_from_file_ttf(font, size) for size in font_sizes} - self._imgui_renderer.refresh_font_texture() - - def close(self): - self.make_context_current() - self._imgui_fonts = None - if self._imgui_renderer is not None: - self._imgui_renderer.shutdown() - self._imgui_renderer = None - if self._imgui_context is not None: - #imgui.destroy_context(self._imgui_context) # Commented out to avoid creating imgui.ini at the end. - self._imgui_context = None - super().close() - - def _glfw_key_callback(self, *args): - super()._glfw_key_callback(*args) - self._imgui_renderer.keyboard_callback(*args) - - @property - def font_size(self): - return self._cur_font_size - - @property - def spacing(self): - return round(self._cur_font_size * 0.4) - - def set_font_size(self, target): # Applied on next frame. - self._cur_font_size = min((abs(key - target), key) for key in self._imgui_fonts.keys())[1] - - def begin_frame(self): - # Begin glfw frame. - super().begin_frame() - - # Process imgui events. - self._imgui_renderer.mouse_wheel_multiplier = self._cur_font_size / 10 - if self.content_width > 0 and self.content_height > 0: - self._imgui_renderer.process_inputs() - - # Begin imgui frame. 
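# NOTE (editor): the three calls below start a new Dear ImGui frame, activate the font atlas
# entry baked for the current font size, and rescale the default widget spacing, indentation,
# and scrollbar width with that size, so the whole UI follows set_font_size().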
- imgui.new_frame() - imgui.push_font(self._imgui_fonts[self._cur_font_size]) - imgui_utils.set_default_style(spacing=self.spacing, indent=self.font_size, scrollbar=self.font_size+4) - - def end_frame(self): - imgui.pop_font() - imgui.render() - imgui.end_frame() - self._imgui_renderer.render(imgui.get_draw_data()) - super().end_frame() - -#---------------------------------------------------------------------------- -# Wrapper class for GlfwRenderer to fix a mouse wheel bug on Linux. - -class _GlfwRenderer(imgui.integrations.glfw.GlfwRenderer): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.mouse_wheel_multiplier = 1 - - def scroll_callback(self, window, x_offset, y_offset): - self.io.mouse_wheel += y_offset * self.mouse_wheel_multiplier - -#---------------------------------------------------------------------------- diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py deleted file mode 100644 index 85da8a41461e20170cc3f3afaff3f25be9f6b2d1..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py +++ /dev/null @@ -1,200 +0,0 @@ -from enum import Enum -import math -import numpy as np -import torch -from torch import nn -from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from pti.pti_models.e4e.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add -from pti.pti_models.e4e.stylegan2.model import EqualLinear - - -class ProgressiveStage(Enum): - WTraining = 0 - Delta1Training = 1 - Delta2Training = 2 - Delta3Training = 3 - Delta4Training = 4 - Delta5Training = 5 - Delta6Training = 6 - Delta7Training = 7 - Delta8Training = 8 - Delta9Training = 9 - Delta10Training = 10 - Delta11Training = 11 - Delta12Training = 12 - Delta13Training = 13 - Delta14Training = 14 - Delta15Training = 15 - Delta16Training = 16 - Delta17Training = 17 - Inference = 18 - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - 
self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - def forward(self, x): - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = _upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = _upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - return out - - -class Encoder4Editing(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(Encoder4Editing, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - self.progressive_stage = ProgressiveStage.Inference - - def get_deltas_starting_dimensions(self): - ''' Get a list of the initial dimension of every delta from which it is applied ''' - return list(range(self.style_count)) # Each dimension has a delta applied to it - - def set_progressive_stage(self, new_stage: ProgressiveStage): - self.progressive_stage = new_stage - print('Changed progressive stage to: ', new_stage) - - def forward(self, x): - x = self.input_layer(x) - - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - # Infer main W and duplicate it - w0 = self.styles[0](c3) - w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2) - stage = self.progressive_stage.value - features = c3 - for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas - if i == self.coarse_ind: - p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features - features = p2 - elif i == self.middle_ind: - p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features - features = p1 - delta_i = self.styles[i](features) - w[:, i] += delta_i - return w diff --git 
a/spaces/DragGan/DragGan/training/networks_stylegan3.py b/spaces/DragGan/DragGan/training/networks_stylegan3.py deleted file mode 100644 index e34bf87ee23a4e5612094062dd67d0a7f6de5e39..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/training/networks_stylegan3.py +++ /dev/null @@ -1,548 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Generator architecture from the paper -"Alias-Free Generative Adversarial Networks".""" - -import numpy as np -import scipy.signal -import scipy.optimize -import torch -import torch.nn.functional as F -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import filtered_lrelu -from torch_utils.ops import bias_act - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def modulated_conv2d( - x, # Input tensor: [batch_size, in_channels, in_height, in_width] - w, # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width] - s, # Style tensor: [batch_size, in_channels] - demodulate = True, # Apply weight demodulation? - padding = 0, # Padding: int or [padH, padW] - input_gain = None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels] -): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(x.shape[0]) - out_channels, in_channels, kh, kw = w.shape - misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(s, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs. - if demodulate: - w = w * w.square().mean([1,2,3], keepdim=True).rsqrt() - s = s * s.square().mean().rsqrt() - - # Modulate weights. - w = w.unsqueeze(0) # [NOIkk] - w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Demodulate weights. - if demodulate: - dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO] - w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Apply input scaling. - if input_gain is not None: - input_gain = input_gain.expand(batch_size, in_channels) # [NI] - w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Execute as one fused op using grouped convolution. - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_gradfix.conv2d(input=x, weight=w.to(x.dtype), padding=padding, groups=batch_size) - x = x.reshape(batch_size, -1, *x.shape[2:]) - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - bias = True, # Apply additive bias before the activation function? - lr_multiplier = 1, # Learning rate multiplier. - weight_init = 1, # Initial standard deviation of the weight tensor. - bias_init = 0, # Initial value of the additive bias. 
- ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) * (weight_init / lr_multiplier)) - bias_init = np.broadcast_to(np.asarray(bias_init, dtype=np.float32), [out_features]) - self.bias = torch.nn.Parameter(torch.from_numpy(bias_init / lr_multiplier)) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality, 0 = no labels. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output. - num_layers = 2, # Number of mapping layers. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.998, # Decay for tracking the moving average of W during training. - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - # Construct layers. - self.embed = FullyConnectedLayer(self.c_dim, self.w_dim) if self.c_dim > 0 else None - features = [self.z_dim + (self.w_dim if self.c_dim > 0 else 0)] + [self.w_dim] * self.num_layers - for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]): - layer = FullyConnectedLayer(in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False): - misc.assert_shape(z, [None, self.z_dim]) - if truncation_cutoff is None: - truncation_cutoff = self.num_ws - - # Embed, normalize, and concatenate inputs. - x = z.to(torch.float32) - x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt() - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = self.embed(c.to(torch.float32)) - y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt() - x = torch.cat([x, y], dim=1) if x is not None else y - - # Execute layers. - for idx in range(self.num_layers): - x = getattr(self, f'fc{idx}')(x) - - # Update moving average of W. - if update_emas: - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast and apply truncation. 
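# NOTE (editor): the single mapped latent is tiled into one copy per synthesis layer (num_ws).
# With truncation_psi < 1, the first truncation_cutoff copies are then blended toward the
# running average w_avg, trading sample diversity for fidelity; psi = 1 leaves them unchanged.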
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - if truncation_psi != 1: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisInput(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - channels, # Number of output channels. - size, # Output spatial size: int or [width, height]. - sampling_rate, # Output sampling rate. - bandwidth, # Output bandwidth. - ): - super().__init__() - self.w_dim = w_dim - self.channels = channels - self.size = np.broadcast_to(np.asarray(size), [2]) - self.sampling_rate = sampling_rate - self.bandwidth = bandwidth - - # Draw random frequencies from uniform 2D disc. - freqs = torch.randn([self.channels, 2]) - radii = freqs.square().sum(dim=1, keepdim=True).sqrt() - freqs /= radii * radii.square().exp().pow(0.25) - freqs *= bandwidth - phases = torch.rand([self.channels]) - 0.5 - - # Setup parameters and buffers. - self.weight = torch.nn.Parameter(torch.randn([self.channels, self.channels])) - self.affine = FullyConnectedLayer(w_dim, 4, weight_init=0, bias_init=[1,0,0,0]) - self.register_buffer('transform', torch.eye(3, 3)) # User-specified inverse transform wrt. resulting image. - self.register_buffer('freqs', freqs) - self.register_buffer('phases', phases) - - def forward(self, w): - # Introduce batch dimension. - transforms = self.transform.unsqueeze(0) # [batch, row, col] - freqs = self.freqs.unsqueeze(0) # [batch, channel, xy] - phases = self.phases.unsqueeze(0) # [batch, channel] - - # Apply learned transformation. - t = self.affine(w) # t = (r_c, r_s, t_x, t_y) - t = t / t[:, :2].norm(dim=1, keepdim=True) # t' = (r'_c, r'_s, t'_x, t'_y) - m_r = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse rotation wrt. resulting image. - m_r[:, 0, 0] = t[:, 0] # r'_c - m_r[:, 0, 1] = -t[:, 1] # r'_s - m_r[:, 1, 0] = t[:, 1] # r'_s - m_r[:, 1, 1] = t[:, 0] # r'_c - m_t = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse translation wrt. resulting image. - m_t[:, 0, 2] = -t[:, 2] # t'_x - m_t[:, 1, 2] = -t[:, 3] # t'_y - transforms = m_r @ m_t @ transforms # First rotate resulting image, then translate, and finally apply user-specified transform. - - # Transform frequencies. - phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2) - freqs = freqs @ transforms[:, :2, :2] - - # Dampen out-of-band frequencies that may occur due to the user-specified transform. - amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) / (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1) - - # Construct sampling grid. - theta = torch.eye(2, 3, device=w.device) - theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate - theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate - grids = torch.nn.functional.affine_grid(theta.unsqueeze(0), [1, 1, self.size[1], self.size[0]], align_corners=False) - - # Compute Fourier features. - x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)).squeeze(3) # [batch, height, width, channel] - x = x + phases.unsqueeze(1).unsqueeze(2) - x = torch.sin(x * (np.pi * 2)) - x = x * amplitudes.unsqueeze(1).unsqueeze(2) - - # Apply trainable mapping. - weight = self.weight / np.sqrt(self.channels) - x = x @ weight.t() - - # Ensure correct shape. 
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width] - misc.assert_shape(x, [w.shape[0], self.channels, int(self.size[1]), int(self.size[0])]) - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},', - f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - is_torgb, # Is this the final ToRGB layer? - is_critically_sampled, # Does this layer use critical sampling? - use_fp16, # Does this layer use FP16? - - # Input & output specifications. - in_channels, # Number of input channels. - out_channels, # Number of output channels. - in_size, # Input spatial size: int or [width, height]. - out_size, # Output spatial size: int or [width, height]. - in_sampling_rate, # Input sampling rate (s). - out_sampling_rate, # Output sampling rate (s). - in_cutoff, # Input cutoff frequency (f_c). - out_cutoff, # Output cutoff frequency (f_c). - in_half_width, # Input transition band half-width (f_h). - out_half_width, # Output Transition band half-width (f_h). - - # Hyperparameters. - conv_kernel = 3, # Convolution kernel size. Ignored for final the ToRGB layer. - filter_size = 6, # Low-pass filter size relative to the lower resolution when up/downsampling. - lrelu_upsampling = 2, # Relative sampling rate for leaky ReLU. Ignored for final the ToRGB layer. - use_radial_filters = False, # Use radially symmetric downsampling filter? Ignored for critically sampled layers. - conv_clamp = 256, # Clamp the output to [-X, +X], None = disable clamping. - magnitude_ema_beta = 0.999, # Decay rate for the moving average of input magnitudes. - ): - super().__init__() - self.w_dim = w_dim - self.is_torgb = is_torgb - self.is_critically_sampled = is_critically_sampled - self.use_fp16 = use_fp16 - self.in_channels = in_channels - self.out_channels = out_channels - self.in_size = np.broadcast_to(np.asarray(in_size), [2]) - self.out_size = np.broadcast_to(np.asarray(out_size), [2]) - self.in_sampling_rate = in_sampling_rate - self.out_sampling_rate = out_sampling_rate - self.tmp_sampling_rate = max(in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling) - self.in_cutoff = in_cutoff - self.out_cutoff = out_cutoff - self.in_half_width = in_half_width - self.out_half_width = out_half_width - self.conv_kernel = 1 if is_torgb else conv_kernel - self.conv_clamp = conv_clamp - self.magnitude_ema_beta = magnitude_ema_beta - - # Setup parameters and buffers. - self.affine = FullyConnectedLayer(self.w_dim, self.in_channels, bias_init=1) - self.weight = torch.nn.Parameter(torch.randn([self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel])) - self.bias = torch.nn.Parameter(torch.zeros([self.out_channels])) - self.register_buffer('magnitude_ema', torch.ones([])) - - # Design upsampling filter. - self.up_factor = int(np.rint(self.tmp_sampling_rate / self.in_sampling_rate)) - assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate - self.up_taps = filter_size * self.up_factor if self.up_factor > 1 and not self.is_torgb else 1 - self.register_buffer('up_filter', self.design_lowpass_filter( - numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate)) - - # Design downsampling filter. 
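# NOTE (editor): this mirrors the upsampling filter above: the intermediate signal running at
# tmp_sampling_rate is low-pass filtered at the layer's output cutoff (transition half-width
# out_half_width) and decimated by down_factor. A radially symmetric filter is used only when
# use_radial_filters is set and the layer is not critically sampled.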
- self.down_factor = int(np.rint(self.tmp_sampling_rate / self.out_sampling_rate)) - assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate - self.down_taps = filter_size * self.down_factor if self.down_factor > 1 and not self.is_torgb else 1 - self.down_radial = use_radial_filters and not self.is_critically_sampled - self.register_buffer('down_filter', self.design_lowpass_filter( - numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial)) - - # Compute padding. - pad_total = (self.out_size - 1) * self.down_factor + 1 # Desired output size before downsampling. - pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor # Input size after upsampling. - pad_total += self.up_taps + self.down_taps - 2 # Size reduction caused by the filters. - pad_lo = (pad_total + self.up_factor) // 2 # Shift sample locations according to the symmetric interpretation (Appendix C.3). - pad_hi = pad_total - pad_lo - self.padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])] - - def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False): - assert noise_mode in ['random', 'const', 'none'] # unused - misc.assert_shape(x, [None, self.in_channels, int(self.in_size[1]), int(self.in_size[0])]) - misc.assert_shape(w, [x.shape[0], self.w_dim]) - - # Track input magnitude. - if update_emas: - with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.magnitude_ema.copy_(magnitude_cur.lerp(self.magnitude_ema, self.magnitude_ema_beta)) - input_gain = self.magnitude_ema.rsqrt() - - # Execute affine layer. - styles = self.affine(w) - if self.is_torgb: - weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2)) - styles = styles * weight_gain - - # Execute modulated conv2d. - dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32 - x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles, - padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain) - - # Execute bias, filtered leaky ReLU, and clamping. - gain = 1 if self.is_torgb else np.sqrt(2) - slope = 1 if self.is_torgb else 0.2 - x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype), - up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp) - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.out_channels, int(self.out_size[1]), int(self.out_size[0])]) - assert x.dtype == dtype - return x - - @staticmethod - def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False): - assert numtaps >= 1 - - # Identity filter. - if numtaps == 1: - return None - - # Separable Kaiser low-pass filter. - if not radial: - f = scipy.signal.firwin(numtaps=numtaps, cutoff=cutoff, width=width, fs=fs) - return torch.as_tensor(f, dtype=torch.float32) - - # Radially symmetric jinc-based filter. 
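# NOTE (editor): the radial profile below is the jinc function j1(2*pi*f_c*r) / (pi*r), the 2-D
# counterpart of sinc; it is tapered by a separable Kaiser window sized for the requested
# transition width and normalized to unit DC gain.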
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs - r = np.hypot(*np.meshgrid(x, x)) - f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r) - beta = scipy.signal.kaiser_beta(scipy.signal.kaiser_atten(numtaps, width / (fs / 2))) - w = np.kaiser(numtaps, beta) - f *= np.outer(w, w) - f /= np.sum(f) - return torch.as_tensor(f, dtype=torch.float32) - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},', - f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},', - f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},', - f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},', - f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},', - f'in_size={list(self.in_size)}, out_size={list(self.out_size)},', - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_layers = 14, # Total number of layers, excluding Fourier features and ToRGB. - num_critical = 2, # Number of critically sampled layers at the end. - first_cutoff = 2, # Cutoff frequency of the first layer (f_{c,0}). - first_stopband = 2**2.1, # Minimum stopband of the first layer (f_{t,0}). - last_stopband_rel = 2**0.3, # Minimum stopband of the last layer, expressed relative to the cutoff. - margin_size = 10, # Number of additional pixels outside the image. - output_scale = 0.25, # Scale factor for the output image. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - **layer_kwargs, # Arguments for SynthesisLayer. - ): - super().__init__() - self.w_dim = w_dim - self.num_ws = num_layers + 2 - self.img_resolution = img_resolution - self.img_channels = img_channels - self.num_layers = num_layers - self.num_critical = num_critical - self.margin_size = margin_size - self.output_scale = output_scale - self.num_fp16_res = num_fp16_res - - # Geometric progression of layer cutoffs and min. stopbands. - last_cutoff = self.img_resolution / 2 # f_{c,N} - last_stopband = last_cutoff * last_stopband_rel # f_{t,N} - exponents = np.minimum(np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1) - cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents # f_c[i] - stopbands = first_stopband * (last_stopband / first_stopband) ** exponents # f_t[i] - - # Compute remaining layer parameters. - sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i] - half_widths = np.maximum(stopbands, sampling_rates / 2) - cutoffs # f_h[i] - sizes = sampling_rates + self.margin_size * 2 - sizes[-2:] = self.img_resolution - channels = np.rint(np.minimum((channel_base / 2) / cutoffs, channel_max)) - channels[-1] = self.img_channels - - # Construct layers. 
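# NOTE (editor): each layer takes its channel count, spatial size, sampling rate, cutoff, and
# transition half-width from the geometric progressions computed above; the last num_critical
# layers are critically sampled, the final layer acts as ToRGB, and FP16 is enabled for the
# highest-resolution layers.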
- self.input = SynthesisInput( - w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]), - sampling_rate=sampling_rates[0], bandwidth=cutoffs[0]) - self.layer_names = [] - for idx in range(self.num_layers + 1): - prev = max(idx - 1, 0) - is_torgb = (idx == self.num_layers) - is_critically_sampled = (idx >= self.num_layers - self.num_critical) - use_fp16 = (sampling_rates[idx] * (2 ** self.num_fp16_res) > self.img_resolution) - layer = SynthesisLayer( - w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16, - in_channels=int(channels[prev]), out_channels= int(channels[idx]), - in_size=int(sizes[prev]), out_size=int(sizes[idx]), - in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]), - in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx], - in_half_width=half_widths[prev], out_half_width=half_widths[idx], - **layer_kwargs) - name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}' - setattr(self, name, layer) - self.layer_names.append(name) - - def forward(self, ws, return_feature=False, **layer_kwargs): - features = [] - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32).unbind(dim=1) - - # Execute layers. - x = self.input(ws[0]) - for name, w in zip(self.layer_names, ws[1:]): - x = getattr(self, name)(x, w, **layer_kwargs) - features.append(x) - if self.output_scale != 1: - x = x * self.output_scale - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.img_channels, self.img_resolution, self.img_resolution]) - x = x.to(torch.float32) - if return_feature: - return x, features - else: - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},', - f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality. - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output resolution. - img_channels, # Number of output color channels. - mapping_kwargs = {}, # Arguments for MappingNetwork. - resize=None, - **synthesis_kwargs, # Arguments for SynthesisNetwork. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - self.resize = resize - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, return_feature=False, **synthesis_kwargs): - if input_is_w: - ws = z - if ws.dim() == 2: - ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1]) - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas) - img = self.synthesis(ws, update_emas=update_emas, return_feature=return_feature, **synthesis_kwargs) - if return_feature: - img, feature = img - if self.resize is not None: - img = imresize(img, [self.resize, self.resize]) - if return_feature: - return img, feature - else: - return img - -#---------------------------------------------------------------------------- - -def imresize(image, size): - dim = image.dim() - if dim == 3: - image = image.unsqueeze(1) - b, _, h, w = image.shape - if size[0] > h: - image = F.interpolate(image, size, mode='bilinear') - elif size[0] < h: - image = F.interpolate(image, size, mode='area') - if dim == 3: - image = image.squeeze(1) - return image diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py b/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py deleted file mode 100644 index cc7abab60f86e5e84994071fc0ec0dd2f89c0377..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py +++ /dev/null @@ -1,196 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import lap -import numpy as np -import scipy -from cython_bbox import bbox_overlaps as bbox_ious -from scipy.spatial.distance import cdist - -chi2inv95 = { - 1: 3.8415, - 2: 5.9915, - 3: 7.8147, - 4: 9.4877, - 5: 11.070, - 6: 12.592, - 7: 14.067, - 8: 15.507, - 9: 16.919} - -def merge_matches(m1, m2, shape): - O,P,Q = shape - m1 = np.asarray(m1) - m2 = np.asarray(m2) - - M1 = scipy.sparse.coo_matrix((np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P)) - M2 = scipy.sparse.coo_matrix((np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q)) - - mask = M1*M2 - match = mask.nonzero() - match = list(zip(match[0], match[1])) - unmatched_O = tuple(set(range(O)) - set([i for i, j in match])) - unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match])) - - return match, unmatched_O, unmatched_Q - - -def _indices_to_matches(cost_matrix, indices, thresh): - matched_cost = cost_matrix[tuple(zip(*indices))] - matched_mask = (matched_cost <= thresh) - - matches = indices[matched_mask] - unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0])) - unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1])) - - return matches, unmatched_a, unmatched_b - - -def linear_assignment(cost_matrix, thresh): - if cost_matrix.size == 0: - return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1])) - matches, unmatched_a, unmatched_b = [], [], [] - cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh) - for ix, mx in enumerate(x): - if mx >= 0: - 
matches.append([ix, mx]) - unmatched_a = np.where(x < 0)[0] - unmatched_b = np.where(y < 0)[0] - matches = np.asarray(matches) - return matches, unmatched_a, unmatched_b - - -def ious(atlbrs, btlbrs): - """ - Compute cost based on IoU - :type atlbrs: list[tlbr] | np.ndarray - :type atlbrs: list[tlbr] | np.ndarray - :rtype ious np.ndarray - """ - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if ious.size == 0: - return ious - - ious = bbox_ious( - np.ascontiguousarray(atlbrs, dtype=np.float), - np.ascontiguousarray(btlbrs, dtype=np.float) - ) - - return ious - - -def iou_distance(atracks, btracks): - """ - Compute cost based on IoU - :type atracks: list[STrack] - :type btracks: list[STrack] - :rtype cost_matrix np.ndarray - """ - - if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)): - atlbrs = atracks - btlbrs = btracks - else: - atlbrs = [track.tlbr for track in atracks] - btlbrs = [track.tlbr for track in btracks] - _ious = ious(atlbrs, btlbrs) - cost_matrix = 1 - _ious - - return cost_matrix - -def embedding_distance(tracks, detections, metric='cosine'): - """ - :param tracks: list[STrack] - :param detections: list[BaseTrack] - :param metric: - :return: cost_matrix np.ndarray - """ - - cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - #for i, track in enumerate(tracks): - #cost_matrix[i, :] = np.maximum(0.0, cdist(track.smooth_feat.reshape(1,-1), det_features, metric)) - track_features = np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - return cost_matrix - -def embedding_distance2(tracks, detections, metric='cosine'): - """ - :param tracks: list[STrack] - :param detections: list[BaseTrack] - :param metric: - :return: cost_matrix np.ndarray - """ - - cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - #for i, track in enumerate(tracks): - #cost_matrix[i, :] = np.maximum(0.0, cdist(track.smooth_feat.reshape(1,-1), det_features, metric)) - track_features = np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - track_features = np.asarray([track.features[0] for track in tracks], dtype=np.float) - cost_matrix2 = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - track_features = np.asarray([track.features[len(track.features)-1] for track in tracks], dtype=np.float) - cost_matrix3 = np.maximum(0.0, cdist(track_features, det_features, metric)) # Nomalized features - for row in range(len(cost_matrix)): - cost_matrix[row] = (cost_matrix[row]+cost_matrix2[row]+cost_matrix3[row])/3 - return cost_matrix - - -def vis_id_feature_A_distance(tracks, detections, metric='cosine'): - track_features = [] - det_features = [] - leg1 = len(tracks) - leg2 = len(detections) - cost_matrix = np.zeros((leg1, leg2), dtype=np.float) - cost_matrix_det = np.zeros((leg1, leg2), dtype=np.float) - cost_matrix_track = np.zeros((leg1, leg2), dtype=np.float) - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - track_features = 
np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - if leg2 != 0: - cost_matrix_det = np.maximum(0.0, cdist(det_features, det_features, metric)) - if leg1 != 0: - cost_matrix_track = np.maximum(0.0, cdist(track_features, track_features, metric)) - if cost_matrix.size == 0: - return track_features, det_features, cost_matrix, cost_matrix_det, cost_matrix_track - cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) - if leg1 > 10: - leg1 = 10 - tracks = tracks[:10] - if leg2 > 10: - leg2 = 10 - detections = detections[:10] - det_features = np.asarray([track.curr_feat for track in detections], dtype=np.float) - track_features = np.asarray([track.smooth_feat for track in tracks], dtype=np.float) - return track_features, det_features, cost_matrix, cost_matrix_det, cost_matrix_track - -def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position) - cost_matrix[row, gating_distance > gating_threshold] = np.inf - return cost_matrix - - -def fuse_motion(kf, cost_matrix, tracks, detections, only_position=False, lambda_=0.98): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position, metric='maha') - cost_matrix[row, gating_distance > gating_threshold] = np.inf - cost_matrix[row] = lambda_ * cost_matrix[row] + (1 - lambda_) * gating_distance - return cost_matrix diff --git a/spaces/EDGAhab/Aatrox-Talking/attentions.py b/spaces/EDGAhab/Aatrox-Talking/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in 
range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - 
nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Edisonymy/buy-or-rent/src/utils/__init__.py b/spaces/Edisonymy/buy-or-rent/src/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/parser.py b/spaces/EronSamez/RVC_HFmeu/demucs/parser.py deleted file mode 100644 index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/parser.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -from pathlib import Path - - -def get_parser(): - parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.") - default_raw = None - default_musdb = None - if 'DEMUCS_RAW' in os.environ: - default_raw = Path(os.environ['DEMUCS_RAW']) - if 'DEMUCS_MUSDB' in os.environ: - default_musdb = Path(os.environ['DEMUCS_MUSDB']) - parser.add_argument( - "--raw", - type=Path, - default=default_raw, - help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.") - parser.add_argument("--no_raw", action="store_const", const=None, dest="raw") - parser.add_argument("-m", - "--musdb", - type=Path, - default=default_musdb, - help="Path to musdb root") - parser.add_argument("--is_wav", action="store_true", - help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).") - parser.add_argument("--metadata", type=Path, default=Path("metadata/"), - help="Folder where metadata information is stored.") - parser.add_argument("--wav", type=Path, - help="Path to a wav dataset. 
This should contain a 'train' and a 'valid' " - "subfolder.") - parser.add_argument("--samplerate", type=int, default=44100) - parser.add_argument("--audio_channels", type=int, default=2) - parser.add_argument("--samples", - default=44100 * 10, - type=int, - help="number of samples to feed in") - parser.add_argument("--data_stride", - default=44100, - type=int, - help="Stride for chunks, shorter = longer epochs") - parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers") - parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers") - parser.add_argument("-d", - "--device", - help="Device to train on, default is cuda if available else cpu") - parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.") - parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file") - parser.add_argument("--test", help="Just run the test pipeline + one validation. " - "This should be a filename relative to the models/ folder.") - parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, " - "on a pretrained model. ") - - parser.add_argument("--rank", default=0, type=int) - parser.add_argument("--world_size", default=1, type=int) - parser.add_argument("--master") - - parser.add_argument("--checkpoints", - type=Path, - default=Path("checkpoints"), - help="Folder where to store checkpoints etc") - parser.add_argument("--evals", - type=Path, - default=Path("evals"), - help="Folder where to store evals and waveforms") - parser.add_argument("--save", - action="store_true", - help="Save estimated for the test set waveforms") - parser.add_argument("--logs", - type=Path, - default=Path("logs"), - help="Folder where to store logs") - parser.add_argument("--models", - type=Path, - default=Path("models"), - help="Folder where to store trained models") - parser.add_argument("-R", - "--restart", - action='store_true', - help='Restart training, ignoring previous run') - - parser.add_argument("--seed", type=int, default=42) - parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs") - parser.add_argument("-r", - "--repeat", - type=int, - default=2, - help="Repeat the train set, longer epochs") - parser.add_argument("-b", "--batch_size", type=int, default=64) - parser.add_argument("--lr", type=float, default=3e-4) - parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1") - parser.add_argument("--init", help="Initialize from a pre-trained model.") - - # Augmentation options - parser.add_argument("--no_augment", - action="store_false", - dest="augment", - default=True, - help="No basic data augmentation.") - parser.add_argument("--repitch", type=float, default=0.2, - help="Probability to do tempo/pitch change") - parser.add_argument("--max_tempo", type=float, default=12, - help="Maximum relative tempo change in %% when using repitch.") - - parser.add_argument("--remix_group_size", - type=int, - default=4, - help="Shuffle sources using group of this size. 
Useful to somewhat " - "replicate multi-gpu training " - "on less GPUs.") - parser.add_argument("--shifts", - type=int, - default=10, - help="Number of random shifts used for the shift trick.") - parser.add_argument("--overlap", - type=float, - default=0.25, - help="Overlap when --split_valid is passed.") - - # See model.py for doc - parser.add_argument("--growth", - type=float, - default=2., - help="Number of channels between two layers will increase by this factor") - parser.add_argument("--depth", - type=int, - default=6, - help="Number of layers for the encoder and decoder") - parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM") - parser.add_argument("--channels", - type=int, - default=64, - help="Number of channels for the first encoder layer") - parser.add_argument("--kernel_size", - type=int, - default=8, - help="Kernel size for the (transposed) convolutions") - parser.add_argument("--conv_stride", - type=int, - default=4, - help="Stride for the (transposed) convolutions") - parser.add_argument("--context", - type=int, - default=3, - help="Context size for the decoder convolutions " - "before the transposed convolutions") - parser.add_argument("--rescale", - type=float, - default=0.1, - help="Initial weight rescale reference") - parser.add_argument("--no_resample", action="store_false", - default=True, dest="resample", - help="No Resampling of the input/output x2") - parser.add_argument("--no_glu", - action="store_false", - default=True, - dest="glu", - help="Replace all GLUs by ReLUs") - parser.add_argument("--no_rewrite", - action="store_false", - default=True, - dest="rewrite", - help="No 1x1 rewrite convolutions") - parser.add_argument("--normalize", action="store_true") - parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True) - - # Tasnet options - parser.add_argument("--tasnet", action="store_true") - parser.add_argument("--split_valid", - action="store_true", - help="Predict chunks by chunks for valid and test. Required for tasnet") - parser.add_argument("--X", type=int, default=8) - - # Other options - parser.add_argument("--show", - action="store_true", - help="Show model architecture, size and exit") - parser.add_argument("--save_model", action="store_true", - help="Skip traning, just save final model " - "for the current checkpoint value.") - parser.add_argument("--save_state", - help="Skip training, just save state " - "for the current checkpoint value. You should " - "provide a model name as argument.") - - # Quantization options - parser.add_argument("--q-min-size", type=float, default=1, - help="Only quantize layers over this size (in MB)") - parser.add_argument( - "--qat", type=int, help="If provided, use QAT training with that many bits.") - - parser.add_argument("--diffq", type=float, default=0) - parser.add_argument( - "--ms-target", type=float, default=162, - help="Model size target in MB, when using DiffQ. Best model will be kept " - "only if it is smaller than this target.") - - return parser - - -def get_name(parser, args): - """ - Return the name of an experiment given the args. Some parameters are ignored, - for instance --workers, as they do not impact the final result. 
- """ - ignore_args = set([ - "checkpoints", - "deterministic", - "eval", - "evals", - "eval_cpu", - "eval_workers", - "logs", - "master", - "rank", - "restart", - "save", - "save_model", - "save_state", - "show", - "workers", - "world_size", - ]) - parts = [] - name_args = dict(args.__dict__) - for name, value in name_args.items(): - if name in ignore_args: - continue - if value != parser.get_default(name): - if isinstance(value, Path): - parts.append(f"{name}={value.name}") - else: - parts.append(f"{name}={value}") - if parts: - name = " ".join(parts) - else: - name = "default" - return name diff --git a/spaces/EronSamez/RVC_HFmeu/utils/i18n.py b/spaces/EronSamez/RVC_HFmeu/utils/i18n.py deleted file mode 100644 index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/utils/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = "es_ES" - if not os.path.exists(f"./i18n/{language}.json"): - language = "es_ES" - language = "es_ES" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - # print("Use Language:", self.language) - print("") diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py deleted file mode 100644 index 78156cca6030bcf7ac12b75287342915882eb0b3..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py +++ /dev/null @@ -1,21 +0,0 @@ -model = dict( - type='DRRG', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=True, - style='caffe'), - neck=dict( - type='FPN_UNet', in_channels=[256, 512, 1024, 2048], out_channels=32), - bbox_head=dict( - type='DRRGHead', - in_channels=32, - text_region_thr=0.3, - center_region_thr=0.4, - loss=dict(type='DRRGLoss'), - postprocessor=dict(type='DRRGPostprocessor', link_thr=0.80))) diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py deleted file mode 100644 index 19c6b66731f0b205741037ece8d6b49f91d0110b..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py +++ /dev/null @@ -1,70 +0,0 @@ -# num_chars depends on the configuration of label_convertor. The actual -# dictionary size is 36 + 1 (). 
-# TODO: Automatically update num_chars based on the configuration of -# label_convertor -num_chars = 37 -max_seq_len = 26 - -label_convertor = dict( - type='ABIConvertor', - dict_type='DICT36', - with_unknown=False, - with_padding=False, - lower=True, -) - -model = dict( - type='ABINet', - backbone=dict(type='ResNetABI'), - encoder=dict( - type='ABIVisionModel', - encoder=dict( - type='TransformerEncoder', - n_layers=3, - n_head=8, - d_model=512, - d_inner=2048, - dropout=0.1, - max_len=8 * 32, - ), - decoder=dict( - type='ABIVisionDecoder', - in_channels=512, - num_channels=64, - attn_height=8, - attn_width=32, - attn_mode='nearest', - use_result='feature', - num_chars=num_chars, - max_seq_len=max_seq_len, - init_cfg=dict(type='Xavier', layer='Conv2d')), - ), - decoder=dict( - type='ABILanguageDecoder', - d_model=512, - n_head=8, - d_inner=2048, - n_layers=4, - dropout=0.1, - detach_tokens=True, - use_self_attn=False, - pad_idx=num_chars - 1, - num_chars=num_chars, - max_seq_len=max_seq_len, - init_cfg=None), - fuser=dict( - type='ABIFuser', - d_model=512, - num_chars=num_chars, - init_cfg=None, - max_seq_len=max_seq_len, - ), - loss=dict( - type='ABILoss', - enc_weight=1.0, - dec_weight=1.0, - fusion_weight=1.0, - num_classes=num_chars), - label_convertor=label_convertor, - max_seq_len=max_seq_len, - iter_size=3) diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css b/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css deleted file mode 100644 index 53c2f7b23d52f1224c213bfe1478365fda093436..0000000000000000000000000000000000000000 --- a/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css +++ /dev/null @@ -1 +0,0 @@ -@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/61cd2e7f311e7836.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01a0-01a1,U+01af-01b0,U+0300-0301,U+0303-0304,U+0308-0309,U+0323,U+0329,U+1ea0-1ef9,U+20ab}@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/fd428b69af9ef976.woff2) format("woff2");unicode-range:U+0100-02af,U+0304,U+0308,U+0329,U+1e00-1e9f,U+1ef2-1eff,U+2020,U+20a0-20ab,U+20ad-20cf,U+2113,U+2c60-2c7f,U+a720-a7ff}@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/f36ad5a94261c3ca.woff2) format("woff2");unicode-range:U+00??,U+0131,U+0152-0153,U+02bb-02bc,U+02c6,U+02da,U+02dc,U+0304,U+0308,U+0329,U+2000-206f,U+2074,U+20ac,U+2122,U+2191,U+2193,U+2212,U+2215,U+feff,U+fffd}@font-face{font-family:__VT323_Fallback_2a9463;src:local("Arial");ascent-override:91.26%;descent-override:22.82%;line-gap-override:0.00%;size-adjust:87.66%}.__className_2a9463{font-family:__VT323_2a9463,__VT323_Fallback_2a9463;font-weight:400;font-style:normal} \ No newline at end of file diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py deleted file mode 100644 index 2988b28937a22c3d039dde6590bcc1ac8dd3b89a..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .train import * -from .utils import * -# from version import __gitsha__, __version__ 
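A minimal usage sketch for the `I18nAuto` helper deleted in `utils/i18n.py` above. This is a sketch only: it assumes the module is importable as `utils.i18n` inside the Space and that an `./i18n/es_ES.json` translation file exists; the `"Convert audio"` key is hypothetical. Note that, as written, the constructor forces the language to `es_ES` no matter what is passed in.

```python
# Sketch under assumptions: import path, ./i18n/es_ES.json, and the example key are not from the source.
from utils.i18n import I18nAuto

i18n = I18nAuto(language="en_US")   # the class overrides this and always loads es_ES
print(i18n.language)                # -> "es_ES"
print(i18n("Convert audio"))        # translated string if the key exists in es_ES.json
print(i18n("unknown key"))          # unknown keys fall back to the key itself
```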
diff --git a/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat b/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat deleted file mode 100644 index cb81c17d3865513adec8eb0b832b7888cd1e4078..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat +++ /dev/null @@ -1,2 +0,0 @@ -python fixes/tensor-launch.py -pause \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py b/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py deleted file mode 100644 index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py +++ /dev/null @@ -1,512 +0,0 @@ -import os, traceback -import numpy as np -import torch -import torch.utils.data - -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text - - -class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - pitch = audiopath_and_text[2] - pitchf = audiopath_and_text[3] - dv = audiopath_and_text[4] - - phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - # print(123,phone.shape,pitch.shape,spec.shape) - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - # amor - len_wav = len_min * self.hop_length - - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - - phone = phone[:len_min, :] - pitch = pitch[:len_min] - pitchf = pitchf[:len_min] - - return (spec, wav, phone, pitch, pitchf, dv) - - def get_labels(self, phone, pitch, pitchf): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - pitch = np.load(pitch) - pitchf = np.load(pitchf) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - # print(234,phone.shape,pitch.shape) - phone = phone[:n_num, :] - pitch = pitch[:n_num] - pitchf = pitchf[:n_num] - phone = torch.FloatTensor(phone) - pitch = torch.LongTensor(pitch) - pitchf = 
torch.FloatTensor(pitchf) - return phone, pitch, pitchf - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollateMultiNSFsid: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) # (spec, wav, phone, pitch) - pitch_padded = torch.LongTensor(len(batch), max_phone_len) - pitchf_padded = torch.FloatTensor(len(batch), max_phone_len) - phone_padded.zero_() - pitch_padded.zero_() - pitchf_padded.zero_() - # dv = torch.FloatTensor(len(batch), 256)#gin=256 - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - pitch = row[3] - pitch_padded[i, : pitch.size(0)] = pitch - pitchf = row[4] - pitchf_padded[i, : pitchf.size(0)] = pitchf - - # dv[i] = row[5] - sid[i] = row[5] - - return ( - phone_padded, - phone_lengths, - pitch_padded, - pitchf_padded, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - # dv - sid, - ) - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and 
converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - dv = audiopath_and_text[2] - - phone = self.get_labels(phone) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - len_wav = len_min * self.hop_length - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - phone = phone[:len_min, :] - return (spec, wav, phone, dv) - - def get_labels(self, phone): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - phone = phone[:n_num, :] - phone = torch.FloatTensor(phone) - return phone - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's 
training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) - phone_padded.zero_() - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - sid[i] = row[3] - - return ( - phone_padded, - phone_lengths, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - sid, - ) - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - - def __init__( - self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True, - ): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, -1, -1): # - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = ( - total_batch_size - (len_bucket % total_batch_size) - ) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ( - ids_bucket - + ids_bucket * (rem // len_bucket) - + ids_bucket[: (rem % len_bucket)] - ) - - # subsample - ids_bucket = ids_bucket[self.rank :: self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[ - j * self.batch_size : (j + 1) * self.batch_size - ] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh deleted file mode 100644 index 0770a5f946691a4930793d89e694740b807c0ce7..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive - -STEPS=${1-'15000'} - -sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \ - "[align-rope,sweeping-piles,align-box-corner,towers-of-hanoi-seq-seen-colors,assembling-kits-seq-seen-colors]" "[build-car]" \ - 5taskgen_unrelated $STEPS - -sh 
scripts/traintest_scripts/train_test_multi_task_goal.sh data \ -"[build-two-circles,build-wheel,build-bridge,towers-of-hanoi-seq-seen-colors,stack-block-pyramid-seq-seen-colors]" "[build-car]" \ - 5taskgen_related $STEPS diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md deleted file mode 100644 index 0ad5c85804c1f8636c3720a652b40bbd9df0fe2e..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md +++ /dev/null @@ -1,136 +0,0 @@ -# Anime Video Models - -:white_check_mark: We add small models that are optimized for anime videos :-)
    -More comparisons can be found in [anime_comparisons.md](anime_comparisons.md) - -- [How to Use](#how-to-use) -- [PyTorch Inference](#pytorch-inference) -- [ncnn Executable File](#ncnn-executable-file) - - [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video) - - [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file) - - [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video) -- [More Demos](#more-demos) - -| Models | Scale | Description | -| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- | -| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4 1 | Anime video model with XS size | - -Note:
    -1 This model can also be used for X1, X2, X3. - ---- - -The following are some demos (best view in the full screen mode). - - - - - - - -## How to Use - -### PyTorch Inference - -```bash -# download model -wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P weights -# single gpu and single process inference -CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 -# single gpu and multi process inference (you can use multi-processing to improve GPU utilization) -CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2 -# multi gpu and multi process inference -CUDA_VISIBLE_DEVICES=0,1,2,3 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2 -``` - -```console -Usage: ---num_process_per_gpu The total number of process is num_gpu * num_process_per_gpu. The bottleneck of - the program lies on the IO, so the GPUs are usually not fully utilized. To alleviate - this issue, you can use multi-processing by setting this parameter. As long as it - does not exceed the CUDA memory ---extract_frame_first If you encounter ffmpeg error when using multi-processing, you can turn this option on. -``` - -### NCNN Executable File - -#### Step 1: Use ffmpeg to extract frames from video - -```bash -ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png -``` - -- Remember to create the folder `tmp_frames` ahead - -#### Step 2: Inference with Real-ESRGAN executable file - -1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU** - -1. Taking the Windows as example, run: - - ```bash - ./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg - ``` - - - Remember to create the folder `out_frames` ahead - -#### Step 3: Merge the enhanced frames back into a video - -1. First obtain fps from input videos by - - ```bash - ffmpeg -i onepiece_demo.mp4 - ``` - - ```console - Usage: - -i input video path - ``` - - You will get the output similar to the following screenshot. - -
    [screenshot of ffmpeg output omitted]
    - -2. Merge frames - - ```bash - ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4 - ``` - - ```console - Usage: - -i input video path - -c:v video encoder (usually we use libx264) - -r fps, remember to modify it to meet your needs - -pix_fmt pixel format in video - ``` - - If you also want to copy audio from the input videos, run: - - ```bash - ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4 - ``` - - ```console - Usage: - -i input video path, here we use two input streams - -c:v video encoder (usually we use libx264) - -r fps, remember to modify it to meet your needs - -pix_fmt pixel format in video - ``` - -## More Demos - -- Input video for One Piece: - - - -- Out video for One Piece - - - -**More comparisons** - - diff --git a/spaces/GotAudio/Understanding-Women/app.py b/spaces/GotAudio/Understanding-Women/app.py deleted file mode 100644 index 4ce2640225d5fbd80a096cc5997410a28632c3b5..0000000000000000000000000000000000000000 --- a/spaces/GotAudio/Understanding-Women/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -from fastai import * -from fastai.vision.all import * - -import pathlib -plt = platform.system() -if plt == 'Linux': pathlib.WindowsPath = pathlib.PosixPath - -learn_inf = load_learner("export.pkl") - -def predict_mood(img): - - pred, pred_idx, probs = learn_inf.predict(img) - return f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}' - -gr.inputs.Image(tool=False, optional=False) -webpage = gr.Interface(fn=predict_mood, inputs=gr.inputs.Image(tool=False, optional=False), outputs="text", title="Women's Mood Detector", live=True, theme="dark-peach", description="It detects wether the woman is Angry, Happy, or Sad.", examples=[["example1.jpg"], ["example2.jpg"], ["example3.jpg"]]) -webpage.launch() \ No newline at end of file diff --git a/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py b/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py deleted file mode 100644 index 2b72bfd3810d63270a873f7889dddfd2512387b3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py +++ /dev/null @@ -1,96 +0,0 @@ -""" -Approximate the bits/dimension for an image model. 
-""" - -import argparse -import os - -import numpy as np -import torch.distributed as dist - -from pixel_guide_diffusion import dist_util, logger -from pixel_guide_diffusion.image_datasets import load_data -from pixel_guide_diffusion.script_util import ( - model_and_diffusion_defaults, - create_model_and_diffusion, - add_dict_to_argparser, - args_to_dict, -) - - -def main(): - args = create_argparser().parse_args() - - dist_util.setup_dist() - logger.configure() - - logger.log("creating model and diffusion...") - model, diffusion = create_model_and_diffusion( - **args_to_dict(args, model_and_diffusion_defaults().keys()) - ) - model.load_state_dict( - dist_util.load_state_dict(args.model_path, map_location="cpu") - ) - model.to(dist_util.dev()) - model.eval() - - logger.log("creating data loader...") - data = load_data( - data_dir=args.data_dir, - batch_size=args.batch_size, - image_size=args.image_size, - class_cond=args.class_cond, - deterministic=True, - ) - - logger.log("evaluating...") - run_bpd_evaluation(model, diffusion, data, args.num_samples, args.clip_denoised) - - -def run_bpd_evaluation(model, diffusion, data, num_samples, clip_denoised): - all_bpd = [] - all_metrics = {"vb": [], "mse": [], "xstart_mse": []} - num_complete = 0 - while num_complete < num_samples: - batch, model_kwargs = next(data) - batch = batch.to(dist_util.dev()) - model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()} - minibatch_metrics = diffusion.calc_bpd_loop( - model, batch, clip_denoised=clip_denoised, model_kwargs=model_kwargs - ) - - for key, term_list in all_metrics.items(): - terms = minibatch_metrics[key].mean(dim=0) / dist.get_world_size() - dist.all_reduce(terms) - term_list.append(terms.detach().cpu().numpy()) - - total_bpd = minibatch_metrics["total_bpd"] - total_bpd = total_bpd.mean() / dist.get_world_size() - dist.all_reduce(total_bpd) - all_bpd.append(total_bpd.item()) - num_complete += dist.get_world_size() * batch.shape[0] - - logger.log(f"done {num_complete} samples: bpd={np.mean(all_bpd)}") - - if dist.get_rank() == 0: - for name, terms in all_metrics.items(): - out_path = os.path.join(logger.get_dir(), f"{name}_terms.npz") - logger.log(f"saving {name} terms to {out_path}") - np.savez(out_path, np.mean(np.stack(terms), axis=0)) - - dist.barrier() - logger.log("evaluation complete") - - -def create_argparser(): - defaults = dict( - data_dir="", clip_denoised=True, num_samples=1000, batch_size=1, model_path="" - ) - defaults.update(model_and_diffusion_defaults()) - parser = argparse.ArgumentParser() - add_dict_to_argparser(parser, defaults) - return parser - - -if __name__ == "__main__": - main() diff --git a/spaces/HaMerL/ChaosinChat/assets/custom.js b/spaces/HaMerL/ChaosinChat/assets/custom.js deleted file mode 100644 index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000 --- a/spaces/HaMerL/ChaosinChat/assets/custom.js +++ /dev/null @@ -1,224 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var apSwitch = null; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); - -// gradio 页面加载好了么??? 我能动你的元素了么?? 
-function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? - setChatbotHeight() - } - } - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - 
setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - gradioContainer.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - gradioContainer.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); \ No newline at end of file diff --git a/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py b/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py deleted file mode 100644 index 9ffd5f2c4d6c9d086107d5fac67452419696c723..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py +++ /dev/null @@ -1,156 +0,0 @@ -import math - -import torch -from torch import nn -from torch.nn import functional as F - -from .. 
import layers, utils - - -def orthogonal_(module): - nn.init.orthogonal_(module.weight) - return module - - -class ResConvBlock(layers.ConditionedResidualBlock): - def __init__(self, feats_in, c_in, c_mid, c_out, group_size=32, dropout_rate=0.): - skip = None if c_in == c_out else orthogonal_(nn.Conv2d(c_in, c_out, 1, bias=False)) - super().__init__( - layers.AdaGN(feats_in, c_in, max(1, c_in // group_size)), - nn.GELU(), - nn.Conv2d(c_in, c_mid, 3, padding=1), - nn.Dropout2d(dropout_rate, inplace=True), - layers.AdaGN(feats_in, c_mid, max(1, c_mid // group_size)), - nn.GELU(), - nn.Conv2d(c_mid, c_out, 3, padding=1), - nn.Dropout2d(dropout_rate, inplace=True), - skip=skip) - - -class DBlock(layers.ConditionedSequential): - def __init__(self, n_layers, feats_in, c_in, c_mid, c_out, group_size=32, head_size=64, dropout_rate=0., downsample=False, self_attn=False, cross_attn=False, c_enc=0): - modules = [nn.Identity()] - for i in range(n_layers): - my_c_in = c_in if i == 0 else c_mid - my_c_out = c_mid if i < n_layers - 1 else c_out - modules.append(ResConvBlock(feats_in, my_c_in, c_mid, my_c_out, group_size, dropout_rate)) - if self_attn: - norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size)) - modules.append(layers.SelfAttention2d(my_c_out, max(1, my_c_out // head_size), norm, dropout_rate)) - if cross_attn: - norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size)) - modules.append(layers.CrossAttention2d(my_c_out, c_enc, max(1, my_c_out // head_size), norm, dropout_rate)) - super().__init__(*modules) - self.set_downsample(downsample) - - def set_downsample(self, downsample): - self[0] = layers.Downsample2d() if downsample else nn.Identity() - return self - - -class UBlock(layers.ConditionedSequential): - def __init__(self, n_layers, feats_in, c_in, c_mid, c_out, group_size=32, head_size=64, dropout_rate=0., upsample=False, self_attn=False, cross_attn=False, c_enc=0): - modules = [] - for i in range(n_layers): - my_c_in = c_in if i == 0 else c_mid - my_c_out = c_mid if i < n_layers - 1 else c_out - modules.append(ResConvBlock(feats_in, my_c_in, c_mid, my_c_out, group_size, dropout_rate)) - if self_attn: - norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size)) - modules.append(layers.SelfAttention2d(my_c_out, max(1, my_c_out // head_size), norm, dropout_rate)) - if cross_attn: - norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size)) - modules.append(layers.CrossAttention2d(my_c_out, c_enc, max(1, my_c_out // head_size), norm, dropout_rate)) - modules.append(nn.Identity()) - super().__init__(*modules) - self.set_upsample(upsample) - - def forward(self, input, cond, skip=None): - if skip is not None: - input = torch.cat([input, skip], dim=1) - return super().forward(input, cond) - - def set_upsample(self, upsample): - self[-1] = layers.Upsample2d() if upsample else nn.Identity() - return self - - -class MappingNet(nn.Sequential): - def __init__(self, feats_in, feats_out, n_layers=2): - layers = [] - for i in range(n_layers): - layers.append(orthogonal_(nn.Linear(feats_in if i == 0 else feats_out, feats_out))) - layers.append(nn.GELU()) - super().__init__(*layers) - - -class ImageDenoiserModelV1(nn.Module): - def __init__(self, c_in, feats_in, depths, channels, self_attn_depths, cross_attn_depths=None, mapping_cond_dim=0, unet_cond_dim=0, cross_cond_dim=0, dropout_rate=0., patch_size=1, skip_stages=0, has_variance=False): - super().__init__() - self.c_in = c_in - self.channels = channels - 
self.unet_cond_dim = unet_cond_dim - self.patch_size = patch_size - self.has_variance = has_variance - self.timestep_embed = layers.FourierFeatures(1, feats_in) - if mapping_cond_dim > 0: - self.mapping_cond = nn.Linear(mapping_cond_dim, feats_in, bias=False) - self.mapping = MappingNet(feats_in, feats_in) - self.proj_in = nn.Conv2d((c_in + unet_cond_dim) * self.patch_size ** 2, channels[max(0, skip_stages - 1)], 1) - self.proj_out = nn.Conv2d(channels[max(0, skip_stages - 1)], c_in * self.patch_size ** 2 + (1 if self.has_variance else 0), 1) - nn.init.zeros_(self.proj_out.weight) - nn.init.zeros_(self.proj_out.bias) - if cross_cond_dim == 0: - cross_attn_depths = [False] * len(self_attn_depths) - d_blocks, u_blocks = [], [] - for i in range(len(depths)): - my_c_in = channels[max(0, i - 1)] - d_blocks.append(DBlock(depths[i], feats_in, my_c_in, channels[i], channels[i], downsample=i > skip_stages, self_attn=self_attn_depths[i], cross_attn=cross_attn_depths[i], c_enc=cross_cond_dim, dropout_rate=dropout_rate)) - for i in range(len(depths)): - my_c_in = channels[i] * 2 if i < len(depths) - 1 else channels[i] - my_c_out = channels[max(0, i - 1)] - u_blocks.append(UBlock(depths[i], feats_in, my_c_in, channels[i], my_c_out, upsample=i > skip_stages, self_attn=self_attn_depths[i], cross_attn=cross_attn_depths[i], c_enc=cross_cond_dim, dropout_rate=dropout_rate)) - self.u_net = layers.UNet(d_blocks, reversed(u_blocks), skip_stages=skip_stages) - - def forward(self, input, sigma, mapping_cond=None, unet_cond=None, cross_cond=None, cross_cond_padding=None, return_variance=False): - c_noise = sigma.log() / 4 - timestep_embed = self.timestep_embed(utils.append_dims(c_noise, 2)) - mapping_cond_embed = torch.zeros_like(timestep_embed) if mapping_cond is None else self.mapping_cond(mapping_cond) - mapping_out = self.mapping(timestep_embed + mapping_cond_embed) - cond = {'cond': mapping_out} - if unet_cond is not None: - input = torch.cat([input, unet_cond], dim=1) - if cross_cond is not None: - cond['cross'] = cross_cond - cond['cross_padding'] = cross_cond_padding - if self.patch_size > 1: - input = F.pixel_unshuffle(input, self.patch_size) - input = self.proj_in(input) - input = self.u_net(input, cond) - input = self.proj_out(input) - if self.has_variance: - input, logvar = input[:, :-1], input[:, -1].flatten(1).mean(1) - if self.patch_size > 1: - input = F.pixel_shuffle(input, self.patch_size) - if self.has_variance and return_variance: - return input, logvar - return input - - def set_skip_stages(self, skip_stages): - self.proj_in = nn.Conv2d(self.proj_in.in_channels, self.channels[max(0, skip_stages - 1)], 1) - self.proj_out = nn.Conv2d(self.channels[max(0, skip_stages - 1)], self.proj_out.out_channels, 1) - nn.init.zeros_(self.proj_out.weight) - nn.init.zeros_(self.proj_out.bias) - self.u_net.skip_stages = skip_stages - for i, block in enumerate(self.u_net.d_blocks): - block.set_downsample(i > skip_stages) - for i, block in enumerate(reversed(self.u_net.u_blocks)): - block.set_upsample(i > skip_stages) - return self - - def set_patch_size(self, patch_size): - self.patch_size = patch_size - self.proj_in = nn.Conv2d((self.c_in + self.unet_cond_dim) * self.patch_size ** 2, self.channels[max(0, self.u_net.skip_stages - 1)], 1) - self.proj_out = nn.Conv2d(self.channels[max(0, self.u_net.skip_stages - 1)], self.c_in * self.patch_size ** 2 + (1 if self.has_variance else 0), 1) - nn.init.zeros_(self.proj_out.weight) - nn.init.zeros_(self.proj_out.bias) diff --git 
a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh deleted file mode 100644 index bad55f2de72f66f02b583d9b191802c55cfe0a4b..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh +++ /dev/null @@ -1,62 +0,0 @@ -MODEL_NAME="IDEA-CCNL/Erlangshen-Roberta-110M-NLI" - -TEXTA_NAME=sentence1 -TEXTB_NAME=sentence2 -LABEL_NAME=label -ID_NAME=id - -BATCH_SIZE=1 -VAL_BATCH_SIZE=1 - -DATA_ARGS="\ - --dataset_name IDEA-CCNL/AFQMC \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --textb_name $TEXTB_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 1e-5 \ - --weight_decay 1e-2 \ - --warmup_ratio 0.01 \ - --num_labels 2 \ - --model_type huggingface-auto \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 0 \ - --save_weights_only True \ - --dirpath . \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - - -TRAINER_ARGS="\ - --max_epochs 67 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy ddp \ - --gradient_clip_val 1.0 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 1.0 \ - --precision 16 \ - --default_root_dir . \ - " - -options=" \ - --pretrained_model_path $MODEL_NAME \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -python3 finetune_classification.py $options - diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh deleted file mode 100644 index 7fab2998437ef8c12dcd93466371d0324eec4c79..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_weibo # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=weibo - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/weibo/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.all.bmes \ - --valid_data test.all.bmes \ - --test_data test.all.bmes \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name weibo \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 20 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py deleted file mode 100644 index 441f1bd432b95865fc981c6c695cee299b07ed62..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py +++ /dev/null @@ -1,70 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Replabel transforms for use with flashlight's ASG criterion. -""" - - -def replabel_symbol(i): - """ - Replabel symbols used in flashlight, currently just "1", "2", ... 
- This prevents training with numeral tokens, so this might change in the future - """ - return str(i) - - -def pack_replabels(tokens, dictionary, max_reps): - """ - Pack a token sequence so that repeated symbols are replaced by replabels - """ - if len(tokens) == 0 or max_reps <= 0: - return tokens - - replabel_value_to_idx = [0] * (max_reps + 1) - for i in range(1, max_reps + 1): - replabel_value_to_idx[i] = dictionary.index(replabel_symbol(i)) - - result = [] - prev_token = -1 - num_reps = 0 - for token in tokens: - if token == prev_token and num_reps < max_reps: - num_reps += 1 - else: - if num_reps > 0: - result.append(replabel_value_to_idx[num_reps]) - num_reps = 0 - result.append(token) - prev_token = token - if num_reps > 0: - result.append(replabel_value_to_idx[num_reps]) - return result - - -def unpack_replabels(tokens, dictionary, max_reps): - """ - Unpack a token sequence so that replabels are replaced by repeated symbols - """ - if len(tokens) == 0 or max_reps <= 0: - return tokens - - replabel_idx_to_value = {} - for i in range(1, max_reps + 1): - replabel_idx_to_value[dictionary.index(replabel_symbol(i))] = i - - result = [] - prev_token = -1 - for token in tokens: - try: - for _ in range(replabel_idx_to_value[token]): - result.append(prev_token) - prev_token = -1 - except KeyError: - result.append(token) - prev_token = token - return result diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py deleted file mode 100644 index e7465bc889fd1ba6ca2c60905a2eb6ff5cc62b9d..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py +++ /dev/null @@ -1,488 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Tuple, List - -import torch -import torch.nn.functional as F -from fairseq.models import FairseqEncoder -from fairseq.models.speech_to_text import ( - ConvTransformerEncoder, -) -from fairseq.models.speech_to_text.utils import attention_suppression -from fairseq.models.speech_to_text.utils import ( - lengths_to_encoder_padding_mask, - segments_to_sequence, - sequence_to_segments, -) -from fairseq.modules import MultiheadAttention, TransformerEncoderLayer -from torch import nn, Tensor - -# ------------------------------------------------------------------------------ -# AugmentedMemoryConvTransformerEncoder -# ------------------------------------------------------------------------------ - - -class AugmentedMemoryConvTransformerEncoder(ConvTransformerEncoder): - def __init__(self, args): - super().__init__(args) - - args.encoder_stride = self.stride() - - self.left_context = args.left_context // args.encoder_stride - - self.right_context = args.right_context // args.encoder_stride - - self.left_context_after_stride = args.left_context // args.encoder_stride - self.right_context_after_stride = args.right_context // args.encoder_stride - - self.transformer_layers = nn.ModuleList([]) - self.transformer_layers.extend( - [ - AugmentedMemoryTransformerEncoderLayer(args) - for i in range(args.encoder_layers) - ] - ) - - def stride(self): - # Hard coded here. 
Should infer from convs in future - stride = 4 - return stride - - def forward(self, src_tokens, src_lengths, states=None): - """Encode input sequence. - :param torch.Tensor xs: input tensor - :param torch.Tensor masks: input mask - :return: position embedded tensor and mask - :rtype Tuple[torch.Tensor, torch.Tensor]: - """ - bsz, max_seq_len, _ = src_tokens.size() - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - x = self.conv(x) - bsz, _, output_seq_len, _ = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - x = self.out(x) - x = self.embed_scale * x - - subsampling_factor = 1.0 * max_seq_len / output_seq_len - input_lengths = torch.max( - (src_lengths.float() / subsampling_factor).ceil().long(), - x.size(0) * src_lengths.new_ones([src_lengths.size(0)]).long(), - ) - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - input_lengths, batch_first=True - ) - - # TODO: fix positional embedding - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # State to store memory banks etc. - if states is None: - states = [ - {"memory_banks": None, "encoder_states": None} - for i in range(len(self.transformer_layers)) - ] - - for i, layer in enumerate(self.transformer_layers): - # x size: - # (self.left_size + self.segment_size + self.right_size) - # / self.stride, num_heads, dim - # TODO: Consider mask here - x = layer(x, states[i]) - states[i]["encoder_states"] = x[ - self.left_context_after_stride : -self.right_context_after_stride - ] - - lengths = ( - ( - ~encoder_padding_mask[ - :, self.left_context_after_stride : -self.right_context_after_stride - ] - ) - .sum(dim=1, keepdim=True) - .long() - ) - - return states[-1]["encoder_states"], lengths, states - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryTransformerEncoderLayer -# ------------------------------------------------------------------------------ -class AugmentedMemoryTransformerEncoderLayer(TransformerEncoderLayer): - def __init__(self, args): - super().__init__(args) - - self.left_context = args.left_context // args.encoder_stride - self.right_context = args.right_context // args.encoder_stride - - def forward(self, x, state): - - length, batch_size, x_dim = x.size() - - residual = x - - if self.normalize_before: - x = self.self_attn_layer_norm(x) - - # init_state - if state.get("memory_banks", None) is None: - state["memory_banks"] = [] - - # TODO reseach new sum_query method - seg_start = self.left_context - seg_end = length - self.right_context - if seg_start < seg_end: - summarization_query = torch.mean(x[seg_start:seg_end], keepdim=True, dim=0) - else: - summarization_query = x.new_zeros(1, batch_size, x_dim) - - x = torch.cat([x, summarization_query], dim=0) - - x = self.self_attn(input_and_summary=x, state=state) - - x = self.dropout_module(x) - x = residual + x - - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - if not self.normalize_before: - x = self.final_layer_norm(x) - - return x - - def build_self_attention(self, embed_dim, args): - return AugmentedMemoryMultiheadAttention( - embed_dim=embed_dim, - 
num_heads=args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - tanh_on_mem=True, - max_memory_size=args.max_memory_size, - ) - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryMultiheadAttention -# ------------------------------------------------------------------------------ -class AugmentedMemoryMultiheadAttention(MultiheadAttention): - """ - Augmented Memory Attention from - Streaming Transformer-based Acoustic Models - Using Self-attention with Augmented Memory - https://arxiv.org/abs/2005.08042 - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - tanh_on_mem=False, - memory_dim=None, - std_scale=0.5, # 0.5 based on https://arxiv.org/abs/2005.09137 - max_memory_size=-1, - disable_mem_on_mem_attn=True, - ): - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - q_noise, - qn_block_size, - ) - - self.memory_dim = memory_dim if memory_dim is not None else embed_dim - self.std_scale = std_scale - self.disable_mem_on_mem_attn = disable_mem_on_mem_attn - - # This Operator was used for factorization in PySpeech - self.v2e = lambda x: x - - if tanh_on_mem: - self.squash_mem = torch.tanh - self.nonlinear_squash_mem = True - else: - self.squash_mem = lambda x: x - self.nonlinear_squash_mem = False - - self.max_memory_size = max_memory_size - - def forward(self, input_and_summary, state): - """ - input: Encoder states of current segment with left or right context, - plus one summarization query - - """ - - length, batch_size, _ = input_and_summary.shape - length = length - 1 # not include sum_query, last index - - memory = state["memory_banks"] - # TODO: positional embedding on memory - - if self.max_memory_size > -1 and len(memory) > self.max_memory_size: - # TODO: need to fix here - if self.max_memory_size == 0: - memory = memory.new_zeros(1, memory.size(1), self.memory_dim) - else: - memory = memory[-self.max_memory_size :] - - memory_and_input = torch.cat(memory + [input_and_summary[:-1]], dim=0) - input_and_sum_query = input_and_summary - - q = self.q_proj(self.v2e(input_and_sum_query)) - k = self.k_proj(self.v2e(memory_and_input)) - v = self.v_proj(self.v2e(memory_and_input)) - - q = ( - q.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - * self.scaling - ) - k = ( - k.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - v = ( - v.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - attention_weights = torch.bmm(q, k.transpose(1, 2)) - - if self.disable_mem_on_mem_attn: - attention_weights = self.suppress_mem_on_mem_attention( - batch_size, self.num_heads, len(memory), attention_weights - ) - - if self.std_scale is not None: - attention_weights = attention_suppression(attention_weights, self.std_scale) - - assert list(attention_weights.shape) == [ - batch_size * self.num_heads, - length + 1, - length + len(memory), - ] - - attention_weights = torch.nn.functional.softmax( - attention_weights.float(), dim=-1 - ).type_as(attention_weights) - - attention_probs = self.dropout_module(attention_weights) - - # [T, T, 
B, n_head] + [T, B, n_head, d_head] -> [T, B, n_head, d_head] - attention = torch.bmm(attention_probs, v) - - assert list(attention.shape) == [ - batch_size * self.num_heads, - length + 1, - self.head_dim, - ] - - attention = ( - attention.transpose(0, 1) - .contiguous() - .view(length + 1, batch_size, self.embed_dim) - ) - - output_and_memory = self.out_proj(attention) - - next_m = output_and_memory[-1:] - next_m = self.squash_mem(next_m) - output = output_and_memory[:-1] - - state["memory_banks"].append(next_m) - - return output - - def suppress_mem_on_mem_attention( - self, B: int, num_heads: int, mem_size: int, attention_weight: Tensor - ): - """ - Arguments: - - B: batch size - - num_heads: number of attention heads - - mem_size: size of memory bank - - attention_weight: a [B*num_heads, T + 1, T + mem_size] vector - - Return: - modified attention_weight with [B*num_heads, -1, :mem_size] = -inf - """ - attention_weight[:, -1, :mem_size] = float("-inf") - return attention_weight - - -# ------------------------------------------------------------------------------ -# SequenceEncoder -# ------------------------------------------------------------------------------ -class SequenceEncoder(FairseqEncoder): - """ - SequenceEncoder encodes sequences. - - More specifically, `src_tokens` and `src_lengths` in `forward()` should - describe a batch of "complete" sequences rather than segments. - - Segment-by-segment inference can be triggered by `segment_size`: - 1) `segment_size` is None: - SequenceEncoder treats the input sequence as one single segment. - 2) `segment_size` is not None (some int instead): - SequenceEncoder does the following: - 1. breaks the input sequence into several segments - 2. inference on each segment and collect the outputs - 3. concatanete segment outputs into the output sequence. - Note that `segment_size` here shouldn't include additional left/right - contexts needed, for example if we wish to infer with LC-BLSTM where the - middle chunk size is 100 and right context is 20, `segment_size` should be - 100. 
- """ - - def __init__(self, args, module): - super().__init__(None) - - self.module = module - self.input_time_axis = 1 - self.output_time_axis = 0 - self.segment_size = args.segment_size - self.left_context = args.left_context - self.right_context = args.right_context - - def forward( - self, - src_tokens: Tensor, - src_lengths: Tensor, - states=None, - ): - - seg_src_tokens_lengths = sequence_to_segments( - sequence=src_tokens, - time_axis=self.input_time_axis, - lengths=src_lengths, - segment_size=self.segment_size, - extra_left_context=self.left_context, - extra_right_context=self.right_context, - ) - - seg_encoder_states_lengths: List[Tuple[Tensor, Tensor]] = [] - - for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths: - (seg_encoder_states, seg_enc_lengths, states) = self.module( - seg_src_tokens, - seg_src_lengths, - states=states, - ) - - seg_encoder_states_lengths.append((seg_encoder_states, seg_enc_lengths)) - - encoder_out, enc_lengths = segments_to_sequence( - segments=seg_encoder_states_lengths, time_axis=self.output_time_axis - ) - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - enc_lengths, batch_first=True - ) - - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - return { - "encoder_out": [encoder_out], - "encoder_padding_mask": [encoder_padding_mask], - "encoder_embedding": [], - "encoder_states": [states], - "src_tokens": [], - "src_lengths": [], - } - - def incremental_encode( - self, - seg_src_tokens: Tensor, - seg_src_lengths: Tensor, - states=None, - ): - """ - Different from forward function, this function takes segmented speech - as input, and append encoder states to previous states - """ - (seg_encoder_states, seg_enc_lengths, states) = self.module( - seg_src_tokens, - seg_src_lengths, - states=states, - ) - return seg_encoder_states, seg_enc_lengths, states - - -# ------------------------------------------------------------------------------ -# Augmented memory model decorator -# ------------------------------------------------------------------------------ -def augmented_memory(klass): - class StreamSeq2SeqModel(klass): - @staticmethod - def add_args(parser): - super(StreamSeq2SeqModel, StreamSeq2SeqModel).add_args(parser) - parser.add_argument( - "--segment-size", type=int, required=True, help="Length of the segment." 
- ) - parser.add_argument( - "--left-context", - type=int, - default=0, - help="Left context for the segment.", - ) - parser.add_argument( - "--right-context", - type=int, - default=0, - help="Right context for the segment.", - ) - parser.add_argument( - "--max-memory-size", - type=int, - default=-1, - help="Right context for the segment.", - ) - - StreamSeq2SeqModel.__name__ = klass.__name__ - return StreamSeq2SeqModel diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md deleted file mode 100644 index ea8958397bb5b5fdcb96cd966bd040050ece6fd6..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vakyansh Hindi TTS -emoji: 🐨 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py deleted file mode 100644 index efb7e372feeb67d7106eb5c443de2e14053fd204..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py +++ /dev/null @@ -1,488 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -## language codes -LC_TA='ta' - -SCRIPT_RANGES={ - 'pa':[0x0a00,0x0a7f] , - 'gu':[0x0a80,0x0aff] , - 'or':[0x0b00,0x0b7f] , - 'ta':[0x0b80,0x0bff] , - 'te':[0x0c00,0x0c7f] , - 'kn':[0x0c80,0x0cff] , - 'ml':[0x0d00,0x0d7f] , - 'si':[0x0d80,0x0dff] , - 'hi':[0x0900,0x097f] , - 'mr':[0x0900,0x097f] , - 'kK':[0x0900,0x097f] , - 'sa':[0x0900,0x097f] , - 'ne':[0x0900,0x097f] , - 'sd':[0x0900,0x097f] , - 'bn':[0x0980,0x09ff] , - 'as':[0x0980,0x09ff] , - } - -DRAVIDIAN_LANGUAGES=['ta', 'te', 'kn', 'ml',] -IE_LANGUAGES=['hi', 'mr', 'kK', 'sa', 'ne', 'sd', 'bn', 'as', 'pa', 'gu', 'or', 'si', ] -DANDA_DELIM_LANGUAGES=['as','bn','hi','ne','or','pa','sa','sd'] - -URDU_RANGES=[ - [0x0600,0x06ff], - [0x0750,0x077f], - [0xfb50,0xfdff], - [0xfe70,0xfeff], - ] - -COORDINATED_RANGE_START_INCLUSIVE=0 -COORDINATED_RANGE_END_INCLUSIVE=0x6f - -NUMERIC_OFFSET_START=0x66 -NUMERIC_OFFSET_END=0x6f - -HALANTA_OFFSET=0x4d -AUM_OFFSET=0x50 -NUKTA_OFFSET=0x3c - -RUPEE_SIGN=0x20b9 - -DANDA=0x0964 -DOUBLE_DANDA=0x0965 - -#TODO: add missing fricatives and approximants -VELAR_RANGE=[0x15,0x19] -PALATAL_RANGE=[0x1a,0x1e] -RETROFLEX_RANGE=[0x1f,0x23] -DENTAL_RANGE=[0x24,0x29] -LABIAL_RANGE=[0x2a,0x2e] - -# verify -VOICED_LIST=[0x17,0x18,0x1c,0x1d,0x21,0x22,0x26,0x27,0x2c,0x2d] -UNVOICED_LIST=[0x15,0x16,0x1a,0x1b,0x1f,0x20,0x24,0x25,0x2a,0x2b] #TODO: add sibilants/sonorants -ASPIRATED_LIST=[0x16,0x18,0x1b,0x1d,0x20,0x22,0x25,0x27,0x2b,0x2d] -UNASPIRATED_LIST=[0x15,0x17,0x1a,0x1c,0x1f,0x21,0x24,0x26,0x2a,0x2c] -NASAL_LIST=[0x19,0x1e,0x23,0x28,0x29,0x2d] -FRICATIVE_LIST=[0x36,0x37,0x38] -APPROXIMANT_LIST=[0x2f,0x30,0x31,0x32,0x33,0x34,0x35] - -#TODO: ha has to be properly categorized - -def is_danda_delim(lang): - """ - Returns True if danda/double danda is a possible delimiter for the language - """ - return lang in DANDA_DELIM_LANGUAGES - -def get_offset(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - 
return ord(c)-SCRIPT_RANGES[lang][0] - -def offset_to_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return chr(c+SCRIPT_RANGES[lang][0]) - -def in_coordinated_range(c_offset): - """ - Applicable to Brahmi derived Indic scripts - """ - return (c_offset>=COORDINATED_RANGE_START_INCLUSIVE and c_offset<=COORDINATED_RANGE_END_INCLUSIVE) - -def is_indiclang_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - o=get_offset(c,lang) - return (o>=0 and o<=0x7f) or ord(c)==DANDA or ord(c)==DOUBLE_DANDA - -# def is_vowel(c,lang): -# """ -# Is the character a vowel -# """ -# o=get_offset(c,lang) -# return (o>=0x04 and o<=0x14) - -# def is_vowel_sign(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o>=0x3e and o<=0x4c) - -# def is_halanta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==HALANTA_OFFSET) - -# def is_nukta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==NUKTA_OFFSET) - -# def is_aum(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o==AUM_OFFSET) - -# def is_consonant(c,lang): -# """ -# Is the character a consonant -# """ -# o=get_offset(c,lang) -# return (o>=0x15 and o<=0x39) - -# def is_velar(c,lang): -# """ -# Is the character a velar -# """ -# o=get_offset(c,lang) -# return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -# def is_palatal(c,lang): -# """ -# Is the character a palatal -# """ -# o=get_offset(c,lang) -# return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -# def is_retroflex(c,lang): -# """ -# Is the character a retroflex -# """ -# o=get_offset(c,lang) -# return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -# def is_dental(c,lang): -# """ -# Is the character a dental -# """ -# o=get_offset(c,lang) -# return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -# def is_labial(c,lang): -# """ -# Is the character a labial -# """ -# o=get_offset(c,lang) -# return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -# def is_voiced(c,lang): -# """ -# Is the character a voiced consonant -# """ -# o=get_offset(c,lang) -# return o in VOICED_LIST - -# def is_unvoiced(c,lang): -# """ -# Is the character a unvoiced consonant -# """ -# o=get_offset(c,lang) -# return o in UNVOICED_LIST - -# def is_aspirated(c,lang): -# """ -# Is the character a aspirated consonant -# """ -# o=get_offset(c,lang) -# return o in ASPIRATED_LIST - -# def is_unaspirated(c,lang): -# """ -# Is the character a unaspirated consonant -# """ -# o=get_offset(c,lang) -# return o in UNASPIRATED_LIST - -# def is_nasal(c,lang): -# """ -# Is the character a nasal consonant -# """ -# o=get_offset(c,lang) -# return o in NASAL_LIST - -# def is_fricative(c,lang): -# """ -# Is the character a fricative consonant -# """ -# o=get_offset(c,lang) -# return o in FRICATIVE_LIST - -# def is_approximant(c,lang): -# """ -# Is the character an approximant consonant -# """ -# o=get_offset(c,lang) -# return o in APPROXIMANT_LIST - -# def is_number(c,lang): -# """ -# Is the character a number -# """ -# o=get_offset(c,lang) -# return (o>=0x66 and o<=0x6f) - - -def is_vowel(c,lang): - """ - Is the character a vowel - """ - o=get_offset(c,lang) - return (o>=0x04 and o<=0x14) - -def is_vowel_sign(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o>=0x3e and o<=0x4c) - -def is_halanta(c,lang): - """ - Is the character the halanta character - """ - 
o=get_offset(c,lang) - return (o==HALANTA_OFFSET) - -def is_nukta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==NUKTA_OFFSET) - -def is_aum(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o==AUM_OFFSET) - -def is_consonant(c,lang): - """ - Is the character a consonant - """ - o=get_offset(c,lang) - return (o>=0x15 and o<=0x39) - -def is_velar(c,lang): - """ - Is the character a velar - """ - o=get_offset(c,lang) - return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -def is_palatal(c,lang): - """ - Is the character a palatal - """ - o=get_offset(c,lang) - return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -def is_retroflex(c,lang): - """ - Is the character a retroflex - """ - o=get_offset(c,lang) - return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -def is_dental(c,lang): - """ - Is the character a dental - """ - o=get_offset(c,lang) - return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -def is_labial(c,lang): - """ - Is the character a labial - """ - o=get_offset(c,lang) - return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -def is_voiced(c,lang): - """ - Is the character a voiced consonant - """ - o=get_offset(c,lang) - return o in VOICED_LIST - -def is_unvoiced(c,lang): - """ - Is the character a unvoiced consonant - """ - o=get_offset(c,lang) - return o in UNVOICED_LIST - -def is_aspirated(c,lang): - """ - Is the character a aspirated consonant - """ - o=get_offset(c,lang) - return o in ASPIRATED_LIST - -def is_unaspirated(c,lang): - """ - Is the character a unaspirated consonant - """ - o=get_offset(c,lang) - return o in UNASPIRATED_LIST - -def is_nasal(c,lang): - """ - Is the character a nasal consonant - """ - o=get_offset(c,lang) - return o in NASAL_LIST - -def is_fricative(c,lang): - """ - Is the character a fricative consonant - """ - o=get_offset(c,lang) - return o in FRICATIVE_LIST - -def is_approximant(c,lang): - """ - Is the character an approximant consonant - """ - o=get_offset(c,lang) - return o in APPROXIMANT_LIST - -def is_number(c,lang): - """ - Is the character a number - """ - o=get_offset(c,lang) - return (o>=0x66 and o<=0x6f) - - -################################################## - -def is_vowel_offset(c_offset): - """ - Is the offset a vowel - """ - return (c_offset>=0x04 and c_offset<=0x14) - -def is_vowel_sign_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset>=0x3e and c_offset<=0x4c) - -def is_halanta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==HALANTA_OFFSET) - -def is_nukta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==NUKTA_OFFSET) - -def is_aum_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset==AUM_OFFSET) - -def is_consonant_offset(c_offset): - """ - Is the offset a consonant - """ - return (c_offset>=0x15 and c_offset<=0x39) - -def is_velar_offset(c_offset): - """ - Is the offset a velar - """ - return (c_offset>=VELAR_RANGE[0] and c_offset<=VELAR_RANGE[1]) - -def is_palatal_offset(c_offset): - """ - Is the offset a palatal - """ - return (c_offset>=PALATAL_RANGE[0] and c_offset<=PALATAL_RANGE[1]) - -def is_retroflex_offset(c_offset): - """ - Is the offset a retroflex - """ - return (c_offset>=RETROFLEX_RANGE[0] and c_offset<=RETROFLEX_RANGE[1]) - -def is_dental_offset(c_offset): - """ - Is the offset a dental - """ - return (c_offset>=DENTAL_RANGE[0] and c_offset<=DENTAL_RANGE[1]) - -def 
is_labial_offset(c_offset): - """ - Is the offset a labial - """ - return (c_offset>=LABIAL_RANGE[0] and c_offset<=LABIAL_RANGE[1]) - -def is_voiced_offset(c_offset): - """ - Is the offset a voiced consonant - """ - return c_offset in VOICED_LIST - -def is_unvoiced_offset(c_offset): - """ - Is the offset a unvoiced consonant - """ - return c_offset in UNVOICED_LIST - -def is_aspirated_offset(c_offset): - """ - Is the offset a aspirated consonant - """ - return c_offset in ASPIRATED_LIST - -def is_unaspirated_offset(c_offset): - """ - Is the offset a unaspirated consonant - """ - return c_offset in UNASPIRATED_LIST - -def is_nasal_offset(c_offset): - """ - Is the offset a nasal consonant - """ - return c_offset in NASAL_LIST - -def is_fricative_offset(c_offset): - """ - Is the offset a fricative consonant - """ - return c_offset in FRICATIVE_LIST - -def is_approximant_offset(c_offset): - """ - Is the offset an approximant consonant - """ - return c_offset in APPROXIMANT_LIST - -def is_number_offset(c_offset): - """ - Is the offset a number - """ - return (c_offset>=0x66 and c_offset<=0x6f) diff --git a/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py b/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py deleted file mode 100644 index 6107bb6b3e430457d55e65e19c95d4ef241035e1..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py +++ /dev/null @@ -1,265 +0,0 @@ -import os -import string -import shutil -from itertools import permutations, chain -from collections import defaultdict -from tqdm import tqdm -import sys - -INDIC_LANGS = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"] -# we will be testing the overlaps of training data with all these benchmarks -# benchmarks = ['wat2021-devtest', 'wat2020-devtest', 'wat-2018', 'wmt-news', 'ufal-ta', 'pmi'] - - -def read_lines(path): - # if path doesnt exist, return empty list - if not os.path.exists(path): - return [] - with open(path, "r") as f: - lines = f.readlines() - return lines - - -def create_txt(outFile, lines): - add_newline = not "\n" in lines[0] - outfile = open("{0}".format(outFile), "w") - for line in lines: - if add_newline: - outfile.write(line + "\n") - else: - outfile.write(line) - - outfile.close() - - -def pair_dedup_files(src_file, tgt_file): - src_lines = read_lines(src_file) - tgt_lines = read_lines(tgt_file) - len_before = len(src_lines) - - src_dedupped, tgt_dedupped = pair_dedup_lists(src_lines, tgt_lines) - - len_after = len(src_dedupped) - num_duplicates = len_before - len_after - - print(f"Dropped duplicate pairs in {src_file} Num duplicates -> {num_duplicates}") - create_txt(src_file, src_dedupped) - create_txt(tgt_file, tgt_dedupped) - - -def pair_dedup_lists(src_list, tgt_list): - src_tgt = list(set(zip(src_list, tgt_list))) - src_deduped, tgt_deduped = zip(*src_tgt) - return src_deduped, tgt_deduped - - -def strip_and_normalize(line): - # lowercase line, remove spaces and strip punctuation - - # one of the fastest way to add an exclusion list and remove that - # list of characters from a string - # https://towardsdatascience.com/how-to-efficiently-remove-punctuations-from-a-string-899ad4a059fb - exclist = string.punctuation + "\u0964" - table_ = str.maketrans("", "", exclist) - - line = line.replace(" ", "").lower() - # dont use this method, it is painfully slow - # line = "".join([i for i in line if i not in string.punctuation]) - line = 
line.translate(table_) - return line - - -def expand_tupled_list(list_of_tuples): - # convert list of tuples into two lists - # https://stackoverflow.com/questions/8081545/how-to-convert-list-of-tuples-to-multiple-lists - # [(en, as), (as, bn), (bn, gu)] - > [en, as, bn], [as, bn, gu] - list_a, list_b = map(list, zip(*list_of_tuples)) - return list_a, list_b - - -def get_src_tgt_lang_lists(many2many=False): - if many2many is False: - SRC_LANGS = ["en"] - TGT_LANGS = INDIC_LANGS - else: - all_languages = INDIC_LANGS + ["en"] - # lang_pairs = list(permutations(all_languages, 2)) - - SRC_LANGS, TGT_LANGS = all_languages, all_languages - - return SRC_LANGS, TGT_LANGS - - -def normalize_and_gather_all_benchmarks(devtest_dir, many2many=False): - - # This is a dict of dict of lists - # the first keys are for lang-pair, the second keys are for src/tgt - # the values are the devtest lines. - # so devtest_pairs_normalized[en-as][src] will store src(en lines) - # so devtest_pairs_normalized[en-as][tgt] will store tgt(as lines) - devtest_pairs_normalized = defaultdict(lambda: defaultdict(list)) - SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many) - benchmarks = os.listdir(devtest_dir) - for dataset in benchmarks: - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - if dataset == "wat2021-devtest": - # wat2021 dev and test sets have differnet folder structure - src_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{src_lang}") - tgt_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{tgt_lang}") - src_test = read_lines(f"{devtest_dir}/{dataset}/test.{src_lang}") - tgt_test = read_lines(f"{devtest_dir}/{dataset}/test.{tgt_lang}") - else: - src_dev = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{src_lang}" - ) - tgt_dev = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{tgt_lang}" - ) - src_test = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{src_lang}" - ) - tgt_test = read_lines( - f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{tgt_lang}" - ) - - # if the tgt_pair data doesnt exist for a particular test set, - # it will be an empty list - if tgt_test == [] or tgt_dev == []: - # print(f'{dataset} does not have {src_lang}-{tgt_lang} data') - continue - - # combine both dev and test sets into one - src_devtest = src_dev + src_test - tgt_devtest = tgt_dev + tgt_test - - src_devtest = [strip_and_normalize(line) for line in src_devtest] - tgt_devtest = [strip_and_normalize(line) for line in tgt_devtest] - - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"].extend( - src_devtest - ) - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"].extend( - tgt_devtest - ) - - # dedup merged benchmark datasets - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - src_devtest, tgt_devtest = ( - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"], - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"], - ) - # if the devtest data doesnt exist for the src-tgt pair then continue - if src_devtest == [] or tgt_devtest == []: - continue - src_devtest, tgt_devtest = pair_dedup_lists(src_devtest, tgt_devtest) - ( - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"], - devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"], - ) = ( - src_devtest, - tgt_devtest, - ) - - return devtest_pairs_normalized - - -def remove_train_devtest_overlaps(train_dir, devtest_dir, many2many=False): - - devtest_pairs_normalized = 
normalize_and_gather_all_benchmarks( - devtest_dir, many2many - ) - - SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many) - - if not many2many: - all_src_sentences_normalized = [] - for key in devtest_pairs_normalized: - all_src_sentences_normalized.extend(devtest_pairs_normalized[key]["src"]) - # remove all duplicates. Now this contains all the normalized - # english sentences in all test benchmarks across all lang pair - all_src_sentences_normalized = list(set(all_src_sentences_normalized)) - else: - all_src_sentences_normalized = None - - src_overlaps = [] - tgt_overlaps = [] - for src_lang in SRC_LANGS: - for tgt_lang in TGT_LANGS: - if src_lang == tgt_lang: - continue - new_src_train = [] - new_tgt_train = [] - - pair = f"{src_lang}-{tgt_lang}" - src_train = read_lines(f"{train_dir}/{pair}/train.{src_lang}") - tgt_train = read_lines(f"{train_dir}/{pair}/train.{tgt_lang}") - - len_before = len(src_train) - if len_before == 0: - continue - - src_train_normalized = [strip_and_normalize(line) for line in src_train] - tgt_train_normalized = [strip_and_normalize(line) for line in tgt_train] - - if all_src_sentences_normalized: - src_devtest_normalized = all_src_sentences_normalized - else: - src_devtest_normalized = devtest_pairs_normalized[pair]["src"] - - tgt_devtest_normalized = devtest_pairs_normalized[pair]["tgt"] - - # compute all src and tgt super strict overlaps for a lang pair - overlaps = set(src_train_normalized) & set(src_devtest_normalized) - src_overlaps.extend(list(overlaps)) - - overlaps = set(tgt_train_normalized) & set(tgt_devtest_normalized) - tgt_overlaps.extend(list(overlaps)) - # dictionaries offer o(1) lookup - src_overlaps_dict = {} - tgt_overlaps_dict = {} - for line in src_overlaps: - src_overlaps_dict[line] = 1 - for line in tgt_overlaps: - tgt_overlaps_dict[line] = 1 - - # loop to remove the ovelapped data - idx = -1 - for src_line_norm, tgt_line_norm in tqdm( - zip(src_train_normalized, tgt_train_normalized), total=len_before - ): - idx += 1 - if src_overlaps_dict.get(src_line_norm, None): - continue - if tgt_overlaps_dict.get(tgt_line_norm, None): - continue - new_src_train.append(src_train[idx]) - new_tgt_train.append(tgt_train[idx]) - - len_after = len(new_src_train) - print( - f"Detected overlaps between train and devetest for {pair} is {len_before - len_after}" - ) - print(f"saving new files at {train_dir}/{pair}/") - create_txt(f"{train_dir}/{pair}/train.{src_lang}", new_src_train) - create_txt(f"{train_dir}/{pair}/train.{tgt_lang}", new_tgt_train) - - -if __name__ == "__main__": - train_data_dir = sys.argv[1] - # benchmarks directory should contains all the test sets - devtest_data_dir = sys.argv[2] - if len(sys.argv) == 3: - many2many = False - elif len(sys.argv) == 4: - many2many = sys.argv[4] - if many2many.lower() == "true": - many2many = True - else: - many2many = False - remove_train_devtest_overlaps(train_data_dir, devtest_data_dir, many2many) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py deleted file mode 100644 index df721d683113b44957149cfc3cddaba36520a22c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -""" -Signal processing-based evaluation using waveforms -""" -import numpy as np -import os.path as op - -import torchaudio -import tqdm -from tabulate import tabulate - -from examples.speech_synthesis.utils import ( - gross_pitch_error, voicing_decision_error, f0_frame_error -) -from examples.speech_synthesis.evaluation.eval_sp import load_eval_spec - - -def difference_function(x, n, tau_max): - """ - Compute difference function of data x. This solution is implemented directly - with Numpy fft. - - - :param x: audio data - :param n: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - x = np.array(x, np.float64) - w = x.size - tau_max = min(tau_max, w) - x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size) - fc = np.fft.rfft(x, size_pad) - conv = np.fft.irfft(fc * fc.conjugate())[:tau_max] - return x_cumsum[w:w - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - \ - 2 * conv - - -def cumulative_mean_normalized_difference_function(df, n): - """ - Compute cumulative mean normalized difference function (CMND). - - :param df: Difference function - :param n: length of data - :return: cumulative mean normalized difference function - :rtype: list - """ - - # scipy method - cmn_df = df[1:] * range(1, n) / np.cumsum(df[1:]).astype(float) - return np.insert(cmn_df, 0, 1) - - -def get_pitch(cmdf, tau_min, tau_max, harmo_th=0.1): - """ - Return fundamental period of a frame based on CMND function. - - :param cmdf: Cumulative Mean Normalized Difference function - :param tau_min: minimum period for speech - :param tau_max: maximum period for speech - :param harmo_th: harmonicity threshold to determine if it is necessary to - compute pitch frequency - :return: fundamental period if there are values under the threshold, 0 otherwise - :rtype: float - """ - tau = tau_min - while tau < tau_max: - if cmdf[tau] < harmo_th: - while tau + 1 < tau_max and cmdf[tau + 1] < cmdf[tau]: - tau += 1 - return tau - tau += 1 - - return 0 # if unvoiced - - -def compute_yin(sig, sr, w_len=512, w_step=256, f0_min=100, f0_max=500, - harmo_thresh=0.1): - """ - - Compute the Yin Algorithm. Return fundamental frequency and harmonic rate. - - https://github.com/NVIDIA/mellotron adaptation of - https://github.com/patriceguyot/Yin - - :param sig: Audio signal (list of float) - :param sr: sampling rate (int) - :param w_len: size of the analysis window (samples) - :param w_step: size of the lag between two consecutive windows (samples) - :param f0_min: Minimum fundamental frequency that can be detected (hertz) - :param f0_max: Maximum fundamental frequency that can be detected (hertz) - :param harmo_thresh: Threshold of detection. The algorithm returns the - first minimum of the CMND function below this threshold. 
- - :returns: - - * pitches: list of fundamental frequencies, - * harmonic_rates: list of harmonic rate values for each fundamental - frequency value (= confidence value) - * argmins: minimums of the Cumulative Mean Normalized DifferenceFunction - * times: list of time of each estimation - :rtype: tuple - """ - - tau_min = int(sr / f0_max) - tau_max = int(sr / f0_min) - - # time values for each analysis window - time_scale = range(0, len(sig) - w_len, w_step) - times = [t/float(sr) for t in time_scale] - frames = [sig[t:t + w_len] for t in time_scale] - - pitches = [0.0] * len(time_scale) - harmonic_rates = [0.0] * len(time_scale) - argmins = [0.0] * len(time_scale) - - for i, frame in enumerate(frames): - # Compute YIN - df = difference_function(frame, w_len, tau_max) - cm_df = cumulative_mean_normalized_difference_function(df, tau_max) - p = get_pitch(cm_df, tau_min, tau_max, harmo_thresh) - - # Get results - if np.argmin(cm_df) > tau_min: - argmins[i] = float(sr / np.argmin(cm_df)) - if p != 0: # A pitch was found - pitches[i] = float(sr / p) - harmonic_rates[i] = cm_df[p] - else: # No pitch, but we compute a value of the harmonic rate - harmonic_rates[i] = min(cm_df) - - return pitches, harmonic_rates, argmins, times - - -def extract_f0(samples): - f0_samples = [] - for sample in tqdm.tqdm(samples): - if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]): - f0_samples.append(None) - continue - - # assume single channel - yref, sr = torchaudio.load(sample["ref"]) - ysyn, _sr = torchaudio.load(sample["syn"]) - yref, ysyn = yref[0], ysyn[0] - assert sr == _sr, f"{sr} != {_sr}" - - yref_f0 = compute_yin(yref, sr) - ysyn_f0 = compute_yin(ysyn, sr) - - f0_samples += [ - { - "ref": yref_f0, - "syn": ysyn_f0 - } - ] - - return f0_samples - - -def eval_f0_error(samples, distortion_fn): - results = [] - for sample in tqdm.tqdm(samples): - if sample is None: - results.append(None) - continue - # assume single channel - yref_f, _, _, yref_t = sample["ref"] - ysyn_f, _, _, ysyn_t = sample["syn"] - - yref_f = np.array(yref_f) - yref_t = np.array(yref_t) - ysyn_f = np.array(ysyn_f) - ysyn_t = np.array(ysyn_t) - - distortion = distortion_fn(yref_t, yref_f, ysyn_t, ysyn_f) - results.append((distortion.item(), - len(yref_f), - len(ysyn_f) - )) - return results - - -def eval_gross_pitch_error(samples): - return eval_f0_error(samples, gross_pitch_error) - - -def eval_voicing_decision_error(samples): - return eval_f0_error(samples, voicing_decision_error) - - -def eval_f0_frame_error(samples): - return eval_f0_error(samples, f0_frame_error) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - res = { - "nutt": len(results), - "error": results[:, 0].mean(), - "std": results[:, 0].std(), - "dur_ref": int(results[:, 1].sum()), - "dur_syn": int(results[:, 2].sum()), - } - print(tabulate([res.values()], res.keys(), floatfmt=".4f")) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_f0, gpe, vde, ffe, show_bin): - samples = load_eval_spec(eval_f0) - if gpe or vde or ffe: - f0_samples = extract_f0(samples) - - if gpe: - print("===== Evaluate Gross 
Pitch Error =====") - results = eval_gross_pitch_error(f0_samples) - print_results(results, show_bin) - if vde: - print("===== Evaluate Voicing Decision Error =====") - results = eval_voicing_decision_error(f0_samples) - print_results(results, show_bin) - if ffe: - print("===== Evaluate F0 Frame Error =====") - results = eval_f0_frame_error(f0_samples) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("eval_f0") - parser.add_argument("--gpe", action="store_true") - parser.add_argument("--vde", action="store_true") - parser.add_argument("--ffe", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_f0, args.gpe, args.vde, args.ffe, args.show_bin) diff --git a/spaces/Illumotion/Koboldcpp/common/console.h b/spaces/Illumotion/Koboldcpp/common/console.h deleted file mode 100644 index ec175269b9d8af48803d0b6e618d008a9ab99b4d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/common/console.h +++ /dev/null @@ -1,19 +0,0 @@ -// Console functions - -#pragma once - -#include - -namespace console { - enum display_t { - reset = 0, - prompt, - user_input, - error - }; - - void init(bool use_simple_io, bool use_advanced_display); - void cleanup(); - void set_display(display_t display); - bool readline(std::string & line, bool multiline_input); -} diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py deleted file mode 100644 index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py +++ /dev/null @@ -1,425 +0,0 @@ -import torch -import torch.multiprocessing as multiprocessing -from torch._C import _set_worker_signal_handlers, \ - _remove_worker_pids, _error_if_any_worker_fails -try: - from torch._C import _set_worker_pids -except: - from torch._C import _update_worker_pids as _set_worker_pids -from .sampler import SequentialSampler, RandomSampler, BatchSampler -import signal -import collections -import re -import sys -import threading -import traceback -from torch._six import string_classes, int_classes -import numpy as np - -if sys.version_info[0] == 2: - import Queue as queue -else: - import queue - - -class ExceptionWrapper(object): - r"Wraps an exception plus traceback to communicate across threads" - - def __init__(self, exc_info): - self.exc_type = exc_info[0] - self.exc_msg = "".join(traceback.format_exception(*exc_info)) - - -_use_shared_memory = False -"""Whether to use shared memory in default_collate""" - - -def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id): - global _use_shared_memory - _use_shared_memory = True - - # Intialize C side signal handlers for SIGBUS and SIGSEGV. Python signal - # module's handlers are executed after Python returns from C low-level - # handlers, likely when the same fatal signal happened again already. - # https://docs.python.org/3/library/signal.html Sec. 
18.8.1.1 - _set_worker_signal_handlers() - - torch.set_num_threads(1) - torch.manual_seed(seed) - np.random.seed(seed) - - if init_fn is not None: - init_fn(worker_id) - - while True: - r = index_queue.get() - if r is None: - break - idx, batch_indices = r - try: - samples = collate_fn([dataset[i] for i in batch_indices]) - except Exception: - data_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - data_queue.put((idx, samples)) - - -def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id): - if pin_memory: - torch.cuda.set_device(device_id) - - while True: - try: - r = in_queue.get() - except Exception: - if done_event.is_set(): - return - raise - if r is None: - break - if isinstance(r[1], ExceptionWrapper): - out_queue.put(r) - continue - idx, batch = r - try: - if pin_memory: - batch = pin_memory_batch(batch) - except Exception: - out_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - out_queue.put((idx, batch)) - -numpy_type_map = { - 'float64': torch.DoubleTensor, - 'float32': torch.FloatTensor, - 'float16': torch.HalfTensor, - 'int64': torch.LongTensor, - 'int32': torch.IntTensor, - 'int16': torch.ShortTensor, - 'int8': torch.CharTensor, - 'uint8': torch.ByteTensor, -} - - -def default_collate(batch): - "Puts each data field into a tensor with outer dimension batch size" - - error_msg = "batch must contain tensors, numbers, dicts or lists; found {}" - elem_type = type(batch[0]) - if torch.is_tensor(batch[0]): - out = None - if _use_shared_memory: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = batch[0].storage()._new_shared(numel) - out = batch[0].new(storage) - return torch.stack(batch, 0, out=out) - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - elem = batch[0] - if elem_type.__name__ == 'ndarray': - # array of string classes and object - if re.search('[SaUO]', elem.dtype.str) is not None: - raise TypeError(error_msg.format(elem.dtype)) - - return torch.stack([torch.from_numpy(b) for b in batch], 0) - if elem.shape == (): # scalars - py_type = float if elem.dtype.name.startswith('float') else int - return numpy_type_map[elem.dtype.name](list(map(py_type, batch))) - elif isinstance(batch[0], int_classes): - return torch.LongTensor(batch) - elif isinstance(batch[0], float): - return torch.DoubleTensor(batch) - elif isinstance(batch[0], string_classes): - return batch - elif isinstance(batch[0], collections.Mapping): - return {key: default_collate([d[key] for d in batch]) for key in batch[0]} - elif isinstance(batch[0], collections.Sequence): - transposed = zip(*batch) - return [default_collate(samples) for samples in transposed] - - raise TypeError((error_msg.format(type(batch[0])))) - - -def pin_memory_batch(batch): - if torch.is_tensor(batch): - return batch.pin_memory() - elif isinstance(batch, string_classes): - return batch - elif isinstance(batch, collections.Mapping): - return {k: pin_memory_batch(sample) for k, sample in batch.items()} - elif isinstance(batch, collections.Sequence): - return [pin_memory_batch(sample) for sample in batch] - else: - return batch - - -_SIGCHLD_handler_set = False -"""Whether SIGCHLD handler is set for DataLoader worker failures. 
Only one -handler needs to be set for all DataLoaders in a process.""" - - -def _set_SIGCHLD_handler(): - # Windows doesn't support SIGCHLD handler - if sys.platform == 'win32': - return - # can't set signal in child threads - if not isinstance(threading.current_thread(), threading._MainThread): - return - global _SIGCHLD_handler_set - if _SIGCHLD_handler_set: - return - previous_handler = signal.getsignal(signal.SIGCHLD) - if not callable(previous_handler): - previous_handler = None - - def handler(signum, frame): - # This following call uses `waitid` with WNOHANG from C side. Therefore, - # Python can still get and update the process status successfully. - _error_if_any_worker_fails() - if previous_handler is not None: - previous_handler(signum, frame) - - signal.signal(signal.SIGCHLD, handler) - _SIGCHLD_handler_set = True - - -class DataLoaderIter(object): - "Iterates once over the DataLoader's dataset, as specified by the sampler" - - def __init__(self, loader): - self.dataset = loader.dataset - self.collate_fn = loader.collate_fn - self.batch_sampler = loader.batch_sampler - self.num_workers = loader.num_workers - self.pin_memory = loader.pin_memory and torch.cuda.is_available() - self.timeout = loader.timeout - self.done_event = threading.Event() - - self.sample_iter = iter(self.batch_sampler) - - if self.num_workers > 0: - self.worker_init_fn = loader.worker_init_fn - self.index_queue = multiprocessing.SimpleQueue() - self.worker_result_queue = multiprocessing.SimpleQueue() - self.batches_outstanding = 0 - self.worker_pids_set = False - self.shutdown = False - self.send_idx = 0 - self.rcvd_idx = 0 - self.reorder_dict = {} - - base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0] - self.workers = [ - multiprocessing.Process( - target=_worker_loop, - args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn, - base_seed + i, self.worker_init_fn, i)) - for i in range(self.num_workers)] - - if self.pin_memory or self.timeout > 0: - self.data_queue = queue.Queue() - if self.pin_memory: - maybe_device_id = torch.cuda.current_device() - else: - # do not initialize cuda context if not necessary - maybe_device_id = None - self.worker_manager_thread = threading.Thread( - target=_worker_manager_loop, - args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory, - maybe_device_id)) - self.worker_manager_thread.daemon = True - self.worker_manager_thread.start() - else: - self.data_queue = self.worker_result_queue - - for w in self.workers: - w.daemon = True # ensure that the worker exits on process exit - w.start() - - _set_worker_pids(id(self), tuple(w.pid for w in self.workers)) - _set_SIGCHLD_handler() - self.worker_pids_set = True - - # prime the prefetch loop - for _ in range(2 * self.num_workers): - self._put_indices() - - def __len__(self): - return len(self.batch_sampler) - - def _get_batch(self): - if self.timeout > 0: - try: - return self.data_queue.get(timeout=self.timeout) - except queue.Empty: - raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout)) - else: - return self.data_queue.get() - - def __next__(self): - if self.num_workers == 0: # same-process loading - indices = next(self.sample_iter) # may raise StopIteration - batch = self.collate_fn([self.dataset[i] for i in indices]) - if self.pin_memory: - batch = pin_memory_batch(batch) - return batch - - # check if the next sample has already been generated - if self.rcvd_idx in self.reorder_dict: - batch = self.reorder_dict.pop(self.rcvd_idx) - return 
self._process_next_batch(batch) - - if self.batches_outstanding == 0: - self._shutdown_workers() - raise StopIteration - - while True: - assert (not self.shutdown and self.batches_outstanding > 0) - idx, batch = self._get_batch() - self.batches_outstanding -= 1 - if idx != self.rcvd_idx: - # store out-of-order samples - self.reorder_dict[idx] = batch - continue - return self._process_next_batch(batch) - - next = __next__ # Python 2 compatibility - - def __iter__(self): - return self - - def _put_indices(self): - assert self.batches_outstanding < 2 * self.num_workers - indices = next(self.sample_iter, None) - if indices is None: - return - self.index_queue.put((self.send_idx, indices)) - self.batches_outstanding += 1 - self.send_idx += 1 - - def _process_next_batch(self, batch): - self.rcvd_idx += 1 - self._put_indices() - if isinstance(batch, ExceptionWrapper): - raise batch.exc_type(batch.exc_msg) - return batch - - def __getstate__(self): - # TODO: add limited pickling support for sharing an iterator - # across multiple threads for HOGWILD. - # Probably the best way to do this is by moving the sample pushing - # to a separate thread and then just sharing the data queue - # but signalling the end is tricky without a non-blocking API - raise NotImplementedError("DataLoaderIterator cannot be pickled") - - def _shutdown_workers(self): - try: - if not self.shutdown: - self.shutdown = True - self.done_event.set() - # if worker_manager_thread is waiting to put - while not self.data_queue.empty(): - self.data_queue.get() - for _ in self.workers: - self.index_queue.put(None) - # done_event should be sufficient to exit worker_manager_thread, - # but be safe here and put another None - self.worker_result_queue.put(None) - finally: - # removes pids no matter what - if self.worker_pids_set: - _remove_worker_pids(id(self)) - self.worker_pids_set = False - - def __del__(self): - if self.num_workers > 0: - self._shutdown_workers() - - -class DataLoader(object): - """ - Data loader. Combines a dataset and a sampler, and provides - single- or multi-process iterators over the dataset. - - Arguments: - dataset (Dataset): dataset from which to load the data. - batch_size (int, optional): how many samples per batch to load - (default: 1). - shuffle (bool, optional): set to ``True`` to have the data reshuffled - at every epoch (default: False). - sampler (Sampler, optional): defines the strategy to draw samples from - the dataset. If specified, ``shuffle`` must be False. - batch_sampler (Sampler, optional): like sampler, but returns a batch of - indices at a time. Mutually exclusive with batch_size, shuffle, - sampler, and drop_last. - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means that the data will be loaded in the main process. - (default: 0) - collate_fn (callable, optional): merges a list of samples to form a mini-batch. - pin_memory (bool, optional): If ``True``, the data loader will copy tensors - into CUDA pinned memory before returning them. - drop_last (bool, optional): set to ``True`` to drop the last incomplete batch, - if the dataset size is not divisible by the batch size. If ``False`` and - the size of dataset is not divisible by the batch size, then the last batch - will be smaller. (default: False) - timeout (numeric, optional): if positive, the timeout value for collecting a batch - from workers. Should always be non-negative. 
(default: 0) - worker_init_fn (callable, optional): If not None, this will be called on each - worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as - input, after seeding and before data loading. (default: None) - - .. note:: By default, each worker will have its PyTorch seed set to - ``base_seed + worker_id``, where ``base_seed`` is a long generated - by main process using its RNG. You may use ``torch.initial_seed()`` to access - this value in :attr:`worker_init_fn`, which can be used to set other seeds - (e.g. NumPy) before data loading. - - .. warning:: If ``spawn'' start method is used, :attr:`worker_init_fn` cannot be an - unpicklable object, e.g., a lambda function. - """ - - def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, - num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False, - timeout=0, worker_init_fn=None): - self.dataset = dataset - self.batch_size = batch_size - self.num_workers = num_workers - self.collate_fn = collate_fn - self.pin_memory = pin_memory - self.drop_last = drop_last - self.timeout = timeout - self.worker_init_fn = worker_init_fn - - if timeout < 0: - raise ValueError('timeout option should be non-negative') - - if batch_sampler is not None: - if batch_size > 1 or shuffle or sampler is not None or drop_last: - raise ValueError('batch_sampler is mutually exclusive with ' - 'batch_size, shuffle, sampler, and drop_last') - - if sampler is not None and shuffle: - raise ValueError('sampler is mutually exclusive with shuffle') - - if self.num_workers < 0: - raise ValueError('num_workers cannot be negative; ' - 'use num_workers=0 to disable multiprocessing.') - - if batch_sampler is None: - if sampler is None: - if shuffle: - sampler = RandomSampler(dataset) - else: - sampler = SequentialSampler(dataset) - batch_sampler = BatchSampler(sampler, batch_size, drop_last) - - self.sampler = sampler - self.batch_sampler = batch_sampler - - def __iter__(self): - return DataLoaderIter(self) - - def __len__(self): - return len(self.batch_sampler) diff --git a/spaces/IvaElen/find_my_pic/app.py b/spaces/IvaElen/find_my_pic/app.py deleted file mode 100644 index 5f7c29ab977a8c54ac4e97248ec6d58c4a233d63..0000000000000000000000000000000000000000 --- a/spaces/IvaElen/find_my_pic/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import zipfile -import random -from PIL import Image - -import pandas as pd -import numpy as np -import streamlit as st - -import clip -import torch -import torchvision.transforms as transforms - -from get_similiarty import get_similiarity - - -device = "cuda" if torch.cuda.is_available() else "cpu" - -#load model -resnet50 -model_resnet, prerocess = clip.load("RN50", device=device) -#load model - ViT-B/32 -model_vit, preprocess = clip.load('ViT-B/32', device) - -#Распаковка ZIP-файла с фотографиями -zip_file_path = "sample.zip" -target_folder = "sample/" -with zipfile.ZipFile(zip_file_path, 'r') as zip_ref: - zip_ref.extractall(target_folder) - -df = pd.read_csv('results.csv', - sep = '|', - names = ['image_name', 'comment_number', 'comment'], - header=0) - -def find_image_disc(prompt, df, top_k): - img_descs = [] - img_descs_vit = [] - list_images_names, list_images_names_vit = get_similiarity(prompt, model_resnet, model_vit, top_k) - for img in list_images_names: - img_descs.append(random.choice(df[df['image_name'] == img.split('/')[-1]]['comment'].values).replace('.', '')) - #vit - for img in list_images_names_vit: - img_descs_vit.append(random.choice(df[df['image_name'] == 
img.split('/')[-1]]['comment'].values).replace('.', '')) - - return list_images_names, img_descs, list_images_names_vit, img_descs_vit - -st.image('image.png') -# st.title('Find my pic!') -col3, col4 = st.columns(2) -with col3: - st.image('3bd0e1e6-6b8a-4aa6-828a-c1756c6d38b2.jpeg') -with col4: - txt = st.text_area("Describe the picture you'd like to see") - -top_k = st.slider('Number of images', 1, 5, 3) - -if txt is not None: - if st.button('Find!'): - list_images, img_desc, list_images_vit, img_descs_vit = find_image_disc(txt, df, top_k) - col1, col2 = st.columns(2) - col1.header('ResNet50') - col2.header('ViT 32') - for ind, pic in enumerate(zip(list_images, list_images_vit)): - with col1: - st.image(pic[0]) - st.write(img_desc[ind]) - with col2: - st.image(pic[1]) - st.write(img_descs_vit[ind]) \ No newline at end of file diff --git a/spaces/JanBabela/Riffusion-Melodiff-v1/index.html b/spaces/JanBabela/Riffusion-Melodiff-v1/index.html deleted file mode 100644 index 8d0f38d02e80f7629999d9802221fa76dc118ba9..0000000000000000000000000000000000000000 --- a/spaces/JanBabela/Riffusion-Melodiff-v1/index.html +++ /dev/null @@ -1,88 +0,0 @@ - - - - - - Riffusion-Melodiff-v1 - - - -
    -

    Riffusion-Melodiff-v1

    -


    Riffusion-Melodiff is a simple but interesting idea (one I have not seen anywhere else) for creating cover versions of songs.

    -


    Riffusion-Melodiff is built on top of the Riffusion model, which is a Stable Diffusion model fine-tuned to generate mel spectrograms. (A spectrogram is a visual representation of audio obtained by decomposing the waveform into its frequency components.) Riffusion-Melodiff does not contain a new model; there was no new training or fine-tuning. It simply uses the same model as Riffusion in a different way.

    -
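    For readers who want a concrete picture of the representation involved, here is a minimal sketch (not code from this Space) of turning a short audio clip into a mel-spectrogram image, which is what the diffusion model actually edits. The file names and spectrogram parameters are illustrative assumptions; Riffusion's own conversion settings differ and live in its codebase and the Melodiff_v1 notebook.

```python
# Illustrative only: waveform -> mel-spectrogram image (parameters are assumptions).
import librosa
import numpy as np
from PIL import Image

audio, sr = librosa.load("clip.wav", sr=44100, mono=True)          # load a short clip
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=512)   # mel power spectrogram
mel_db = librosa.power_to_db(mel, ref=np.max)                      # convert to dB scale
norm = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min())     # scale to 0..1
img = Image.fromarray((255 * norm).astype(np.uint8)).convert("RGB")
img.save("input_spectrogram.png")                                  # image the pipeline will edit
```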

    Riffusion-Melodiff uses the Img2Img pipeline from the Diffusers library to modify images of mel spectrograms and so produce new versions of music. Just upload your audio in wav format (if your audio is in a different format, convert it to wav first with an online converter). Then run the Img2Img pipeline from the Diffusers library with your prompt, seed and strength. The strength parameter decides how much the modified audio relates to the initial audio versus the prompt. When strength is too low, the spectrogram stays too similar to the original one and no real modification is produced. When strength is too high, the spectrogram follows the new prompt too closely, which may lose the melody and/or tempo of the base image. Good values of strength are usually about 0.4-0.5.

    -
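    As an illustration of the workflow described above, the following is a minimal, non-authoritative sketch of an Img2Img call over a spectrogram image. The checkpoint name riffusion/riffusion-model-v1, the file names, the prompt and the parameter values are assumptions made for this example; the Melodiff_v1 notebook is the reference for the actual steps.

```python
# Hedged sketch: modify a mel-spectrogram image with Diffusers' img2img pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",      # assumed checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

init_spec = Image.open("input_spectrogram.png").convert("RGB")   # spectrogram of the source clip
generator = torch.Generator(device="cuda").manual_seed(42)       # fixed seed for reproducibility

result = pipe(
    prompt="saxophone playing the melody",
    image=init_spec,
    strength=0.45,        # ~0.4-0.5 keeps melody/tempo while changing the instrument
    guidance_scale=7.5,
    generator=generator,
)
result.images[0].save("modified_spectrogram.png")                # convert back to audio afterwards
```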

    Good modifications are possible with a proper choice of prompt, seed and strength. Such modifications keep the tempo and melody of the initial audio but change, for example, the instrument playing that melody. Modifications longer than 5 s are also possible with this pipeline: if you cut your audio into 5 s pieces and use the same prompt, seed and strength for each modification, the generated samples will be reasonably consistent, so concatenating them gives you a longer modified audio.

    -
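    A rough sketch of the chunk-and-concatenate idea follows. The modify_chunk() helper is hypothetical: it is assumed to convert a 5 s chunk to a spectrogram, run the Img2Img call from the previous sketch with a fixed prompt, seed and strength, and convert the result back to a waveform.

```python
# Hedged sketch: process a longer song 5 seconds at a time and stitch the results.
import numpy as np
import soundfile as sf

CHUNK_SECONDS = 5

def modify_song(in_wav, out_wav, modify_chunk):
    audio, sr = sf.read(in_wav)                      # waveform as a NumPy array
    chunk_len = CHUNK_SECONDS * sr
    pieces = []
    for start in range(0, len(audio), chunk_len):
        chunk = audio[start:start + chunk_len]
        # Same prompt/seed/strength for every chunk keeps the pieces consistent.
        pieces.append(modify_chunk(chunk, sr))
    sf.write(out_wav, np.concatenate(pieces), sr)    # longer modified audio
```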

    The quality of the generated music is not amazing (mediocre, I would say), and it needs a bit of prompt and seed engineering. But it shows one way cover versions of music could be made in the future.

    -

    - A Colab notebook is included, showing step by step how to do it: Melodiff_v1. -

    -


    Examples of music generated by modifying the underlying song:

    -

    - Amazing Grace, originally played by flute, modified to be played by violin - -

    -

    - Bella Cao, originally played by violin, modified to be played by saxophone - -

    -

    - Iko iko, originally played by accordion, modified to be played by saxophone - -

    -

    - When the Saints, originally played by violin, modified to be sung by vocals - -

    -


    Examples of longer music samples:

    -

    - Iko iko, originally played by accordion, modified to be played by saxophone - -

    -

    - Iko iko, originally played by accordion, modified to be played by violin - -

    -

    - When the Saints, originally played by piano, modified to be played by flute - -

    -


    I am using the standard (free) Google Colab GPU configuration for inference, with the default number of inference steps (23) from the underlying pipelines. With this setup it takes about 8 s to produce a 5 s modified sample, which is fine for a start, I would say.

    -
    - - diff --git a/spaces/JeffJing/ZookChatBot/steamship/cli/login.py b/spaces/JeffJing/ZookChatBot/steamship/cli/login.py deleted file mode 100644 index ed30dd4f353de3e47ec4f5633ee46d5ecdf284c6..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/cli/login.py +++ /dev/null @@ -1,11 +0,0 @@ -import time -import webbrowser - -import requests - -from steamship.base.error import SteamshipError - - -def login(api_base: str, web_base: str) -> str: # noqa: C901 - api_key = '319649F9-F862-418B-8671-AFAD6C7A0879' - return api_key diff --git a/spaces/Kayson/InstructDiffusion/dataset/README.md b/spaces/Kayson/InstructDiffusion/dataset/README.md deleted file mode 100644 index 4e24d27e13b5a5eb7313aba622a623991973e8b4..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/dataset/README.md +++ /dev/null @@ -1,62 +0,0 @@ -You can download these datasets: [COCO](http://cocodataset.org/#download), [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose#dataset), [MPII](http://human-pose.mpi-inf.mpg.de/), [AIC](https://arxiv.org/abs/1711.06475), [COCO-Stuff](https://github.com/nightrome/cocostuff), [RefCOCO](https://github.com/lichengunc/refer), [GrefCOCO](https://github.com/henghuiding/gRefCOCO), [GoPro](https://seungjunnah.github.io/Datasets/gopro), [REDS](https://seungjunnah.github.io/Datasets/reds.html), [SIDD](https://www.eecs.yorku.ca/~kamel/sidd/), [CLWD](https://arxiv.org/abs/2012.07616), [IP2PDataset](https://github.com/timothybrooks/instruct-pix2pix), [GIER](https://sites.google.com/view/gierdataset), [GQAInpaint](https://github.com/abyildirim/inst-inpaint), [MagicBrush](https://osu-nlp-group.github.io/MagicBrush/). The resulting data directory should look like this: - - InstructDiffusion - |-- data - `-- |-- coco - | |-- annotations - | `-- images - |-- mpii - | |-- annot - | `-- images - |-- crowdpose - | |-- json - | `-- images - |-- aic - | |-- annotations - | `-- ai_challenger_keypoint_train_20170902 - | - |-- coco-stuff - | |-- annotations - | |-- labels.txt - | `-- images - |-- coco_2014 - | |-- grefcoco - | | |-- grefs(unc).json - | | `-- instances.json - | |-- refcoco - | | |-- instances.json - | | |-- refs(google).p - | | `-- refs(unc).p - | `-- images - | - |-- GoPro - | |-- train - | `-- test - |-- REDS - | |-- train - | `-- val - |-- SIDD - | |-- train - | `-- val - |-- CLWD - | |-- train - | |-- test - | `-- watermark_logo - | - |-- clip-filtered-dataset - | |-- shard-00.zip - | |-- shard-01.zip - | `-- ... 
- |-- GIER_editing_data - | |-- images - | `-- GIER.json - |-- gqa-inpaint - | |-- images - | |-- images_inpainted - | |-- masks - | |-- train_scenes.json - | `-- meta_info.json - `-- MagicBrush - |-- data - |-- processed-train - `-- magic_train.json diff --git a/spaces/Kurugodu/myGenAiText/README.md b/spaces/Kurugodu/myGenAiText/README.md deleted file mode 100644 index 23451d53b27f02306b5c5c77fa362ed83b56ec18..0000000000000000000000000000000000000000 --- a/spaces/Kurugodu/myGenAiText/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAiText -emoji: 📈 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/data_preprocessors/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/data_preprocessors/__init__.py deleted file mode 100644 index a5077e03c9617195f740a4bdeb3cac895680f68e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/data_preprocessors/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .data_preprocessor import (BatchFixedSizePad, BatchResize, - BatchSyncRandomResize, BoxInstDataPreprocessor, - DetDataPreprocessor, - MultiBranchDataPreprocessor) - -__all__ = [ - 'DetDataPreprocessor', 'BatchSyncRandomResize', 'BatchFixedSizePad', - 'MultiBranchDataPreprocessor', 'BatchResize', 'BoxInstDataPreprocessor' -] diff --git a/spaces/Lamai/LAMAIGPT/autogpt/memory/no_memory.py b/spaces/Lamai/LAMAIGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. 
- """ - return {} diff --git a/spaces/Lamai/LAMAIGPT/autogpt/speech/__init__.py b/spaces/Lamai/LAMAIGPT/autogpt/speech/__init__.py deleted file mode 100644 index 2ff0d2bf48dc356bf810cb5a2063d6774e5fec6e..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/speech/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""This module contains the speech recognition and speech synthesis functions.""" -from autogpt.speech.say import say_text - -__all__ = ["say_text"] diff --git a/spaces/LanguageBind/LanguageBind/open_clip/openai.py b/spaces/LanguageBind/LanguageBind/open_clip/openai.py deleted file mode 100644 index 6c2c0235245c2e4f1217b3b2bfaf2acf78e74981..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/open_clip/openai.py +++ /dev/null @@ -1,90 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" - -import os -import warnings -from typing import List, Optional, Union - -import torch - -from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD -from .model import build_model_from_openai_state_dict, convert_weights_to_lp, get_cast_dtype -from .pretrained import get_pretrained_url, list_pretrained_models_by_tag, download_pretrained_from_url - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_models_by_tag('openai') - - -def load_openai_model( - name: str, - precision: Optional[str] = None, - device: Optional[Union[str, torch.device]] = None, - cache_dir: Optional[str] = None, -): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - precision: str - Model precision, if None defaults to 'fp32' if device == 'cpu' else 'fp16'. 
- device : Union[str, torch.device] - The device to put the loaded model - cache_dir : Optional[str] - The directory to cache the downloaded model weights - - Returns - ------- - model : torch.nn.Module - The CLIP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - if precision is None: - precision = 'fp32' if device == 'cpu' else 'fp16' - - if get_pretrained_url(name, 'openai'): - model_path = download_pretrained_from_url(get_pretrained_url(name, 'openai'), cache_dir=cache_dir) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}") - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location="cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - state_dict = torch.load(model_path, map_location="cpu") - - # Build a non-jit model from the OpenAI jitted model state dict - cast_dtype = get_cast_dtype(precision) - try: - model = build_model_from_openai_state_dict(state_dict or model.state_dict(), cast_dtype=cast_dtype) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict(sd, cast_dtype=cast_dtype) - - # model from OpenAI state dict is in manually cast fp16 mode, must be converted for AMP/fp32/bf16 use - model = model.to(device) - # FIXME support pure fp16/bf16 precision modes - if precision != 'fp16': - model.float() - if precision == 'bf16': - # for bf16, convert back to low-precision - convert_weights_to_lp(model, dtype=torch.bfloat16) - - # add mean / std attributes for consistency with OpenCLIP models - model.visual.image_mean = OPENAI_DATASET_MEAN - model.visual.image_std = OPENAI_DATASET_STD - return model diff --git a/spaces/Letheoricien/MLPC2023_MumBot/README.md b/spaces/Letheoricien/MLPC2023_MumBot/README.md deleted file mode 100644 index 8e3f8297b384f3daa2261765df065de80bb86ef1..0000000000000000000000000000000000000000 --- a/spaces/Letheoricien/MLPC2023_MumBot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MLPC2023 MumBot -emoji: ⚡ -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/util/__init__.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/util/__init__.py deleted file mode 100644 index 59e481eb93dda48c81e04dd491cd3c9190c8eeb4..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/util/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/inference.py b/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/inference.py deleted file mode 100644 index 8b0bfa8ad7b6b82ce98c771c91273b1c778bf47e..0000000000000000000000000000000000000000 --- a/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/inference.py +++ /dev/null @@ -1,151 +0,0 @@ -import torch -import librosa -import numpy as np -from pathlib import Path -from typing import Union, List -from pypinyin import lazy_pinyin, Style - -from .hparams import hparams as hp -from .utils.symbols import symbols -from .models.tacotron import Tacotron -from .utils.text import text_to_sequence -from .utils.logmmse import denoise, profile_noise -from ..log import logger - - -class Synthesizer: - def __init__(self, model_path: Path): - # Check for GPU - if torch.cuda.is_available(): - self.device = torch.device("cuda") - else: - self.device = torch.device("cpu") - logger.info(f"Synthesizer using device: {self.device}") - - self._model = Tacotron( - embed_dims=hp.tts_embed_dims, - num_chars=len(symbols), - encoder_dims=hp.tts_encoder_dims, - decoder_dims=hp.tts_decoder_dims, - n_mels=hp.num_mels, - fft_bins=hp.num_mels, - postnet_dims=hp.tts_postnet_dims, - encoder_K=hp.tts_encoder_K, - lstm_dims=hp.tts_lstm_dims, - postnet_K=hp.tts_postnet_K, - num_highways=hp.tts_num_highways, - dropout=hp.tts_dropout, - stop_threshold=hp.tts_stop_threshold, - speaker_embedding_size=hp.speaker_embedding_size, - ).to(self.device) - - self._model.load(model_path, self.device) - self._model.eval() - - logger.info( - 'Loaded synthesizer "%s" trained to step %d' - % (model_path.name, self._model.state_dict()["step"]) - ) - - def synthesize_spectrograms( - self, - texts: List[str], - embeddings: Union[np.ndarray, List[np.ndarray]], - return_alignments=False, - style_idx=0, - min_stop_token=5, - steps=2000, - ): - """ - Synthesizes mel spectrograms from texts and speaker embeddings. - - :param texts: a list of N text prompts to be synthesized - :param embeddings: a numpy array or list of speaker embeddings of shape (N, 256) - :param return_alignments: if True, a matrix representing the alignments between the - characters - and each decoder output step will be returned for each spectrogram - :return: a list of N melspectrograms as numpy arrays of shape (80, Mi), where Mi is the - sequence length of spectrogram i, and possibly the alignments. 
- """ - - logger.debug("Read " + str(texts)) - texts = [ - " ".join(lazy_pinyin(v, style=Style.TONE3, neutral_tone_with_five=True)) - for v in texts - ] - logger.debug("Synthesizing " + str(texts)) - # Preprocess text inputs - inputs = [text_to_sequence(text, hp.tts_cleaner_names) for text in texts] - if not isinstance(embeddings, list): - embeddings = [embeddings] - - # Batch inputs - batched_inputs = [ - inputs[i : i + hp.synthesis_batch_size] - for i in range(0, len(inputs), hp.synthesis_batch_size) - ] - batched_embeds = [ - embeddings[i : i + hp.synthesis_batch_size] - for i in range(0, len(embeddings), hp.synthesis_batch_size) - ] - - specs = [] - alignments = [] - for i, batch in enumerate(batched_inputs, 1): - logger.debug(f"\n| Generating {i}/{len(batched_inputs)}") - - # Pad texts so they are all the same length - text_lens = [len(text) for text in batch] - max_text_len = max(text_lens) - chars = [pad1d(text, max_text_len) for text in batch] - chars = np.stack(chars) - - # Stack speaker embeddings into 2D array for batch processing - speaker_embeds = np.stack(batched_embeds[i - 1]) - - # Convert to tensor - chars = torch.tensor(chars).long().to(self.device) - speaker_embeddings = torch.tensor(speaker_embeds).float().to(self.device) - - # Inference - _, mels, alignments = self._model.generate( - chars, - speaker_embeddings, - style_idx=style_idx, - min_stop_token=min_stop_token, - steps=steps, - ) - mels = mels.detach().cpu().numpy() - for m in mels: - # Trim silence from end of each spectrogram - while np.max(m[:, -1]) < hp.tts_stop_threshold: - m = m[:, :-1] - specs.append(m) - - logger.debug("\n\nDone.\n") - return (specs, alignments) if return_alignments else specs - - @staticmethod - def load_preprocess_wav(fpath): - """ - Loads and preprocesses an audio file under the same conditions the audio files were used to - train the synthesizer. 
- """ - wav = librosa.load(path=str(fpath), sr=hp.sample_rate)[0] - if hp.rescale: - wav = wav / np.abs(wav).max() * hp.rescaling_max - # denoise - if len(wav) > hp.sample_rate * (0.3 + 0.1): - noise_wav = np.concatenate( - [ - wav[: int(hp.sample_rate * 0.15)], - wav[-int(hp.sample_rate * 0.15) :], - ] - ) - profile = profile_noise(noise_wav, hp.sample_rate) - wav = denoise(wav, profile) - return wav - - -def pad1d(x, max_len, pad_value=0): - return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value) diff --git a/spaces/MathysL/AutoGPT4/autogpt/commands/__init__.py b/spaces/MathysL/AutoGPT4/autogpt/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Mehdihassan/stable-ts/README.md b/spaces/Mehdihassan/stable-ts/README.md deleted file mode 100644 index 725dfb59d85cbeaa103bb6001e8debb2724c5653..0000000000000000000000000000000000000000 --- a/spaces/Mehdihassan/stable-ts/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stable Ts -emoji: ⚡ -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/iter_based_runner.py deleted file mode 100644 index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. 
- """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. - """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. 
- """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. 
- - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index a0986143fa4f2bd36f5271354fe5f843f35b9e6f..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.uniformer.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. 
- """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/README.md b/spaces/Mileena/PIFu-Clothed-Human-Digitization/README.md deleted file mode 100644 index 53e34df08da55169377741bd2e7676843237d810..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PIFu Clothed Human Digitization -emoji: "🧍🏽‍♀️🧍🏻🧍🏽‍♂️\_" -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.2 -app_file: ./PIFu/spaces.py -pinned: false -python_version: 3.7.13 -duplicated_from: radames/PIFu-Clothed-Human-Digitization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/MohamedAlgebali/VideoQuERI/gpt4.py b/spaces/MohamedAlgebali/VideoQuERI/gpt4.py deleted file mode 100644 index c948060b763e2bc0ebc71886d5eff0fbc162fb81..0000000000000000000000000000000000000000 --- a/spaces/MohamedAlgebali/VideoQuERI/gpt4.py +++ /dev/null @@ -1,68 +0,0 @@ -from uuid import uuid4 -from re import findall -import tls_client - -class Completion: - async def create(self, prompt): - """ - Create a completion for the given prompt using the you.com API. - - Args: - prompt (str): The prompt for which completion is requested. - proxy (str, optional): The proxy to be used for the API request. Defaults to None. - - Returns: - str: The completion result as a string. - - Raises: - Exception: If unable to fetch the response or the required token from the response. 
- """ - client = tls_client.Session(client_identifier="firefox108") - client.headers = { - "authority": "you.com", - "accept": "text/event-stream", - "accept-language": "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3", - "cache-control": "no-cache", - "referer": "https://you.com/search?q=who+are+you&tbm=youchat", - "sec-ch-ua": '"Not_A Brand";v="99", "Google Chrome";v="109", "Chromium";v="109"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": '"Windows"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "cookie": f"safesearch_guest=Off; uuid_guest={str(uuid4())}", - - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0", - } - - # Add print statements to display headers and user agent - print("Headers:", client.headers) - print("User-Agent:", client.headers["user-agent"]) - - params = { - "q": prompt, - "page": 1, - "count": 10, - "safeSearch": "Off", - "onShoppingPage": False, - "mkt": "", - "responseFilter": "WebPages,Translations,TimeZone,Computation,RelatedSearches", - "domain": "youchat", - "queryTraceId": str(uuid4()), - "chat": [], - } - resp = client.get( - "https://you.com/api/streamingSearch", params=params, timeout_seconds=30 - ) - - print("Response Status Code:", resp.status_code) - print("Response Text:", resp.text) - - if "youChatToken" not in resp.text: - raise Exception("Unable to fetch response.") - return ( - "".join(findall(r"{\"youChatToken\": \"(.*?)\"}", resp.text)) - .replace("\\n", "\n") - .replace("\\\\", "\\") - .replace('\\"', '"') - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/adaptive_span/adaptive_span_model.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/adaptive_span/adaptive_span_model.py deleted file mode 100644 index d96c95b85dbcf29e9384cc6d8d9630d2489991b2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/adaptive_span/adaptive_span_model.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq.modules.layer_norm import LayerNorm - -from .adaptive_span_attention import AdaptiveSpan - -# Size notations: -# B = batch_size, H = d_model, M = block_size, L = attn_span - - -def _skew(X, pad_value): - """shift every row 1 step to right""" - # X = B x M x L - B, M, L = X.size() - X = F.pad(X, (0, M + 1), value=pad_value) # B x M x (L+M+1) - X = X.view(B, -1) # B x ML+MM+M - X = X[:, :-M] # B x ML+MM - X = X.view(B, M, M + L) # B x M x L+M - return X - - -def _unskew(X): - """reverse _skew operation""" - # X = B x M x L+M - B, M, L = X.size() - L -= M - X = X.view(B, -1) # B x ML+MM - X = F.pad(X, (0, M)) # B x ML+MM+M - X = X.view(B, M, M + L + 1) # B x M x L+M+1 - X = X[:, :, :L] # B x M x L - return X - - -class SeqAttention(nn.Module): - """Sequential self-attention layer. - Each token will attend to its previous fixed number of steps. - Note that attention doesn't include the current step itself. 
- """ - - def __init__(self, d_model, n_head, attn_span, dropout, adapt_span_layer, **kargs): - nn.Module.__init__(self) - self.dropout = nn.Dropout(dropout) - self.d_model = d_model # size of a single head - self.attn_span = attn_span - self.adaptive_span = AdaptiveSpan( - attn_span=attn_span, - n_head=n_head, - adapt_span_layer=adapt_span_layer, - **kargs - ) - - def forward(self, query, key, value, key_pe): - # query size = B x M x H - # key, value sizes = B x (M+L) x H - - key, value, key_pe = self.adaptive_span.trim_memory(query, key, value, key_pe) - - # compute attention from context - # B x M (dest) x (M+L) (src) - attn_cont = torch.matmul(query, key.transpose(-1, -2)) - attn_cont = _unskew(attn_cont) # B x M x L - - # compute the effect of position embedding - attn_pos = torch.matmul(query, key_pe) # B x M x L_pos - attn = attn_cont + attn_pos - - attn = attn / math.sqrt(self.d_model) # B x M X L_pos - - attn = F.softmax(attn.float(), dim=-1).type_as(attn) - - # trim attention lengths according to the learned span - attn = self.adaptive_span(attn) - - attn = self.dropout(attn) # B x M X L_pos - - attn_cont = _skew(attn, 0) # B x M X (L+M) - out = torch.matmul(attn_cont, value) # B x M x H - return out - - def get_cache_size(self): - return self.adaptive_span.get_cache_size() - - -class MultiHeadSeqAttention(nn.Module): - def __init__(self, d_model, n_head, **kargs): - nn.Module.__init__(self) - assert d_model % n_head == 0 - self.n_head = n_head - self.head_dim = d_model // n_head - self.attn = SeqAttention(d_model=self.head_dim, n_head=n_head, **kargs) - self.proj_query = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_query.weight) - self.proj_out = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_out.weight) - self.proj_val = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_val.weight) - self.proj_key = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_key.weight) - - def head_reshape(self, x): - K = self.n_head - D = self.head_dim - x = x.view(x.size()[:-1] + (K, D)) # B x (M+L) x K x D - x = x.transpose(1, 2).contiguous() # B x K x (M+L) x D - x = x.view(-1, x.size(-2), x.size(-1)) # B_K x (M+L) x D - return x - - def forward(self, query, key, value, key_pe): - B = query.size(0) - K = self.n_head - D = self.head_dim - M = query.size(1) - - query = self.proj_query(query) - query = self.head_reshape(query) - value = self.proj_val(value) - value = self.head_reshape(value) - key = self.proj_key(key) - key = self.head_reshape(key) - - out = self.attn(query, key, value, key_pe) # B_K x M x D - out = out.view(B, K, M, D) # B x K x M x D - out = out.transpose(1, 2).contiguous() # B x M x K x D - out = out.view(B, M, -1) # B x M x K_D - out = self.proj_out(out) - return out - - -class FeedForwardLayer(nn.Module): - def __init__(self, d_model, d_inner, dropout, **kargs): - nn.Module.__init__(self) - self.fc1 = nn.Linear(d_model, d_inner) - self.fc2 = nn.Linear(d_inner, d_model) - nn.init.xavier_uniform_(self.fc1.weight) - nn.init.xavier_uniform_(self.fc2.weight) - self.dropout = nn.Dropout(dropout) - - def forward(self, h): - h1 = F.relu(self.fc1(h)) - h1 = self.dropout(h1) - h2 = self.fc2(h1) - return h2 - - -class TransformerSeqLayer(nn.Module): - def __init__(self, d_model, **kargs): - nn.Module.__init__(self) - self.attn = MultiHeadSeqAttention(d_model=d_model, **kargs) - self.norm1 = LayerNorm(d_model) - self.ff = FeedForwardLayer(d_model=d_model, **kargs) - self.norm2 = 
LayerNorm(d_model) - - def forward(self, h, h_cache, key_pe): - # h = B x M x H - # h_cache = B x L x H - h_all = torch.cat([h_cache, h], dim=1) # B x (M+L) x H - attn_out = self.attn(h, h_all, h_all, key_pe) - h = self.norm1(h + attn_out) # B x M x H - if self.ff is not None: - ff_out = self.ff(h) - out = self.norm2(h + ff_out) # B x M x H - else: - out = h - return out - - def get_cache_size(self): - return self.attn.attn.get_cache_size() - - -class TransformerSeq(nn.Module): - def __init__( - self, - vocab_size, - d_model, - n_head, - n_layer, - attn_span, - emb_dropout, - aux_loss_scaler, - adapt_span_layer, - **kargs - ): - nn.Module.__init__(self) - # token embeddings - self.in_emb = nn.Embedding(vocab_size, d_model) - nn.init.normal_(self.in_emb.weight, mean=0, std=d_model ** -0.5) - self.out_emb = nn.Linear(d_model, vocab_size) - self.aux_loss_scaler = aux_loss_scaler - if emb_dropout > 0: - self.emb_dropout = nn.Dropout(emb_dropout) - else: - self.emb_dropout = None - # position embeddings - self.key_pe = nn.Parameter(torch.randn(1, d_model // n_head, attn_span)) - - self.layers = nn.ModuleList() - self.layers.extend( - TransformerSeqLayer( - d_model=d_model, - n_head=n_head, - attn_span=attn_span, - adapt_span_layer=adapt_span_layer, - **kargs - ) - for _ in range(n_layer) - ) - - def forward(self, x, h_cache, target=None): - # x size = B x M - block_size = x.size(1) - h = self.in_emb(x) # B x M x H - if self.emb_dropout is not None: - h = self.emb_dropout(h) - - h_cache_next = [] - for l, layer in enumerate(self.layers): - cache_size = layer.attn.attn.get_cache_size() - if cache_size > block_size: - h_cache_next_l = torch.cat( - [h_cache[l][:, -cache_size + block_size :, :], h], dim=1 - ).detach() - else: - h_cache_next_l = h[:, -cache_size:, :].detach() - h_cache_next.append(h_cache_next_l) - h = layer(h, h_cache[l], self.key_pe) # B x M x H - - if self.emb_dropout is not None: - h = self.emb_dropout(h) - - out = F.log_softmax(self.out_emb(h).float(), dim=-1).type_as(h) - dummy_loss = None - - return out, h_cache_next, dummy_loss - - def get_aux_loss(self): - loss = 0.0 - for layer in self.layers: - loss += layer.attn.attn.adaptive_span.get_loss() - return self.aux_loss_scaler * loss - - def get_current_max_span(self): - max_span = 0.0 - for layer in self.layers: - max_span = max( - max_span, layer.attn.attn.adaptive_span.get_current_max_span() - ) - return max_span - - def get_current_avg_span(self): - avg_span = 0.0 - for layer in self.layers: - avg_span += layer.attn.attn.adaptive_span.get_current_avg_span() - return avg_span / len(self.layers) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py deleted file mode 100644 index 2e0fc2bd29aedb0b477b7cc8e2c3b606acdd454a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py +++ /dev/null @@ -1,364 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Score raw text with a trained model. 
-""" - -from collections import namedtuple -import logging -from multiprocessing import Pool -import sys -import os -import random - -import numpy as np -import sacrebleu -import torch - -from fairseq import checkpoint_utils, options, utils - - -logger = logging.getLogger("fairseq_cli.drnmt_rerank") -logger.setLevel(logging.INFO) - -Batch = namedtuple("Batch", "ids src_tokens src_lengths") - - -pool_init_variables = {} - - -def init_loaded_scores(mt_scores, model_scores, hyp, ref): - global pool_init_variables - pool_init_variables["mt_scores"] = mt_scores - pool_init_variables["model_scores"] = model_scores - pool_init_variables["hyp"] = hyp - pool_init_variables["ref"] = ref - - -def parse_fairseq_gen(filename, task): - source = {} - hypos = {} - scores = {} - with open(filename, "r", encoding="utf-8") as f: - for line in f: - line = line.strip() - if line.startswith("S-"): # source - uid, text = line.split("\t", 1) - uid = int(uid[2:]) - source[uid] = text - elif line.startswith("D-"): # hypo - uid, score, text = line.split("\t", 2) - uid = int(uid[2:]) - if uid not in hypos: - hypos[uid] = [] - scores[uid] = [] - hypos[uid].append(text) - scores[uid].append(float(score)) - else: - continue - - source_out = [source[i] for i in range(len(hypos))] - hypos_out = [h for i in range(len(hypos)) for h in hypos[i]] - scores_out = [s for i in range(len(scores)) for s in scores[i]] - - return source_out, hypos_out, scores_out - - -def read_target(filename): - with open(filename, "r", encoding="utf-8") as f: - output = [line.strip() for line in f] - return output - - -def make_batches(args, src, hyp, task, max_positions, encode_fn): - assert len(src) * args.beam == len( - hyp - ), f"Expect {len(src) * args.beam} hypotheses for {len(src)} source sentences with beam size {args.beam}. Got {len(hyp)} hypotheses intead." 
- hyp_encode = [ - task.source_dictionary.encode_line(encode_fn(h), add_if_not_exist=False).long() - for h in hyp - ] - if task.cfg.include_src: - src_encode = [ - task.source_dictionary.encode_line( - encode_fn(s), add_if_not_exist=False - ).long() - for s in src - ] - tokens = [(src_encode[i // args.beam], h) for i, h in enumerate(hyp_encode)] - lengths = [(t1.numel(), t2.numel()) for t1, t2 in tokens] - else: - tokens = [(h,) for h in hyp_encode] - lengths = [(h.numel(),) for h in hyp_encode] - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - - for batch in itr: - yield Batch( - ids=batch["id"], - src_tokens=batch["net_input"]["src_tokens"], - src_lengths=batch["net_input"]["src_lengths"], - ) - - -def decode_rerank_scores(args): - if args.max_tokens is None and args.batch_size is None: - args.batch_size = 1 - - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - - # Load ensemble - logger.info("loading model(s) from {}".format(args.path)) - models, _model_args, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], arg_overrides=eval(args.model_overrides), - ) - - for model in models: - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - # Initialize generator - generator = task.build_generator(args) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(args) - bpe = task.build_bpe(args) - - def encode_fn(x): - if tokenizer is not None: - x = tokenizer.encode(x) - if bpe is not None: - x = bpe.encode(x) - return x - - max_positions = utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ) - - src, hyp, mt_scores = parse_fairseq_gen(args.in_text, task) - model_scores = {} - logger.info("decode reranker score") - for batch in make_batches(args, src, hyp, task, max_positions, encode_fn): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - if use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths}, - } - scores = task.inference_step(generator, models, sample) - - for id, sc in zip(batch.ids.tolist(), scores.tolist()): - model_scores[id] = sc[0] - - model_scores = [model_scores[i] for i in range(len(model_scores))] - - return src, hyp, mt_scores, model_scores - - -def get_score(mt_s, md_s, w1, lp, tgt_len): - return mt_s / (tgt_len ** lp) * w1 + md_s - - -def get_best_hyps(mt_scores, md_scores, hypos, fw_weight, lenpen, beam): - assert len(mt_scores) == len(md_scores) and len(mt_scores) == len(hypos) - hypo_scores = [] - best_hypos = [] - best_scores = [] - offset = 0 - for i in range(len(hypos)): - tgt_len = len(hypos[i].split()) - hypo_scores.append( - get_score(mt_scores[i], md_scores[i], fw_weight, lenpen, tgt_len) - ) - - if (i + 1) % beam == 0: - max_i = np.argmax(hypo_scores) - best_hypos.append(hypos[offset + max_i]) - best_scores.append(hypo_scores[max_i]) - hypo_scores = [] - offset += beam - return best_hypos, best_scores - - -def eval_metric(args, hypos, ref): - if args.metric == "bleu": - score = sacrebleu.corpus_bleu(hypos, [ref]).score - else: - score = sacrebleu.corpus_ter(hypos, [ref]).score - - return score - - -def score_target_hypo(args, fw_weight, lp): - mt_scores = pool_init_variables["mt_scores"] - 
model_scores = pool_init_variables["model_scores"] - hyp = pool_init_variables["hyp"] - ref = pool_init_variables["ref"] - best_hypos, _ = get_best_hyps( - mt_scores, model_scores, hyp, fw_weight, lp, args.beam - ) - rerank_eval = None - if ref: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"fw_weight {fw_weight}, lenpen {lp}, eval {rerank_eval}") - - return rerank_eval - - -def print_result(best_scores, best_hypos, output_file): - for i, (s, h) in enumerate(zip(best_scores, best_hypos)): - print(f"{i}\t{s}\t{h}", file=output_file) - - -def main(args): - utils.import_user_module(args) - - src, hyp, mt_scores, model_scores = decode_rerank_scores(args) - - assert ( - not args.tune or args.target_text is not None - ), "--target-text has to be set when tuning weights" - if args.target_text: - ref = read_target(args.target_text) - assert len(src) == len( - ref - ), f"different numbers of source and target sentences ({len(src)} vs. {len(ref)})" - - orig_best_hypos = [hyp[i] for i in range(0, len(hyp), args.beam)] - orig_eval = eval_metric(args, orig_best_hypos, ref) - - if args.tune: - logger.info("tune weights for reranking") - - random_params = np.array( - [ - [ - random.uniform( - args.lower_bound_fw_weight, args.upper_bound_fw_weight - ), - random.uniform(args.lower_bound_lenpen, args.upper_bound_lenpen), - ] - for k in range(args.num_trials) - ] - ) - - logger.info("launching pool") - with Pool( - 32, - initializer=init_loaded_scores, - initargs=(mt_scores, model_scores, hyp, ref), - ) as p: - rerank_scores = p.starmap( - score_target_hypo, - [ - (args, random_params[i][0], random_params[i][1],) - for i in range(args.num_trials) - ], - ) - if args.metric == "bleu": - best_index = np.argmax(rerank_scores) - else: - best_index = np.argmin(rerank_scores) - best_fw_weight = random_params[best_index][0] - best_lenpen = random_params[best_index][1] - else: - assert ( - args.lenpen is not None and args.fw_weight is not None - ), "--lenpen and --fw-weight should be set" - best_fw_weight, best_lenpen = args.fw_weight, args.lenpen - - best_hypos, best_scores = get_best_hyps( - mt_scores, model_scores, hyp, best_fw_weight, best_lenpen, args.beam - ) - - if args.results_path is not None: - os.makedirs(args.results_path, exist_ok=True) - output_path = os.path.join( - args.results_path, "generate-{}.txt".format(args.gen_subset), - ) - with open(output_path, "w", buffering=1, encoding="utf-8") as o: - print_result(best_scores, best_hypos, o) - else: - print_result(best_scores, best_hypos, sys.stdout) - - if args.target_text: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"before reranking, {args.metric.upper()}:", orig_eval) - print( - f"after reranking with fw_weight={best_fw_weight}, lenpen={best_lenpen}, {args.metric.upper()}:", - rerank_eval, - ) - - -def cli_main(): - parser = options.get_generation_parser(interactive=True) - - parser.add_argument( - "--in-text", - default=None, - required=True, - help="text from fairseq-interactive output, containing source sentences and hypotheses", - ) - parser.add_argument("--target-text", default=None, help="reference text") - parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu") - parser.add_argument( - "--tune", - action="store_true", - help="if set, tune weights on fw scores and lenpen instead of applying fixed weights for reranking", - ) - parser.add_argument( - "--lower-bound-fw-weight", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - 
"--upper-bound-fw-weight", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--lower-bound-lenpen", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - "--upper-bound-lenpen", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--fw-weight", type=float, default=None, help="weight on the fw model score" - ) - parser.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/README.md deleted file mode 100644 index cc610c0c9e936a5ae4659ceda691c6db6d387296..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/README.md +++ /dev/null @@ -1,24 +0,0 @@ - -# Install dependency -```bash -pip install -r requirement.txt -``` - -# Download the data set -```bash -export WORKDIR_ROOT= - -``` -The downloaded data will be at $WORKDIR_ROOT/ML50 - -# preprocess the data -Install SPM [here](https://github.com/google/sentencepiece) -```bash -export WORKDIR_ROOT= -export SPM_PATH= -``` -* $WORKDIR_ROOT/ML50/raw: extracted raw data -* $WORKDIR_ROOT/ML50/dedup: dedup data -* $WORKDIR_ROOT/ML50/clean: data with valid and test sentences removed from the dedup data - - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh deleted file mode 100644 index 59a6cbb12539cf62658f8344f7be7cecf2e3380f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -# prepare a new data directory of HMM word output - -. ./path.sh - -set -eu - -out_dir= # same as in train.sh -dec_lmparam= # LM hyperparameters (e.g., 7.0.0) - -dec_exp=tri3b # what HMM stage to decode (e.g., tri3b) -dec_suffix=word -dec_splits="train valid" -dec_data_dir=$out_dir/dec_data_word # where to write HMM output - -data_dir=$out_dir/data -wrd_data_dir=$out_dir/data_word - -for x in $dec_splits; do - mkdir -p $dec_data_dir/$x - cp $data_dir/$x/{feats.scp,cmvn.scp,utt2spk,spk2utt} $dec_data_dir/$x/ - - tra=$out_dir/exp/$dec_exp/decode${dec_suffix}_${x}/scoring/${dec_lmparam}.tra - cat $tra | utils/int2sym.pl -f 2- $data_dir/lang_word/words.txt | \ - sed 's:::g' | sed 's:::g' > $dec_data_dir/$x/text - utils/fix_data_dir.sh $dec_data_dir/$x - echo "WER on $x is" $(compute-wer ark:$wrd_data_dir/${x}_gt/text ark:$dec_data_dir/$x/text | cut -d" " -f2-) -done - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py deleted file mode 100644 index ac6340fa0744a08d2b527972dfc669573fb4e1c3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from argparse import Namespace - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim import FairseqOptimizer - - -class FairseqLRScheduler(object): - def __init__(self, cfg, optimizer): - super().__init__() - if optimizer is not None and not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.cfg = cfg - self.optimizer = optimizer - self.best = None - - @classmethod - def add_args(cls, parser): - """Add arguments to the parser for this LR scheduler.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {"best": self.best} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.best = state_dict["best"] - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - pass - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - if val_loss is not None: - if self.best is None: - self.best = val_loss - else: - self.best = min(self.best, val_loss) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.get_lr() - - def reinit(self, total_num_update, num_updates): - pass - - -class LegacyFairseqLRScheduler(FairseqLRScheduler): - def __init__(self, args: Namespace, optimizer): - if not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.args = args - self.optimizer = optimizer - self.best = None diff --git a/spaces/OFA-Sys/OFA-Text2Image_Generation/style.css b/spaces/OFA-Sys/OFA-Text2Image_Generation/style.css deleted file mode 100644 index 99b158ebfd6408132556e8accb5ae72f692264d2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Text2Image_Generation/style.css +++ /dev/null @@ -1,38 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 32px; - margin-top: 0; - text-align: center; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} - -iframe { - height:90vh; - width:90vw; - } - -img{ - max-width: 100%; -} \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py deleted file mode 100644 index 106f50247622deca688b223f1ad63275d5b65e58..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import soundfile as sf -import torch -import torchaudio.compliance.kaldi as kaldi - - -class LogMelFeatureReader: - """ - Wrapper class to run inference on HuBERT model. 
- Helps extract features for a given audio file. - """ - - def __init__(self, *args, **kwargs): - self.num_mel_bins = kwargs.get("num_mel_bins", 80) - self.frame_length = kwargs.get("frame_length", 25.0) - - def get_feats(self, file_path): - wav, sr = sf.read(file_path) - feats = torch.from_numpy(wav).float() - feats = kaldi.fbank( - feats.unsqueeze(0), - num_mel_bins=self.num_mel_bins, - frame_length=self.frame_length, - sample_frequency=sr, - ) - return feats diff --git a/spaces/Omnibus/Bark-simple/app.py b/spaces/Omnibus/Bark-simple/app.py deleted file mode 100644 index f190830ec7101dfee0e68bb588b95fc22e1a6a07..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/Bark-simple/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import gradio as gr -import torch -from pathlib import Path -from transformers import AutoProcessor, BarkModel -import scipy -from pytube import YouTube -from pydub import AudioSegment -from TTS.api import TTS -#import ffmpeg - - -# device = "cuda" if torch.cuda.is_available() else "cpu" -# model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device) -# model.enable_cpu_offload() - -device = "cpu" - - -processor = AutoProcessor.from_pretrained("suno/bark-small") -model = BarkModel.from_pretrained("suno/bark-small").to(device) -num_list = ["1","2","3","4","5","6","7","8","9","10"] -lang_list = ["en","de"] - -def run_bark(text, n, lang): - #history_prompt = [] - semantic_prompt=f"v2/{lang}_speaker_{int(n)-1}" - - #text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."], - inputs = processor(text=text, - voice_preset = semantic_prompt, - return_tensors="pt", - ) - - speech_values = model.generate(**inputs, do_sample=True) - sampling_rate = model.generation_config.sample_rate - - #sampling_rate = model.config.sample_rate - #sampling_rate = 24000 - scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze()) - return ("bark_out.wav") - -def custom_bark(inp): - speaker_wav=Path("Mid.mp3") - tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device) - tts.tts_to_file(inp, speaker_wav=speaker_wav, language="en", file_path="output.wav") - return ("output.wav") - -def load_video_yt(vid): - yt = YouTube(vid) - vid = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download(filename="tmp.mp4") - vid_aud = yt.streams.filter(only_audio=True)[0].download(filename="tmp_aud.mp4") - print (yt.length) - return vid, vid_aud, "tmp_aud.mp4" - -def trim_clip(clip, start_t, end_t): - clip = Path("tmp_aud.mp4") - #clip = "tmp_aud.mp3" - # Open an mp3 file - song = AudioSegment.from_file("tmp_aud.mp4", - format="mp4") - - # start and end time - #start_min = 0 - #start_sec = 10 - #end_min = 0 - #end_sec = 55 - start_min = int(start_t.split(":",1)[0]) - start_sec = int(start_t.split(":",1)[1]) - end_min = int(end_t.split(":",1)[0]) - end_sec = int(end_t.split(":",1)[1]) - # pydub does things in milliseconds, so convert time - start = ((start_min*60)+start_sec)*1000 - end = ((end_min*60)+end_sec)*1000 - #start = 0 - #end = 15*1000 - # song clip of 10 seconds from starting - first_10_seconds = song[start: end] - - # save file - first_10_seconds.export("Mid.mp3", format="mp3") - print("New Audio file is created and saved") - - return "Mid.mp3" - -with gr.Blocks() as app: - with gr.Column(): - in_text = gr.Textbox() - with gr.Tab("Default"): - with gr.Row(): - 
speaker_num = gr.Dropdown(label="Speaker Voice", choices=num_list,value="1") - speaker_lang = gr.Dropdown(label="Speaker Language", choices=lang_list,value="en") - go_btn = gr.Button() - with gr.Tab("Upload"): - with gr.Row(): - with gr.Column(): - in_aud_mic = gr.Audio(source='microphone') - in_aud_file = gr.Audio(source='upload', interactive = True) - aud_file = gr.File() - with gr.Column(): - in_aud_yt = gr.Textbox(label="YouTube URL") - load_yt_btn = gr.Button("Load URL") - with gr.Column(): - with gr.Row(): - start_time = gr.Textbox(label = "Start", value = "0:00", placeholder = "0:23") - end_time = gr.Textbox(label = "End", value = "0:01", placeholder = "1:12") - - trim_clip_btn = gr.Button("Trim Clip") - trim_aud = gr.Audio(source='upload', interactive = False) - alt_go_btn = gr.Button() - yt_vid = gr.Video(type = 'filepath') - #speaker_num = gr.Number(value=0) - - with gr.Column(): - out_audio = gr.Audio() - - go_btn.click(run_bark,[in_text, speaker_num, speaker_lang],out_audio) - load_yt_btn.click(load_video_yt, in_aud_yt, [yt_vid,in_aud_file,aud_file]) - trim_clip_btn.click(trim_clip,[aud_file, start_time, end_time],trim_aud) - alt_go_btn.click(custom_bark, in_text, out_audio) - -app.launch() \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py deleted file mode 100644 index d097326c3a6116e872cecf0d675b42958f359b14..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import torch -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - - -class _ROIAlignRotated(Function): - @staticmethod - def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio): - ctx.save_for_backward(roi) - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - ctx.input_shape = input.size() - output = torch.ops.detectron2.roi_align_rotated_forward( - input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - (rois,) = ctx.saved_tensors - output_size = ctx.output_size - spatial_scale = ctx.spatial_scale - sampling_ratio = ctx.sampling_ratio - bs, ch, h, w = ctx.input_shape - grad_input = torch.ops.detectron2.roi_align_rotated_backward( - grad_output, - rois, - spatial_scale, - output_size[0], - output_size[1], - bs, - ch, - h, - w, - sampling_ratio, - ) - return grad_input, None, None, None, None, None - - -roi_align_rotated = _ROIAlignRotated.apply - - -class ROIAlignRotated(nn.Module): - def __init__(self, output_size, spatial_scale, sampling_ratio): - """ - Args: - output_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sampling_ratio (int): number of inputs samples to take for each output - sample. 0 to take samples densely. - - Note: - ROIAlignRotated supports continuous coordinate by default: - Given a continuous coordinate c, its two neighboring pixel indices (in our - pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). 
For example, - c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled - from the underlying signal at continuous coordinates 0.5 and 1.5). - """ - super(ROIAlignRotated, self).__init__() - self.output_size = output_size - self.spatial_scale = spatial_scale - self.sampling_ratio = sampling_ratio - - def forward(self, input, rois): - """ - Args: - input: NCHW images - rois: Bx6 boxes. First column is the index into N. - The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees). - """ - assert rois.dim() == 2 and rois.size(1) == 6 - orig_dtype = input.dtype - if orig_dtype == torch.float16: - input = input.float() - rois = rois.float() - return roi_align_rotated( - input, rois, self.output_size, self.spatial_scale, self.sampling_ratio - ).to(dtype=orig_dtype) - - def __repr__(self): - tmpstr = self.__class__.__name__ + "(" - tmpstr += "output_size=" + str(self.output_size) - tmpstr += ", spatial_scale=" + str(self.spatial_scale) - tmpstr += ", sampling_ratio=" + str(self.sampling_ratio) - tmpstr += ")" - return tmpstr diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/test_time_augmentation.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/test_time_augmentation.py deleted file mode 100644 index a913cd4ae0ed8e22121380929ffcd51b9f3500a6..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/test_time_augmentation.py +++ /dev/null @@ -1,107 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/test_time_augmentation.py -# ------------------------------------------------------------------------------ - -import copy -import logging -from itertools import count - -import numpy as np -import torch -from fvcore.transforms import HFlipTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.data.detection_utils import read_image -from .datasetmapper_tta import DatasetMapperTTA -import torch.nn.functional as F - -__all__ = [ - "SemanticSegmentorWithTTA", -] - - -class SemanticSegmentorWithTTA(nn.Module): - """ - A SemanticSegmentor with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`SemanticSegmentor.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=1): - """ - Args: - cfg (CfgNode): - model (SemanticSegmentor): a SemanticSegmentor to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. 
- """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - self.cfg = cfg.clone() - self.num_classes = self.cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`SemanticSegmentor.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - processed_results = [] - for x in batched_inputs: - result = self._inference_one_image(_maybe_read_image(x)) - processed_results.append(result) - return processed_results - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - augmented_inputs, tfms = self._get_augmented_inputs(input) - - final_predictions = None - count_predictions = 0 - for input, tfm in zip(augmented_inputs, tfms): - count_predictions += 1 - with torch.no_grad(): - if final_predictions is None: - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - final_predictions = self.model([input])[0].pop("sem_seg").flip(dims=[2]) - else: - final_predictions = self.model([input])[0].pop("sem_seg") - else: - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - final_predictions += self.model([input])[0].pop("sem_seg").flip(dims=[2]) - else: - final_predictions += self.model([input])[0].pop("sem_seg") - - final_predictions = final_predictions / count_predictions - return {"sem_seg": final_predictions} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py deleted file mode 100644 index edb4c174c51e34c103737ba39bfc48bf831e561d..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DNLHead', - in_channels=2048, - in_index=3, - channels=512, - dropout_ratio=0.1, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - 
loss_decode=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
-    # model training and testing settings
-    train_cfg=dict(),
-    test_cfg=dict(mode='whole'))
diff --git a/spaces/PeepDaSlan9/AutoGPT/tests/integration/weaviate_memory_tests.py b/spaces/PeepDaSlan9/AutoGPT/tests/integration/weaviate_memory_tests.py
deleted file mode 100644
index 015eab05484f485aeb8ee035e92ad7811e9dddd4..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/tests/integration/weaviate_memory_tests.py
+++ /dev/null
@@ -1,117 +0,0 @@
-import os
-import sys
-import unittest
-from unittest import mock
-from uuid import uuid4
-
-from weaviate import Client
-from weaviate.util import get_valid_uuid
-
-from autogpt.config import Config
-from autogpt.memory.base import get_ada_embedding
-from autogpt.memory.weaviate import WeaviateMemory
-
-
-class TestWeaviateMemory(unittest.TestCase):
-    cfg = None
-    client = None
-    index = None
-
-    @classmethod
-    def setUpClass(cls):
-        # only create the connection to weaviate once
-        cls.cfg = Config()
-
-        if cls.cfg.use_weaviate_embedded:
-            from weaviate.embedded import EmbeddedOptions
-
-            cls.client = Client(
-                embedded_options=EmbeddedOptions(
-                    hostname=cls.cfg.weaviate_host,
-                    port=int(cls.cfg.weaviate_port),
-                    persistence_data_path=cls.cfg.weaviate_embedded_path,
-                )
-            )
-        else:
-            cls.client = Client(
-                f"{cls.cfg.weaviate_protocol}://{cls.cfg.weaviate_host}:{cls.cfg.weaviate_port}"
-            )
-
-        cls.index = WeaviateMemory.format_classname(cls.cfg.memory_index)
-
-    """
-    In order to run these tests you will need a local instance of
-    Weaviate running. Refer to https://weaviate.io/developers/weaviate/installation/docker-compose
-    for creating local instances using docker.
-    Alternatively in your .env file set the following environmental variables to run Weaviate embedded (see: https://weaviate.io/developers/weaviate/installation/embedded):
-
-    USE_WEAVIATE_EMBEDDED=True
-    WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate"
-    """
-
-    def setUp(self):
-        try:
-            self.client.schema.delete_class(self.index)
-        except:
-            pass
-
-        self.memory = WeaviateMemory(self.cfg)
-
-    def test_add(self):
-        doc = "You are a Titan name Thanos and you are looking for the Infinity Stones"
-        self.memory.add(doc)
-        result = self.client.query.get(self.index, ["raw_text"]).do()
-        actual = result["data"]["Get"][self.index]
-
-        self.assertEqual(len(actual), 1)
-        self.assertEqual(actual[0]["raw_text"], doc)
-
-    def test_get(self):
-        doc = "You are an Avenger and swore to defend the Galaxy from a menace called Thanos"
-
-        with self.client.batch as batch:
-            batch.add_data_object(
-                uuid=get_valid_uuid(uuid4()),
-                data_object={"raw_text": doc},
-                class_name=self.index,
-                vector=get_ada_embedding(doc),
-            )
-
-            batch.flush()
-
-        actual = self.memory.get(doc)
-
-        self.assertEqual(len(actual), 1)
-        self.assertEqual(actual[0], doc)
-
-    def test_get_stats(self):
-        docs = [
-            "You are now about to count the number of docs in this index",
-            "And then you about to find out if you can count correctly",
-        ]
-
-        [self.memory.add(doc) for doc in docs]
-
-        stats = self.memory.get_stats()
-
-        self.assertTrue(stats)
-        self.assertTrue("count" in stats)
-        self.assertEqual(stats["count"], 2)
-
-    def test_clear(self):
-        docs = [
-            "Shame this is the last test for this class",
-            "Testing is fun when someone else is doing it",
-        ]
-
-        [self.memory.add(doc) for doc in docs]
-
-        self.assertEqual(self.memory.get_stats()["count"], 2)
-
-        self.memory.clear()
-
-
self.assertEqual(self.memory.get_stats()["count"], 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/PeepDaSlan9/AutoGPT/tests/test_image_gen.py b/spaces/PeepDaSlan9/AutoGPT/tests/test_image_gen.py deleted file mode 100644 index 19c57e427d5c1b84aa7f72925733d0056ddf5268..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/tests/test_image_gen.py +++ /dev/null @@ -1,102 +0,0 @@ -import hashlib -import os -import unittest - -from PIL import Image - -from autogpt.commands.image_gen import generate_image, generate_image_with_sd_webui -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - - -def lst(txt): - return txt.split(":")[1].strip() - - -@unittest.skipIf(os.getenv("CI"), "Skipping image generation tests") -class TestImageGen(unittest.TestCase): - def setUp(self): - self.config = Config() - - def test_dalle(self): - self.config.image_provider = "dalle" - - # Test using size 256 - result = lst(generate_image("astronaut riding a horse", 256)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (256, 256)) - image_path.unlink() - - # Test using size 512 - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - def test_huggingface(self): - self.config.image_provider = "huggingface" - - # Test usin SD 1.4 model and size 512 - self.config.huggingface_image_model = "CompVis/stable-diffusion-v1-4" - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - # Test using SD 2.1 768 model and size 768 - self.config.huggingface_image_model = "stabilityai/stable-diffusion-2-1" - result = lst(generate_image("astronaut riding a horse", 768)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (768, 768)) - image_path.unlink() - - def test_sd_webui(self): - self.config.image_provider = "sd_webui" - return - - # Test using size 128 - result = lst(generate_image_with_sd_webui("astronaut riding a horse", 128)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (128, 128)) - image_path.unlink() - - # Test using size 64 and negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", - negative_prompt="horse", - size=64, - extra={"seed": 123}, - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - neg_image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - # Same test as above but without the negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", image_size=64, size=1, extra={"seed": 123} - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - self.assertNotEqual(image_hash, neg_image_hash) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py 
b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py deleted file mode 100644 index 6d12508469c6c8fa1884debece44c58d158cb6fa..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py +++ /dev/null @@ -1,268 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. 
This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -import torch.nn.functional as F -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['fused_bias_leakyrelu']) - - -class FusedBiasLeakyReLUFunctionBackward(Function): - """Calculate second order deviation. - - This function is to compute the second order deviation for the fused leaky - relu operation. - """ - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = ext_module.fused_bias_leakyrelu( - grad_output, - empty, - out, - act=3, - grad=1, - alpha=negative_slope, - scale=scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - - # The second order deviation, in fact, contains two parts, while the - # the first part is zero. Thus, we direct consider the second part - # which is similar with the first order deviation in implementation. 
- gradgrad_out = ext_module.fused_bias_leakyrelu( - gradgrad_input, - gradgrad_bias.to(out.dtype), - out, - act=3, - grad=1, - alpha=ctx.negative_slope, - scale=ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedBiasLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - out = ext_module.fused_bias_leakyrelu( - input, - bias, - empty, - act=3, - grad=0, - alpha=negative_slope, - scale=scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedBiasLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedBiasLeakyReLU(nn.Module): - """Fused bias leaky ReLU. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - TODO: Implement the CPU version. - - Args: - channel (int): The channel number of the feature map. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - """ - - def __init__(self, num_channels, negative_slope=0.2, scale=2**0.5): - super(FusedBiasLeakyReLU, self).__init__() - - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_bias_leakyrelu(input, self.bias, self.negative_slope, - self.scale) - - -def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5): - """Fused bias leaky ReLU function. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - Args: - input (torch.Tensor): Input feature map. - bias (nn.Parameter): The bias from convolution operation. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - - Returns: - torch.Tensor: Feature map after non-linear activation. 
- """ - - if not input.is_cuda: - return bias_leakyrelu_ref(input, bias, negative_slope, scale) - - return FusedBiasLeakyReLUFunction.apply(input, bias.to(input.dtype), - negative_slope, scale) - - -def bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2**0.5): - - if bias is not None: - assert bias.ndim == 1 - assert bias.shape[0] == x.shape[1] - x = x + bias.reshape([-1 if i == 1 else 1 for i in range(x.ndim)]) - - x = F.leaky_relu(x, negative_slope) - if scale != 1: - x = x * scale - - return x diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/xtransformers.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/models/xtransformers.py deleted file mode 100644 index 8be2df455c46bf8c89efb0d5fdbb704a9fb622f6..0000000000000000000000000000000000000000 --- a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/xtransformers.py +++ /dev/null @@ -1,1248 +0,0 @@ -import math -from collections import namedtuple -from functools import partial -from inspect import isfunction - -import torch -import torch.nn.functional as F -from einops import rearrange, repeat -from torch import nn, einsum - -DEFAULT_DIM_HEAD = 64 - -Intermediates = namedtuple('Intermediates', [ - 'pre_softmax_attn', - 'post_softmax_attn' -]) - -LayerIntermediates = namedtuple('Intermediates', [ - 'hiddens', - 'attn_intermediates', - 'past_key_values', -]) - - -# helpers - -def exists(val): - return val is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def cast_tuple(val, depth): - return val if isinstance(val, tuple) else (val,) * depth - - -class always(): - def __init__(self, val): - self.val = val - - def __call__(self, *args, **kwargs): - return self.val - - -class not_equals(): - def __init__(self, val): - self.val = val - - def __call__(self, x, *args, **kwargs): - return x != self.val - - -class equals(): - def __init__(self, val): - self.val = val - - def __call__(self, x, *args, **kwargs): - return x == self.val - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -# init helpers - -def init_zero_(layer): - nn.init.constant_(layer.weight, 0.) - if exists(layer.bias): - nn.init.constant_(layer.bias, 0.) 
- - -# keyword argument helpers - -def pick_and_pop(keys, d): - values = list(map(lambda key: d.pop(key), keys)) - return dict(zip(keys, values)) - - -def group_dict_by_key(cond, d): - return_val = [dict(), dict()] - for key in d.keys(): - match = bool(cond(key)) - ind = int(not match) - return_val[ind][key] = d[key] - return (*return_val,) - - -def string_begins_with(prefix, str): - return str.startswith(prefix) - - -def group_by_key_prefix(prefix, d): - return group_dict_by_key(partial(string_begins_with, prefix), d) - - -def groupby_prefix_and_trim(prefix, d): - kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d) - kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items()))) - return kwargs_without_prefix, kwargs - - -# activations - -class ReluSquared(nn.Module): - def forward(self, x): - return F.relu(x) ** 2 - - -# positional embeddings - -class AbsolutePositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - self.scale = dim ** -0.5 - self.emb = nn.Embedding(max_seq_len, dim) - - def forward(self, x): - n = torch.arange(x.shape[1], device=x.device) - pos_emb = self.emb(n) - pos_emb = rearrange(pos_emb, 'n d -> () n d') - return pos_emb * self.scale - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, x, seq_dim=1, offset=0): - t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset - sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - return rearrange(emb, 'n d -> () n d') - - -class RelativePositionBias(nn.Module): - def __init__(self, scale, causal=False, num_buckets=32, max_distance=128, heads=8): - super().__init__() - self.scale = scale - self.causal = causal - self.num_buckets = num_buckets - self.max_distance = max_distance - self.relative_attention_bias = nn.Embedding(num_buckets, heads) - - @staticmethod - def _relative_position_bucket(relative_position, causal=True, num_buckets=32, max_distance=128): - ret = 0 - n = -relative_position - if not causal: - num_buckets //= 2 - ret += (n < 0).long() * num_buckets - n = torch.abs(n) - else: - n = torch.max(n, torch.zeros_like(n)) - - max_exact = num_buckets // 2 - is_small = n < max_exact - - val_if_large = max_exact + ( - torch.log(n.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact) - ).long() - val_if_large = torch.min(val_if_large, torch.full_like(val_if_large, num_buckets - 1)) - - ret += torch.where(is_small, n, val_if_large) - return ret - - def forward(self, qk_dots): - i, j, device = *qk_dots.shape[-2:], qk_dots.device - q_pos = torch.arange(i, dtype=torch.long, device=device) - k_pos = torch.arange(j, dtype=torch.long, device=device) - rel_pos = k_pos[None, :] - q_pos[:, None] - rp_bucket = self._relative_position_bucket(rel_pos, causal=self.causal, num_buckets=self.num_buckets, - max_distance=self.max_distance) - values = self.relative_attention_bias(rp_bucket) - bias = rearrange(values, 'i j h -> () h i j') - return qk_dots + (bias * self.scale) - - -class AlibiPositionalBias(nn.Module): - def __init__(self, heads, **kwargs): - super().__init__() - self.heads = heads - slopes = torch.Tensor(self._get_slopes(heads)) - slopes = rearrange(slopes, 'h -> () h () ()') - 
self.register_buffer('slopes', slopes, persistent=False) - self.register_buffer('bias', None, persistent=False) - - @staticmethod - def _get_slopes(heads): - def get_slopes_power_of_2(n): - start = (2 ** (-2 ** -(math.log2(n) - 3))) - ratio = start - return [start * ratio ** i for i in range(n)] - - if math.log2(heads).is_integer(): - return get_slopes_power_of_2(heads) - - closest_power_of_2 = 2 ** math.floor(math.log2(heads)) - return get_slopes_power_of_2(closest_power_of_2) + get_slopes_power_of_2(2 * closest_power_of_2)[0::2][ - :heads - closest_power_of_2] - - def forward(self, qk_dots): - h, i, j, device = *qk_dots.shape[-3:], qk_dots.device - - if exists(self.bias) and self.bias.shape[-1] >= j: - return qk_dots + self.bias[..., :j] - - bias = torch.arange(j, device=device) - bias = rearrange(bias, 'j -> () () () j') - bias = bias * self.slopes - - num_heads_unalibied = h - bias.shape[1] - bias = F.pad(bias, (0, 0, 0, 0, 0, num_heads_unalibied)) - - self.register_buffer('bias', bias, persistent=False) - return qk_dots + self.bias - - -class LearnedAlibiPositionalBias(AlibiPositionalBias): - def __init__(self, heads, bidirectional=False): - super().__init__(heads) - los_slopes = torch.log(self.slopes) - self.learned_logslopes = nn.Parameter(los_slopes) - - self.bidirectional = bidirectional - if self.bidirectional: - self.learned_logslopes_future = nn.Parameter(los_slopes) - - def forward(self, qk_dots): - h, i, j, device = *qk_dots.shape[-3:], qk_dots.device - - def get_slopes(param): - return F.pad(param.exp(), (0, 0, 0, 0, 0, h - param.shape[1])) - - if exists(self.bias) and self.bias.shape[-1] >= j: - bias = self.bias[..., :i, :j] - else: - i_arange = torch.arange(i, device=device) - j_arange = torch.arange(j, device=device) - bias = rearrange(j_arange, 'j -> 1 1 1 j') - rearrange(i_arange, 'i -> 1 1 i 1') - self.register_buffer('bias', bias, persistent=False) - - if self.bidirectional: - past_slopes = get_slopes(self.learned_logslopes) - future_slopes = get_slopes(self.learned_logslopes_future) - bias = torch.tril(bias * past_slopes) + torch.triu(bias * future_slopes) - else: - slopes = get_slopes(self.learned_logslopes) - bias = bias * slopes - - return qk_dots + bias - - -class RotaryEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, max_seq_len, device): - t = torch.arange(max_seq_len, device=device).type_as(self.inv_freq) - freqs = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((freqs, freqs), dim=-1) - return rearrange(emb, 'n d -> () () n d') - - -def rotate_half(x): - x = rearrange(x, '... (j d) -> ... 
j d', j=2) - x1, x2 = x.unbind(dim=-2) - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(t, freqs): - seq_len = t.shape[-2] - freqs = freqs[:, :, -seq_len:] - return (t * freqs.cos()) + (rotate_half(t) * freqs.sin()) - - -# norms - -class Scale(nn.Module): - def __init__(self, value, fn): - super().__init__() - self.value = value - self.fn = fn - - def forward(self, x, **kwargs): - out = self.fn(x, **kwargs) - scale_fn = lambda t: t * self.value - - if not isinstance(out, tuple): - return scale_fn(out) - - return (scale_fn(out[0]), *out[1:]) - - -class Rezero(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - self.g = nn.Parameter(torch.zeros(1)) - - def forward(self, x, **kwargs): - out = self.fn(x, **kwargs) - rezero_fn = lambda t: t * self.g - - if not isinstance(out, tuple): - return rezero_fn(out) - - return (rezero_fn(out[0]), *out[1:]) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(1)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSScaleShiftNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - self.scale_shift_process = nn.Linear(dim * 2, dim * 2) - - def forward(self, x, norm_scale_shift_inp): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - norm = x / norm.clamp(min=self.eps) * self.g - - ss_emb = self.scale_shift_process(norm_scale_shift_inp) - scale, shift = torch.chunk(ss_emb, 2, dim=1) - h = norm * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1) - return h - - -# residual and residual gates - -class Residual(nn.Module): - def __init__(self, dim, scale_residual=False): - super().__init__() - self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None - - def forward(self, x, residual): - if exists(self.residual_scale): - residual = residual * self.residual_scale - - return x + residual - - -class GRUGating(nn.Module): - def __init__(self, dim, scale_residual=False): - super().__init__() - self.gru = nn.GRUCell(dim, dim) - self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None - - def forward(self, x, residual): - if exists(self.residual_scale): - residual = residual * self.residual_scale - - gated_output = self.gru( - rearrange(x, 'b n d -> (b n) d'), - rearrange(residual, 'b n d -> (b n) d') - ) - - return gated_output.reshape_as(x) - - -# token shifting - -def shift(t, amount, mask=None): - if amount == 0: - return t - - if exists(mask): - t = t.masked_fill(~mask[..., None], 0.) - - return F.pad(t, (0, 0, amount, -amount), value=0.) 
- - -class ShiftTokens(nn.Module): - def __init__(self, shifts, fn): - super().__init__() - self.fn = fn - self.shifts = tuple(shifts) - - def forward(self, x, **kwargs): - mask = kwargs.get('mask', None) - shifts = self.shifts - segments = len(shifts) - feats_per_shift = x.shape[-1] // segments - splitted = x.split(feats_per_shift, dim=-1) - segments_to_shift, rest = splitted[:segments], splitted[segments:] - segments_to_shift = list(map(lambda args: shift(*args, mask=mask), zip(segments_to_shift, shifts))) - x = torch.cat((*segments_to_shift, *rest), dim=-1) - return self.fn(x, **kwargs) - - -# feedforward - -class GLU(nn.Module): - def __init__(self, dim_in, dim_out, activation): - super().__init__() - self.act = activation - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * self.act(gate) - - -class FeedForward(nn.Module): - def __init__( - self, - dim, - dim_out=None, - mult=4, - glu=False, - relu_squared=False, - post_act_ln=False, - dropout=0., - zero_init_output=False - ): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - activation = ReluSquared() if relu_squared else nn.GELU() - - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - activation - ) if not glu else GLU(dim, inner_dim, activation) - - self.net = nn.Sequential( - project_in, - nn.LayerNorm(inner_dim) if post_act_ln else nn.Identity(), - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - # init last linear layer to 0 - if zero_init_output: - init_zero_(self.net[-1]) - - def forward(self, x): - return self.net(x) - - -# attention. - -class Attention(nn.Module): - def __init__( - self, - dim, - dim_head=DEFAULT_DIM_HEAD, - heads=8, - causal=False, - talking_heads=False, - head_scale=False, - collab_heads=False, - collab_compression=.3, - sparse_topk=None, - use_entmax15=False, - num_mem_kv=0, - dropout=0., - on_attn=False, - gate_values=False, - zero_init_output=False, - max_attend_past=None, - qk_norm=False, - scale_init_value=None, - rel_pos_bias=False, - rel_pos_num_buckets=32, - rel_pos_max_distance=128, - ): - super().__init__() - self.scale = dim_head ** -0.5 - - self.heads = heads - self.causal = causal - self.max_attend_past = max_attend_past - - qk_dim = v_dim = dim_head * heads - - # collaborative heads - self.collab_heads = collab_heads - if self.collab_heads: - qk_dim = int(collab_compression * qk_dim) - self.collab_mixing = nn.Parameter(torch.randn(heads, qk_dim)) - - self.to_q = nn.Linear(dim, qk_dim, bias=False) - self.to_k = nn.Linear(dim, qk_dim, bias=False) - self.to_v = nn.Linear(dim, v_dim, bias=False) - - self.dropout = nn.Dropout(dropout) - - # add GLU gating for aggregated values, from alphafold2 - self.to_v_gate = None - if gate_values: - self.to_v_gate = nn.Linear(dim, v_dim) - nn.init.constant_(self.to_v_gate.weight, 0) - nn.init.constant_(self.to_v_gate.bias, 1) - - # cosine sim attention - self.qk_norm = qk_norm - if qk_norm: - scale_init_value = default(scale_init_value, - -3) # if not provided, initialize as though it were sequence length of 1024 - self.scale = nn.Parameter(torch.ones(1, heads, 1, 1) * scale_init_value) - - # talking heads - self.talking_heads = talking_heads - if talking_heads: - self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - - # head scaling - self.head_scale = head_scale - if head_scale: - self.head_scale_params = nn.Parameter(torch.ones(1, heads, 1, 1)) - - # 
explicit topk sparse attention - self.sparse_topk = sparse_topk - - # entmax - self.attn_fn = F.softmax - - # add memory key / values - self.num_mem_kv = num_mem_kv - if num_mem_kv > 0: - self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - - # attention on attention - self.attn_on_attn = on_attn - self.to_out = nn.Sequential(nn.Linear(v_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(v_dim, dim) - - self.rel_pos_bias = rel_pos_bias - if rel_pos_bias: - assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance' - self.rel_pos = RelativePositionBias(scale=dim_head ** 0.5, causal=causal, heads=heads, - num_buckets=rel_pos_num_buckets, max_distance=rel_pos_max_distance) - - # init output projection 0 - if zero_init_output: - init_zero_(self.to_out) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - attn_mask=None, - sinusoidal_emb=None, - rotary_pos_emb=None, - prev_attn=None, - mem=None, - layer_past=None, - ): - b, n, _, h, talking_heads, collab_heads, head_scale, scale, device, has_context = *x.shape, self.heads, self.talking_heads, self.collab_heads, self.head_scale, self.scale, x.device, exists( - context) - kv_input = default(context, x) - - q_input = x - k_input = kv_input - v_input = kv_input - - if exists(mem): - k_input = torch.cat((mem, k_input), dim=-2) - v_input = torch.cat((mem, v_input), dim=-2) - - if exists(sinusoidal_emb): - # in shortformer, the query would start at a position offset depending on the past cached memory - offset = k_input.shape[-2] - q_input.shape[-2] - q_input = q_input + sinusoidal_emb(q_input, offset=offset) - k_input = k_input + sinusoidal_emb(k_input) - - q = self.to_q(q_input) - k = self.to_k(k_input) - v = self.to_v(v_input) - - if not collab_heads: - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v)) - else: - q = einsum('b i d, h d -> b h i d', q, self.collab_mixing) - k = rearrange(k, 'b n d -> b () n d') - v = rearrange(v, 'b n (h d) -> b h n d', h=h) - - if layer_past is not None: - past_key, past_value = layer_past - k = torch.cat([past_key, k], dim=-2) - v = torch.cat([past_value, v], dim=-2) - k_cache = k - v_cache = v - - if exists(rotary_pos_emb) and not has_context: - l = rotary_pos_emb.shape[-1] - (ql, qr), (kl, kr), (vl, vr) = map(lambda t: (t[..., :l], t[..., l:]), (q, k, v)) - ql, kl, vl = map(lambda t: apply_rotary_pos_emb(t, rotary_pos_emb), (ql, kl, vl)) - q, k, v = map(lambda t: torch.cat(t, dim=-1), ((ql, qr), (kl, kr), (vl, vr))) - - input_mask = None - if any(map(exists, (mask, context_mask))): - q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool()) - k_mask = q_mask if not exists(context) else context_mask - k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool()) - q_mask = rearrange(q_mask, 'b i -> b () i ()') - k_mask = rearrange(k_mask, 'b j -> b () () j') - input_mask = q_mask * k_mask - - if self.num_mem_kv > 0: - mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v)) - k = torch.cat((mem_k, k), dim=-2) - v = torch.cat((mem_v, v), dim=-2) - if exists(input_mask): - input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True) - - if collab_heads: - k = k.expand(-1, h, -1, -1) - - if self.qk_norm: - q, k = map(l2norm, (q, k)) - scale = 1 / (self.scale.exp().clamp(min=1e-2)) - - dots = einsum('b h i d, b h j d -> 
b h i j', q, k) * scale - mask_value = max_neg_value(dots) - - if exists(prev_attn): - dots = dots + prev_attn - - pre_softmax_attn = dots.clone() - - if talking_heads: - dots = einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous() - - if self.rel_pos_bias: - dots = self.rel_pos(dots) - - if exists(input_mask): - dots.masked_fill_(~input_mask, mask_value) - del input_mask - - if exists(attn_mask): - assert 2 <= attn_mask.ndim <= 4, 'attention mask must have greater than 2 dimensions but less than or equal to 4' - if attn_mask.ndim == 2: - attn_mask = rearrange(attn_mask, 'i j -> () () i j') - elif attn_mask.ndim == 3: - attn_mask = rearrange(attn_mask, 'h i j -> () h i j') - dots.masked_fill_(~attn_mask, mask_value) - - if exists(self.max_attend_past): - i, j = dots.shape[-2:] - range_q = torch.arange(j - i, j, device=device) - range_k = torch.arange(j, device=device) - dist = rearrange(range_q, 'i -> () () i ()') - rearrange(range_k, 'j -> () () () j') - mask = dist > self.max_attend_past - dots.masked_fill_(mask, mask_value) - del mask - - if self.causal: - i, j = dots.shape[-2:] - r = torch.arange(i, device=device) - mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j') - mask = F.pad(mask, (j - i, 0), value=False) - dots.masked_fill_(mask, mask_value) - del mask - - if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]: - top, _ = dots.topk(self.sparse_topk, dim=-1) - vk = top[..., -1].unsqueeze(-1).expand_as(dots) - mask = dots < vk - dots.masked_fill_(mask, mask_value) - del mask - - attn = self.attn_fn(dots, dim=-1) - post_softmax_attn = attn.clone() - - attn = self.dropout(attn) - - if talking_heads: - attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous() - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - - if head_scale: - out = out * self.head_scale_params - - out = rearrange(out, 'b h n d -> b n (h d)') - - if exists(self.to_v_gate): - gates = self.to_v_gate(x) - out = out * gates.sigmoid() - - intermediates = Intermediates( - pre_softmax_attn=pre_softmax_attn, - post_softmax_attn=post_softmax_attn - ) - - return self.to_out(out), intermediates, k_cache, v_cache - - -class AttentionLayers(nn.Module): - def __init__( - self, - dim, - depth, - heads=8, - causal=False, - cross_attend=False, - only_cross=False, - use_scalenorm=False, - use_rms_scaleshift_norm=False, - use_rmsnorm=False, - use_rezero=False, - alibi_pos_bias=False, - alibi_num_heads=None, - alibi_learned=False, - position_infused_attn=False, - rotary_pos_emb=False, - rotary_emb_dim=None, - custom_layers=None, - sandwich_coef=None, - par_ratio=None, - residual_attn=False, - cross_residual_attn=False, - macaron=False, - pre_norm=True, - gate_residual=False, - scale_residual=False, - shift_tokens=0, - sandwich_norm=False, - use_qk_norm_attn=False, - qk_norm_attn_seq_len=None, - zero_init_branch_output=False, - **kwargs - ): - super().__init__() - ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs) - attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs) - - dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD) - - self.dim = dim - self.depth = depth - self.layers = nn.ModuleList([]) - self.causal = causal - - rel_pos_bias = 'rel_pos_bias' in attn_kwargs - self.has_pos_emb = position_infused_attn or rel_pos_bias or rotary_pos_emb - self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None - - rotary_emb_dim = max(default(rotary_emb_dim, dim_head // 2), 32) - self.rotary_pos_emb = 
RotaryEmbedding(rotary_emb_dim) if rotary_pos_emb else None - - assert not ( - alibi_pos_bias and rel_pos_bias), 'you can only choose Alibi positional bias or T5 relative positional bias, not both' - - if alibi_pos_bias: - alibi_num_heads = default(alibi_num_heads, heads) - assert alibi_num_heads <= heads, 'number of ALiBi heads must be less than the total number of heads' - alibi_pos_klass = LearnedAlibiPositionalBias if alibi_learned or not causal else AlibiPositionalBias - self.rel_pos = alibi_pos_klass(heads=alibi_num_heads, bidirectional=not causal) - else: - self.rel_pos = None - - assert not (not pre_norm and sandwich_norm), 'sandwich norm cannot be used when not using prenorm' - self.pre_norm = pre_norm - self.sandwich_norm = sandwich_norm - - self.residual_attn = residual_attn - self.cross_residual_attn = cross_residual_attn - self.cross_attend = cross_attend - - norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm - norm_class = RMSNorm if use_rmsnorm else norm_class - norm_class = RMSScaleShiftNorm if use_rms_scaleshift_norm else norm_class - norm_fn = partial(norm_class, dim) - - norm_fn = nn.Identity if use_rezero else norm_fn - branch_fn = Rezero if use_rezero else None - - if cross_attend and not only_cross: - default_block = ('a', 'c', 'f') - elif cross_attend and only_cross: - default_block = ('c', 'f') - else: - default_block = ('a', 'f') - - if macaron: - default_block = ('f',) + default_block - - # qk normalization - - if use_qk_norm_attn: - attn_scale_init_value = -math.log(math.log2(qk_norm_attn_seq_len ** 2 - qk_norm_attn_seq_len)) if exists( - qk_norm_attn_seq_len) else None - attn_kwargs = {**attn_kwargs, 'qk_norm': True, 'scale_init_value': attn_scale_init_value} - - # zero init - - if zero_init_branch_output: - attn_kwargs = {**attn_kwargs, 'zero_init_output': True} - ff_kwargs = {**ff_kwargs, 'zero_init_output': True} - - # calculate layer block order - - if exists(custom_layers): - layer_types = custom_layers - elif exists(par_ratio): - par_depth = depth * len(default_block) - assert 1 < par_ratio <= par_depth, 'par ratio out of range' - default_block = tuple(filter(not_equals('f'), default_block)) - par_attn = par_depth // par_ratio - depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper - par_width = (depth_cut + depth_cut // par_attn) // par_attn - assert len(default_block) <= par_width, 'default block is too large for par_ratio' - par_block = default_block + ('f',) * (par_width - len(default_block)) - par_head = par_block * par_attn - layer_types = par_head + ('f',) * (par_depth - len(par_head)) - elif exists(sandwich_coef): - assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be less than the depth' - layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef - else: - layer_types = default_block * depth - - self.layer_types = layer_types - self.num_attn_layers = len(list(filter(equals('a'), layer_types))) - - # calculate token shifting - - shift_tokens = cast_tuple(shift_tokens, len(layer_types)) - - # iterate and construct layers - - for ind, (layer_type, layer_shift_tokens) in enumerate(zip(self.layer_types, shift_tokens)): - is_last_layer = ind == (len(self.layer_types) - 1) - - if layer_type == 'a': - layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs) - elif layer_type == 'c': - layer = Attention(dim, heads=heads, **attn_kwargs) - elif layer_type == 'f': - layer = FeedForward(dim, **ff_kwargs) - layer = layer if not macaron 
else Scale(0.5, layer) - else: - raise Exception(f'invalid layer type {layer_type}') - - if layer_shift_tokens > 0: - shift_range_upper = layer_shift_tokens + 1 - shift_range_lower = -layer_shift_tokens if not causal else 0 - layer = ShiftTokens(range(shift_range_lower, shift_range_upper), layer) - - if exists(branch_fn): - layer = branch_fn(layer) - - residual_fn = GRUGating if gate_residual else Residual - residual = residual_fn(dim, scale_residual=scale_residual) - - layer_uses_qk_norm = use_qk_norm_attn and layer_type in ('a', 'c') - - pre_branch_norm = norm_fn() if pre_norm and not layer_uses_qk_norm else None - post_branch_norm = norm_fn() if sandwich_norm or layer_uses_qk_norm else None - post_main_norm = norm_fn() if not pre_norm and not is_last_layer else None - - norms = nn.ModuleList([ - pre_branch_norm, - post_branch_norm, - post_main_norm - ]) - - self.layers.append(nn.ModuleList([ - norms, - layer, - residual - ])) - - def forward( - self, - x, - context=None, - full_context=None, # for passing a list of hidden states from an encoder - mask=None, - context_mask=None, - attn_mask=None, - mems=None, - return_hiddens=False, - norm_scale_shift_inp=None, - past_key_values=None, - expected_seq_len=None, - ): - - assert not (self.cross_attend ^ (exists(context) or exists( - full_context))), 'context must be passed in if cross_attend is set to True' - assert context is None or full_context is None, 'only one of full_context or context can be provided' - - hiddens = [] - intermediates = [] - prev_attn = None - prev_cross_attn = None - - mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers - norm_args = {} - if exists(norm_scale_shift_inp): - norm_args['norm_scale_shift_inp'] = norm_scale_shift_inp - - rotary_pos_emb = None - if exists(self.rotary_pos_emb): - if not self.training and self.causal: - assert expected_seq_len is not None, "To decode a transformer with rotary embeddings, you must specify an `expected_seq_len`" - elif expected_seq_len is None: - expected_seq_len = 0 - seq_len = x.shape[1] - if past_key_values is not None: - seq_len += past_key_values[0][0].shape[-2] - max_rotary_emb_length = max(list(map(lambda m: (m.shape[1] if exists(m) else 0) + seq_len, mems)) + [expected_seq_len]) - rotary_pos_emb = self.rotary_pos_emb(max_rotary_emb_length, x.device) - - present_key_values = [] - cross_attn_count = 0 - for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)): - if layer_type == 'a': - layer_mem = mems.pop(0) if mems else None - - residual = x - - pre_branch_norm, post_branch_norm, post_main_norm = norm - - if exists(pre_branch_norm): - x = pre_branch_norm(x, **norm_args) - - if layer_type == 'a' or layer_type == 'c': - if past_key_values is not None: - layer_kv = past_key_values.pop(0) - layer_past = tuple(s.to(x.device) for s in layer_kv) - else: - layer_past = None - - if layer_type == 'a': - out, inter, k, v = block(x, None, mask, None, attn_mask, self.pia_pos_emb, rotary_pos_emb, - prev_attn, layer_mem, layer_past) - elif layer_type == 'c': - if exists(full_context): - out, inter, k, v = block(x, full_context[cross_attn_count], mask, context_mask, None, None, - None, prev_attn, None, layer_past) - else: - out, inter, k, v = block(x, context, mask, context_mask, None, None, None, prev_attn, None, layer_past) - elif layer_type == 'f': - out = block(x) - - if layer_type == 'a' or layer_type == 'c' and present_key_values is not None: - present_key_values.append((k.detach(), v.detach())) - - if 
exists(post_branch_norm): - out = post_branch_norm(out, **norm_args) - - x = residual_fn(out, residual) - - if layer_type in ('a', 'c'): - intermediates.append(inter) - - if layer_type == 'a' and self.residual_attn: - prev_attn = inter.pre_softmax_attn - elif layer_type == 'c' and self.cross_residual_attn: - prev_cross_attn = inter.pre_softmax_attn - - if exists(post_main_norm): - x = post_main_norm(x, **norm_args) - - if layer_type == 'c': - cross_attn_count += 1 - - if layer_type == 'f': - hiddens.append(x) - - if return_hiddens: - intermediates = LayerIntermediates( - hiddens=hiddens, - attn_intermediates=intermediates, - past_key_values=present_key_values - ) - - return x, intermediates - - return x - - -class Encoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on encoder' - super().__init__(causal=False, **kwargs) - - -class Decoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on decoder' - super().__init__(causal=True, **kwargs) - - -class CrossAttender(AttentionLayers): - def __init__(self, **kwargs): - super().__init__(cross_attend=True, only_cross=True, **kwargs) - - -class ViTransformerWrapper(nn.Module): - def __init__( - self, - *, - image_size, - patch_size, - attn_layers, - num_classes=None, - dropout=0., - emb_dropout=0. - ): - super().__init__() - assert isinstance(attn_layers, Encoder), 'attention layers must be an Encoder' - assert image_size % patch_size == 0, 'image dimensions must be divisible by the patch size' - dim = attn_layers.dim - num_patches = (image_size // patch_size) ** 2 - patch_dim = 3 * patch_size ** 2 - - self.patch_size = patch_size - - self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim)) - self.patch_to_embedding = nn.Linear(patch_dim, dim) - self.cls_token = nn.Parameter(torch.randn(1, 1, dim)) - self.dropout = nn.Dropout(emb_dropout) - - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - self.mlp_head = FeedForward(dim, dim_out=num_classes, dropout=dropout) if exists(num_classes) else None - - def forward( - self, - img, - return_embeddings=False - ): - p = self.patch_size - - x = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=p, p2=p) - x = self.patch_to_embedding(x) - b, n, _ = x.shape - - cls_tokens = repeat(self.cls_token, '() n d -> b n d', b=b) - x = torch.cat((cls_tokens, x), dim=1) - x = x + self.pos_embedding[:, :(n + 1)] - x = self.dropout(x) - - x = self.attn_layers(x) - x = self.norm(x) - - if not exists(self.mlp_head) or return_embeddings: - return x - - return self.mlp_head(x[:, 0]) - - -class TransformerWrapper(nn.Module): - def __init__( - self, - *, - num_tokens, - max_seq_len, - attn_layers, - emb_dim=None, - max_mem_len=0., - shift_mem_down=0, - emb_dropout=0., - num_memory_tokens=None, - tie_embedding=False, - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - emb_dim = default(emb_dim, dim) - - self.max_seq_len = max_seq_len - self.max_mem_len = max_mem_len - self.shift_mem_down = shift_mem_down - - self.token_emb = nn.Embedding(num_tokens, emb_dim) - self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity() - self.attn_layers = attn_layers - 
self.norm = nn.LayerNorm(dim) - - self.init_() - - self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t() - - # memory tokens (like [cls]) from Memory Transformers paper - num_memory_tokens = default(num_memory_tokens, 0) - self.num_memory_tokens = num_memory_tokens - if num_memory_tokens > 0: - self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim)) - - def init_(self): - nn.init.kaiming_normal_(self.token_emb.weight) - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_hiddens=False, - return_attn=False, - mems=None, - use_cache=False, - **kwargs - ): - b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens - x = self.token_emb(x) - x = x + self.pos_emb(x) - x = self.emb_dropout(x) - - x = self.project_emb(x) - - if num_mem > 0: - mem = repeat(self.memory_tokens, 'n d -> b n d', b=b) - x = torch.cat((mem, x), dim=1) - - # auto-handle masking after appending memory tokens - if exists(mask): - mask = F.pad(mask, (num_mem, 0), value=True) - - if self.shift_mem_down and exists(mems): - mems_l, mems_r = mems[:self.shift_mem_down], mems[self.shift_mem_down:] - mems = [*mems_r, *mems_l] - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - mem, x = x[:, :num_mem], x[:, num_mem:] - - out = self.to_logits(x) if not return_embeddings else x - - if return_hiddens: - hiddens = intermediates.hiddens - return out, hiddens - - res = [out] - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - res.append(attn_maps) - if use_cache: - res.append(intermediates.past_key_values) - - if len(res) > 1: - return tuple(res) - return res[0] - - -class ContinuousTransformerWrapper(nn.Module): - def __init__( - self, - *, - max_seq_len, - attn_layers, - dim_in=None, - dim_out=None, - emb_dim=None, - emb_dropout=0., - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - - self.max_seq_len = max_seq_len - - self.pos_emb = AbsolutePositionalEmbedding(dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_in = nn.Linear(dim_in, dim) if exists(dim_in) else nn.Identity() - - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - - self.project_out = nn.Linear(dim, dim_out) if exists(dim_out) else nn.Identity() - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_attn=False, - mems=None, - use_cache=False, - **kwargs - ): - b, n, _, device = *x.shape, x.device - - x = self.project_in(x) - x = x + self.pos_emb(x) - x = self.emb_dropout(x) - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - out = self.project_out(x) if not return_embeddings else x - - res = [out] - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - res.append(attn_maps) - if use_cache: - res.append(intermediates.past_key_values) - - if len(res) > 1: - return tuple(res) - return res[0] - diff --git a/spaces/RMXK/RVC_HFF/infer/modules/uvr5/mdxnet.py b/spaces/RMXK/RVC_HFF/infer/modules/uvr5/mdxnet.py deleted file mode 100644 index 86a066893ad99cfed77788027a9deb8ed486a7f2..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/modules/uvr5/mdxnet.py +++ 
/dev/null @@ -1,246 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch -from tqdm import tqdm - -cpu = torch.device("cpu") - - -class ConvTDFNetTrim: - def __init__( - self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024 - ): - super(ConvTDFNetTrim, self).__init__() - - self.dim_f = dim_f - self.dim_t = 2**dim_t - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to( - device - ) - self.target_name = target_name - self.blender = "blender" in model_name - - self.dim_c = 4 - out_c = self.dim_c * 4 if target_name == "*" else self.dim_c - self.freq_pad = torch.zeros( - [1, out_c, self.n_bins - self.dim_f, self.dim_t] - ).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop, - window=self.window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape( - [-1, self.dim_c, self.n_bins, self.dim_t] - ) - return x[:, :, : self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = ( - self.freq_pad.repeat([x.shape[0], 1, 1, 1]) - if freq_pad is None - else freq_pad - ) - x = torch.cat([x, freq_pad], -2) - c = 4 * 2 if self.target_name == "*" else 2 - x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape( - [-1, 2, self.n_bins, self.dim_t] - ) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft( - x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True - ) - return x.reshape([-1, c, self.chunk_size]) - - -def get_models(device, dim_f, dim_t, n_fft): - return ConvTDFNetTrim( - device=device, - model_name="Conv-TDF", - target_name="vocals", - L=11, - dim_f=dim_f, - dim_t=dim_t, - n_fft=n_fft, - ) - - -class Predictor: - def __init__(self, args): - import onnxruntime as ort - - logger.info(ort.get_available_providers()) - self.args = args - self.model_ = get_models( - device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft - ) - self.model = ort.InferenceSession( - os.path.join(args.onnx, self.model_.target_name + ".onnx"), - providers=[ - "CUDAExecutionProvider", - "DmlExecutionProvider", - "CPUExecutionProvider", - ], - ) - logger.info("ONNX load done") - - def demix(self, mix): - samples = mix.shape[-1] - margin = self.args.margin - chunk_size = self.args.chunks * 44100 - assert not margin == 0, "margin cannot be zero!" 
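-        # Chunking scheme (descriptive note on the code below): the input is cut into
-        # chunks of `chunks * 44100` samples, keyed by their nominal start offset.
-        # Every chunk after the first keeps an extra `margin` samples on its left, and
-        # every chunk keeps up to `margin` extra samples on its right (clamped to the
-        # signal end); demix_base() trims these overlaps again when stitching the
-        # separated sources back together.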
- if margin > chunk_size: - margin = chunk_size - - segmented_mix = {} - - if self.args.chunks == 0 or samples < chunk_size: - chunk_size = samples - - counter = -1 - for skip in range(0, samples, chunk_size): - counter += 1 - - s_margin = 0 if counter == 0 else margin - end = min(skip + chunk_size + margin, samples) - - start = skip - s_margin - - segmented_mix[skip] = mix[:, start:end].copy() - if end == samples: - break - - sources = self.demix_base(segmented_mix, margin_size=margin) - """ - mix:(2,big_sample) - segmented_mix:offset->(2,small_sample) - sources:(1,2,big_sample) - """ - return sources - - def demix_base(self, mixes, margin_size): - chunked_sources = [] - progress_bar = tqdm(total=len(mixes)) - progress_bar.set_description("Processing") - for mix in mixes: - cmix = mixes[mix] - sources = [] - n_sample = cmix.shape[1] - model = self.model_ - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1 - ) - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i : i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu) - with torch.no_grad(): - _ort = self.model - spek = model.stft(mix_waves) - if self.args.denoise: - spec_pred = ( - -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5 - + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5 - ) - tar_waves = model.istft(torch.tensor(spec_pred)) - else: - tar_waves = model.istft( - torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0]) - ) - tar_signal = ( - tar_waves[:, :, trim:-trim] - .transpose(0, 1) - .reshape(2, -1) - .numpy()[:, :-pad] - ) - - start = 0 if mix == 0 else margin_size - end = None if mix == list(mixes.keys())[::-1][0] else -margin_size - if margin_size == 0: - end = None - sources.append(tar_signal[:, start:end]) - - progress_bar.update(1) - - chunked_sources.append(sources) - _sources = np.concatenate(chunked_sources, axis=-1) - # del self.model - progress_bar.close() - return _sources - - def prediction(self, m, vocal_root, others_root, format): - os.makedirs(vocal_root, exist_ok=True) - os.makedirs(others_root, exist_ok=True) - basename = os.path.basename(m) - mix, rate = librosa.load(m, mono=False, sr=44100) - if mix.ndim == 1: - mix = np.asfortranarray([mix, mix]) - mix = mix.T - sources = self.demix(mix.T) - opt = sources[0].T - if format in ["wav", "flac"]: - sf.write( - "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate - ) - sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate) - else: - path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename) - path_other = "%s/%s_others.wav" % (others_root, basename) - sf.write(path_vocal, mix - opt, rate) - sf.write(path_other, opt, rate) - if os.path.exists(path_vocal): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_vocal, path_vocal[:-4] + ".%s" % format) - ) - if os.path.exists(path_other): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_other, path_other[:-4] + ".%s" % format) - ) - - -class MDXNetDereverb: - def __init__(self, chunks, device): - self.onnx = "assets/uvr5_weights/onnx_dereverb_By_FoxJoy" - self.shifts = 10 # 'Predict with randomised equivariant stabilisation' - self.mixing = "min_mag" # ['default','min_mag','max_mag'] - self.chunks = chunks - self.margin = 44100 - self.dim_t = 9 - self.dim_f = 3072 - self.n_fft = 6144 - 
self.denoise = True - self.pred = Predictor(self) - self.device = device - - def path_audio(self, input, vocal_root, others_root, format): - self.pred.prediction(input, vocal_root, others_root, format) diff --git a/spaces/Rajagopal/ImageBind_zeroshot_demo2/CODE_OF_CONDUCT.md b/spaces/Rajagopal/ImageBind_zeroshot_demo2/CODE_OF_CONDUCT.md deleted file mode 100644 index f913b6a55a6c5ab6e1224e11fc039c3d4c3b6283..0000000000000000000000000000000000000000 --- a/spaces/Rajagopal/ImageBind_zeroshot_demo2/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or -advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic -address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a -professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. 
-Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq \ No newline at end of file diff --git a/spaces/RamAnanth1/videocrafter/lvdm/models/modules/lora.py b/spaces/RamAnanth1/videocrafter/lvdm/models/modules/lora.py deleted file mode 100644 index a2b64b72b9ac0788d23d34660fc3ac92163e4890..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/videocrafter/lvdm/models/modules/lora.py +++ /dev/null @@ -1,1251 +0,0 @@ -import json -from itertools import groupby -from typing import Dict, List, Optional, Set, Tuple, Type, Union - - -import torch -import torch.nn as nn -import torch.nn.functional as F - -# try: -# from safetensors.torch import safe_open -# from safetensors.torch import save_file as safe_save - -# safetensors_available = True -# except ImportError: -# from .safe_open import safe_open - -# def safe_save( -# tensors: Dict[str, torch.Tensor], -# filename: str, -# metadata: Optional[Dict[str, str]] = None, -# ) -> None: -# raise EnvironmentError( -# "Saving safetensors requires the safetensors library. Please install with pip or similar." -# ) - -# safetensors_available = False - - -class LoraInjectedLinear(nn.Module): - def __init__( - self, in_features, out_features, bias=False, r=4, dropout_p=0.1, scale=1.0 - ): - super().__init__() - - if r > min(in_features, out_features): - raise ValueError( - f"LoRA rank {r} must be less or equal than {min(in_features, out_features)}" - ) - self.r = r - self.linear = nn.Linear(in_features, out_features, bias) - self.lora_down = nn.Linear(in_features, r, bias=False) - self.dropout = nn.Dropout(dropout_p) - self.lora_up = nn.Linear(r, out_features, bias=False) - self.scale = scale - self.selector = nn.Identity() - - nn.init.normal_(self.lora_down.weight, std=1 / r) - nn.init.zeros_(self.lora_up.weight) - - def forward(self, input): - return ( - self.linear(input) - + self.dropout(self.lora_up(self.selector(self.lora_down(input)))) - * self.scale - ) - - def realize_as_lora(self): - return self.lora_up.weight.data * self.scale, self.lora_down.weight.data - - def set_selector_from_diag(self, diag: torch.Tensor): - # diag is a 1D tensor of size (r,) - assert diag.shape == (self.r,) - self.selector = nn.Linear(self.r, self.r, bias=False) - self.selector.weight.data = torch.diag(diag) - self.selector.weight.data = self.selector.weight.data.to( - self.lora_up.weight.device - ).to(self.lora_up.weight.dtype) - - -class LoraInjectedConv2d(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups: int = 1, - bias: bool = True, - r: int = 4, - dropout_p: float = 0.1, - scale: float = 1.0, - ): - super().__init__() - if r > min(in_channels, out_channels): - raise ValueError( - f"LoRA rank {r} must be less or equal than {min(in_channels, out_channels)}" - ) - self.r = r - self.conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, 
- dilation=dilation, - groups=groups, - bias=bias, - ) - - self.lora_down = nn.Conv2d( - in_channels=in_channels, - out_channels=r, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=False, - ) - self.dropout = nn.Dropout(dropout_p) - self.lora_up = nn.Conv2d( - in_channels=r, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.selector = nn.Identity() - self.scale = scale - - nn.init.normal_(self.lora_down.weight, std=1 / r) - nn.init.zeros_(self.lora_up.weight) - - def forward(self, input): - return ( - self.conv(input) - + self.dropout(self.lora_up(self.selector(self.lora_down(input)))) - * self.scale - ) - - def realize_as_lora(self): - return self.lora_up.weight.data * self.scale, self.lora_down.weight.data - - def set_selector_from_diag(self, diag: torch.Tensor): - # diag is a 1D tensor of size (r,) - assert diag.shape == (self.r,) - self.selector = nn.Conv2d( - in_channels=self.r, - out_channels=self.r, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.selector.weight.data = torch.diag(diag) - - # same device + dtype as lora_up - self.selector.weight.data = self.selector.weight.data.to( - self.lora_up.weight.device - ).to(self.lora_up.weight.dtype) - - -UNET_DEFAULT_TARGET_REPLACE = {"MemoryEfficientCrossAttention","CrossAttention", "Attention", "GEGLU"} - -UNET_EXTENDED_TARGET_REPLACE = {"TimestepEmbedSequential","SpatialTemporalTransformer", "MemoryEfficientCrossAttention","CrossAttention", "Attention", "GEGLU"} - -TEXT_ENCODER_DEFAULT_TARGET_REPLACE = {"CLIPAttention"} - -TEXT_ENCODER_EXTENDED_TARGET_REPLACE = {"CLIPMLP","CLIPAttention"} - -DEFAULT_TARGET_REPLACE = UNET_DEFAULT_TARGET_REPLACE - -EMBED_FLAG = "" - - -def _find_children( - model, - search_class: List[Type[nn.Module]] = [nn.Linear], -): - """ - Find all modules of a certain class (or union of classes). - - Returns all matching modules, along with the parent of those moduless and the - names they are referenced by. - """ - # For each target find every linear_class module that isn't a child of a LoraInjectedLinear - for parent in model.modules(): - for name, module in parent.named_children(): - if any([isinstance(module, _class) for _class in search_class]): - yield parent, name, module - - -def _find_modules_v2( - model, - ancestor_class: Optional[Set[str]] = None, - search_class: List[Type[nn.Module]] = [nn.Linear], - exclude_children_of: Optional[List[Type[nn.Module]]] = [ - LoraInjectedLinear, - LoraInjectedConv2d, - ], -): - """ - Find all modules of a certain class (or union of classes) that are direct or - indirect descendants of other modules of a certain class (or union of classes). - - Returns all matching modules, along with the parent of those moduless and the - names they are referenced by. - """ - - # Get the targets we should replace all linears under - if type(ancestor_class) is not set: - ancestor_class = set(ancestor_class) - print(ancestor_class) - if ancestor_class is not None: - ancestors = ( - module - for module in model.modules() - if module.__class__.__name__ in ancestor_class - ) - else: - # this, incase you want to naively iterate over all modules. 
- ancestors = [module for module in model.modules()] - - # For each target find every linear_class module that isn't a child of a LoraInjectedLinear - for ancestor in ancestors: - for fullname, module in ancestor.named_children(): - if any([isinstance(module, _class) for _class in search_class]): - # Find the direct parent if this is a descendant, not a child, of target - *path, name = fullname.split(".") - parent = ancestor - while path: - parent = parent.get_submodule(path.pop(0)) - # Skip this linear if it's a child of a LoraInjectedLinear - if exclude_children_of and any( - [isinstance(parent, _class) for _class in exclude_children_of] - ): - continue - # Otherwise, yield it - yield parent, name, module - - -def _find_modules_old( - model, - ancestor_class: Set[str] = DEFAULT_TARGET_REPLACE, - search_class: List[Type[nn.Module]] = [nn.Linear], - exclude_children_of: Optional[List[Type[nn.Module]]] = [LoraInjectedLinear], -): - ret = [] - for _module in model.modules(): - if _module.__class__.__name__ in ancestor_class: - - for name, _child_module in _module.named_children(): - if _child_module.__class__ in search_class: - ret.append((_module, name, _child_module)) - print(ret) - return ret - - -_find_modules = _find_modules_v2 - - -def inject_trainable_lora( - model: nn.Module, - target_replace_module: Set[str] = DEFAULT_TARGET_REPLACE, - r: int = 4, - loras=None, # path to lora .pt - verbose: bool = False, - dropout_p: float = 0.0, - scale: float = 1.0, -): - """ - inject lora into model, and returns lora parameter groups. - """ - - require_grad_params = [] - names = [] - - if loras != None: - loras = torch.load(loras) - - for _module, name, _child_module in _find_modules( - model, target_replace_module, search_class=[nn.Linear] - ): - weight = _child_module.weight - bias = _child_module.bias - if verbose: - print("LoRA Injection : injecting lora into ", name) - print("LoRA Injection : weight shape", weight.shape) - _tmp = LoraInjectedLinear( - _child_module.in_features, - _child_module.out_features, - _child_module.bias is not None, - r=r, - dropout_p=dropout_p, - scale=scale, - ) - _tmp.linear.weight = weight - if bias is not None: - _tmp.linear.bias = bias - - # switch the module - _tmp.to(_child_module.weight.device).to(_child_module.weight.dtype) - _module._modules[name] = _tmp - - require_grad_params.append(_module._modules[name].lora_up.parameters()) - require_grad_params.append(_module._modules[name].lora_down.parameters()) - - if loras != None: - _module._modules[name].lora_up.weight = loras.pop(0) - _module._modules[name].lora_down.weight = loras.pop(0) - - _module._modules[name].lora_up.weight.requires_grad = True - _module._modules[name].lora_down.weight.requires_grad = True - names.append(name) - - return require_grad_params, names - - -def inject_trainable_lora_extended( - model: nn.Module, - target_replace_module: Set[str] = UNET_EXTENDED_TARGET_REPLACE, - r: int = 4, - loras=None, # path to lora .pt -): - """ - inject lora into model, and returns lora parameter groups. 
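-    Unlike inject_trainable_lora, this extended variant wraps nn.Conv2d layers
-    (as LoraInjectedConv2d) in addition to nn.Linear, so conv blocks matched by
-    UNET_EXTENDED_TARGET_REPLACE can also be adapted.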
- """ - - require_grad_params = [] - names = [] - - if loras != None: - loras = torch.load(loras) - - for _module, name, _child_module in _find_modules( - model, target_replace_module, search_class=[nn.Linear, nn.Conv2d] - ): - if _child_module.__class__ == nn.Linear: - weight = _child_module.weight - bias = _child_module.bias - _tmp = LoraInjectedLinear( - _child_module.in_features, - _child_module.out_features, - _child_module.bias is not None, - r=r, - ) - _tmp.linear.weight = weight - if bias is not None: - _tmp.linear.bias = bias - elif _child_module.__class__ == nn.Conv2d: - weight = _child_module.weight - bias = _child_module.bias - _tmp = LoraInjectedConv2d( - _child_module.in_channels, - _child_module.out_channels, - _child_module.kernel_size, - _child_module.stride, - _child_module.padding, - _child_module.dilation, - _child_module.groups, - _child_module.bias is not None, - r=r, - ) - - _tmp.conv.weight = weight - if bias is not None: - _tmp.conv.bias = bias - - # switch the module - _tmp.to(_child_module.weight.device).to(_child_module.weight.dtype) - if bias is not None: - _tmp.to(_child_module.bias.device).to(_child_module.bias.dtype) - - _module._modules[name] = _tmp - - require_grad_params.append(_module._modules[name].lora_up.parameters()) - require_grad_params.append(_module._modules[name].lora_down.parameters()) - - if loras != None: - _module._modules[name].lora_up.weight = loras.pop(0) - _module._modules[name].lora_down.weight = loras.pop(0) - - _module._modules[name].lora_up.weight.requires_grad = True - _module._modules[name].lora_down.weight.requires_grad = True - names.append(name) - - return require_grad_params, names - - -def extract_lora_ups_down(model, target_replace_module=DEFAULT_TARGET_REPLACE): - - loras = [] - - for _m, _n, _child_module in _find_modules( - model, - target_replace_module, - search_class=[LoraInjectedLinear, LoraInjectedConv2d], - ): - loras.append((_child_module.lora_up, _child_module.lora_down)) - - if len(loras) == 0: - raise ValueError("No lora injected.") - - return loras - - -def extract_lora_as_tensor( - model, target_replace_module=DEFAULT_TARGET_REPLACE, as_fp16=True -): - - loras = [] - - for _m, _n, _child_module in _find_modules( - model, - target_replace_module, - search_class=[LoraInjectedLinear, LoraInjectedConv2d], - ): - up, down = _child_module.realize_as_lora() - if as_fp16: - up = up.to(torch.float16) - down = down.to(torch.float16) - - loras.append((up, down)) - - if len(loras) == 0: - raise ValueError("No lora injected.") - - return loras - - -def save_lora_weight( - model, - path="./lora.pt", - target_replace_module=DEFAULT_TARGET_REPLACE, -): - weights = [] - for _up, _down in extract_lora_ups_down( - model, target_replace_module=target_replace_module - ): - weights.append(_up.weight.to("cpu").to(torch.float16)) - weights.append(_down.weight.to("cpu").to(torch.float16)) - - torch.save(weights, path) - - -def save_lora_as_json(model, path="./lora.json"): - weights = [] - for _up, _down in extract_lora_ups_down(model): - weights.append(_up.weight.detach().cpu().numpy().tolist()) - weights.append(_down.weight.detach().cpu().numpy().tolist()) - - import json - - with open(path, "w") as f: - json.dump(weights, f) - - -def save_safeloras_with_embeds( - modelmap: Dict[str, Tuple[nn.Module, Set[str]]] = {}, - embeds: Dict[str, torch.Tensor] = {}, - outpath="./lora.safetensors", -): - """ - Saves the Lora from multiple modules in a single safetensor file. 
- - modelmap is a dictionary of { - "module name": (module, target_replace_module) - } - """ - weights = {} - metadata = {} - - for name, (model, target_replace_module) in modelmap.items(): - metadata[name] = json.dumps(list(target_replace_module)) - - for i, (_up, _down) in enumerate( - extract_lora_as_tensor(model, target_replace_module) - ): - rank = _down.shape[0] - - metadata[f"{name}:{i}:rank"] = str(rank) - weights[f"{name}:{i}:up"] = _up - weights[f"{name}:{i}:down"] = _down - - for token, tensor in embeds.items(): - metadata[token] = EMBED_FLAG - weights[token] = tensor - - print(f"Saving weights to {outpath}") - safe_save(weights, outpath, metadata) - - -def save_safeloras( - modelmap: Dict[str, Tuple[nn.Module, Set[str]]] = {}, - outpath="./lora.safetensors", -): - return save_safeloras_with_embeds(modelmap=modelmap, outpath=outpath) - - -def convert_loras_to_safeloras_with_embeds( - modelmap: Dict[str, Tuple[str, Set[str], int]] = {}, - embeds: Dict[str, torch.Tensor] = {}, - outpath="./lora.safetensors", -): - """ - Converts the Lora from multiple pytorch .pt files into a single safetensor file. - - modelmap is a dictionary of { - "module name": (pytorch_model_path, target_replace_module, rank) - } - """ - - weights = {} - metadata = {} - - for name, (path, target_replace_module, r) in modelmap.items(): - metadata[name] = json.dumps(list(target_replace_module)) - - lora = torch.load(path) - for i, weight in enumerate(lora): - is_up = i % 2 == 0 - i = i // 2 - - if is_up: - metadata[f"{name}:{i}:rank"] = str(r) - weights[f"{name}:{i}:up"] = weight - else: - weights[f"{name}:{i}:down"] = weight - - for token, tensor in embeds.items(): - metadata[token] = EMBED_FLAG - weights[token] = tensor - - print(f"Saving weights to {outpath}") - safe_save(weights, outpath, metadata) - - -def convert_loras_to_safeloras( - modelmap: Dict[str, Tuple[str, Set[str], int]] = {}, - outpath="./lora.safetensors", -): - convert_loras_to_safeloras_with_embeds(modelmap=modelmap, outpath=outpath) - - -def parse_safeloras( - safeloras, -) -> Dict[str, Tuple[List[nn.parameter.Parameter], List[int], List[str]]]: - """ - Converts a loaded safetensor file that contains a set of module Loras - into Parameters and other information - - Output is a dictionary of { - "module name": ( - [list of weights], - [list of ranks], - target_replacement_modules - ) - } - """ - loras = {} - metadata = safeloras.metadata() - - get_name = lambda k: k.split(":")[0] - - keys = list(safeloras.keys()) - keys.sort(key=get_name) - - for name, module_keys in groupby(keys, get_name): - info = metadata.get(name) - - if not info: - raise ValueError( - f"Tensor {name} has no metadata - is this a Lora safetensor?" 
- ) - - # Skip Textual Inversion embeds - if info == EMBED_FLAG: - continue - - # Handle Loras - # Extract the targets - target = json.loads(info) - - # Build the result lists - Python needs us to preallocate lists to insert into them - module_keys = list(module_keys) - ranks = [4] * (len(module_keys) // 2) - weights = [None] * len(module_keys) - - for key in module_keys: - # Split the model name and index out of the key - _, idx, direction = key.split(":") - idx = int(idx) - - # Add the rank - ranks[idx] = int(metadata[f"{name}:{idx}:rank"]) - - # Insert the weight into the list - idx = idx * 2 + (1 if direction == "down" else 0) - weights[idx] = nn.parameter.Parameter(safeloras.get_tensor(key)) - - loras[name] = (weights, ranks, target) - - return loras - - -def parse_safeloras_embeds( - safeloras, -) -> Dict[str, torch.Tensor]: - """ - Converts a loaded safetensor file that contains Textual Inversion embeds into - a dictionary of embed_token: Tensor - """ - embeds = {} - metadata = safeloras.metadata() - - for key in safeloras.keys(): - # Only handle Textual Inversion embeds - meta = metadata.get(key) - if not meta or meta != EMBED_FLAG: - continue - - embeds[key] = safeloras.get_tensor(key) - - return embeds - -def net_load_lora(net, checkpoint_path, alpha=1.0, remove=False): - visited=[] - state_dict = torch.load(checkpoint_path) - for k, v in state_dict.items(): - state_dict[k] = v.to(net.device) - - for key in state_dict: - if ".alpha" in key or key in visited: - continue - layer_infos = key.split(".")[:-2] # remove lora_up and down weight - curr_layer = net - # find the target layer - temp_name = layer_infos.pop(0) - while len(layer_infos) > -1: - curr_layer = curr_layer.__getattr__(temp_name) - if len(layer_infos) > 0: - temp_name = layer_infos.pop(0) - elif len(layer_infos) == 0: - break - if curr_layer.__class__ not in [nn.Linear, nn.Conv2d]: - print('missing param at:', key) - continue - pair_keys = [] - if "lora_down" in key: - pair_keys.append(key.replace("lora_down", "lora_up")) - pair_keys.append(key) - else: - pair_keys.append(key) - pair_keys.append(key.replace("lora_up", "lora_down")) - - # update weight - if len(state_dict[pair_keys[0]].shape) == 4: - # for conv - weight_up = state_dict[pair_keys[0]].squeeze(3).squeeze(2).to(torch.float32) - weight_down = state_dict[pair_keys[1]].squeeze(3).squeeze(2).to(torch.float32) - if remove: - curr_layer.weight.data -= alpha * torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3) - else: - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3) - else: - # for linear - weight_up = state_dict[pair_keys[0]].to(torch.float32) - weight_down = state_dict[pair_keys[1]].to(torch.float32) - if remove: - curr_layer.weight.data -= alpha * torch.mm(weight_up, weight_down) - else: - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down) - - # update visited list - for item in pair_keys: - visited.append(item) - print('load_weight_num:',len(visited)) - return - -def change_lora(model, inject_lora=False, lora_scale=1.0, lora_path='', last_time_lora='', last_time_lora_scale=1.0): - # remove lora - if last_time_lora != '': - net_load_lora(model, last_time_lora, alpha=last_time_lora_scale, remove=True) - # add new lora - if inject_lora: - net_load_lora(model, lora_path, alpha=lora_scale) - - -def net_load_lora_v2(net, checkpoint_path, alpha=1.0, remove=False, origin_weight=None): - visited=[] - state_dict = torch.load(checkpoint_path) - for k, v in state_dict.items(): - state_dict[k] = 
v.to(net.device) - - for key in state_dict: - if ".alpha" in key or key in visited: - continue - layer_infos = key.split(".")[:-2] # remove lora_up and down weight - curr_layer = net - # find the target layer - temp_name = layer_infos.pop(0) - while len(layer_infos) > -1: - curr_layer = curr_layer.__getattr__(temp_name) - if len(layer_infos) > 0: - temp_name = layer_infos.pop(0) - elif len(layer_infos) == 0: - break - if curr_layer.__class__ not in [nn.Linear, nn.Conv2d]: - print('missing param at:', key) - continue - pair_keys = [] - if "lora_down" in key: - pair_keys.append(key.replace("lora_down", "lora_up")) - pair_keys.append(key) - else: - pair_keys.append(key) - pair_keys.append(key.replace("lora_up", "lora_down")) - - # storage weight - if origin_weight is None: - origin_weight = dict() - storage_key = key.replace("lora_down", "lora").replace("lora_up", "lora") - origin_weight[storage_key] = curr_layer.weight.data.clone() - else: - storage_key = key.replace("lora_down", "lora").replace("lora_up", "lora") - if storage_key not in origin_weight.keys(): - origin_weight[storage_key] = curr_layer.weight.data.clone() - - - # update - if len(state_dict[pair_keys[0]].shape) == 4: - # for conv - if remove: - curr_layer.weight.data = origin_weight[storage_key].clone() - else: - weight_up = state_dict[pair_keys[0]].squeeze(3).squeeze(2).to(torch.float32) - weight_down = state_dict[pair_keys[1]].squeeze(3).squeeze(2).to(torch.float32) - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3) - else: - # for linear - if remove: - curr_layer.weight.data = origin_weight[storage_key].clone() - else: - weight_up = state_dict[pair_keys[0]].to(torch.float32) - weight_down = state_dict[pair_keys[1]].to(torch.float32) - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down) - - # update visited list - for item in pair_keys: - visited.append(item) - print('load_weight_num:',len(visited)) - return origin_weight - -def change_lora_v2(model, inject_lora=False, lora_scale=1.0, lora_path='', last_time_lora='', last_time_lora_scale=1.0, origin_weight=None): - # remove lora - if last_time_lora != '': - origin_weight = net_load_lora_v2(model, last_time_lora, alpha=last_time_lora_scale, remove=True, origin_weight=origin_weight) - # add new lora - if inject_lora: - origin_weight = net_load_lora_v2(model, lora_path, alpha=lora_scale, origin_weight=origin_weight) - return origin_weight - - - - - -def load_safeloras(path, device="cpu"): - safeloras = safe_open(path, framework="pt", device=device) - return parse_safeloras(safeloras) - - -def load_safeloras_embeds(path, device="cpu"): - safeloras = safe_open(path, framework="pt", device=device) - return parse_safeloras_embeds(safeloras) - - -def load_safeloras_both(path, device="cpu"): - safeloras = safe_open(path, framework="pt", device=device) - return parse_safeloras(safeloras), parse_safeloras_embeds(safeloras) - - -def collapse_lora(model, alpha=1.0): - - for _module, name, _child_module in _find_modules( - model, - UNET_EXTENDED_TARGET_REPLACE | TEXT_ENCODER_EXTENDED_TARGET_REPLACE, - search_class=[LoraInjectedLinear, LoraInjectedConv2d], - ): - - if isinstance(_child_module, LoraInjectedLinear): - print("Collapsing Lin Lora in", name) - - _child_module.linear.weight = nn.Parameter( - _child_module.linear.weight.data - + alpha - * ( - _child_module.lora_up.weight.data - @ _child_module.lora_down.weight.data - ) - .type(_child_module.linear.weight.dtype) - .to(_child_module.linear.weight.device) - ) - - else: - 
print("Collapsing Conv Lora in", name) - _child_module.conv.weight = nn.Parameter( - _child_module.conv.weight.data - + alpha - * ( - _child_module.lora_up.weight.data.flatten(start_dim=1) - @ _child_module.lora_down.weight.data.flatten(start_dim=1) - ) - .reshape(_child_module.conv.weight.data.shape) - .type(_child_module.conv.weight.dtype) - .to(_child_module.conv.weight.device) - ) - - -def monkeypatch_or_replace_lora( - model, - loras, - target_replace_module=DEFAULT_TARGET_REPLACE, - r: Union[int, List[int]] = 4, -): - for _module, name, _child_module in _find_modules( - model, target_replace_module, search_class=[nn.Linear, LoraInjectedLinear] - ): - _source = ( - _child_module.linear - if isinstance(_child_module, LoraInjectedLinear) - else _child_module - ) - - weight = _source.weight - bias = _source.bias - _tmp = LoraInjectedLinear( - _source.in_features, - _source.out_features, - _source.bias is not None, - r=r.pop(0) if isinstance(r, list) else r, - ) - _tmp.linear.weight = weight - - if bias is not None: - _tmp.linear.bias = bias - - # switch the module - _module._modules[name] = _tmp - - up_weight = loras.pop(0) - down_weight = loras.pop(0) - - _module._modules[name].lora_up.weight = nn.Parameter( - up_weight.type(weight.dtype) - ) - _module._modules[name].lora_down.weight = nn.Parameter( - down_weight.type(weight.dtype) - ) - - _module._modules[name].to(weight.device) - - -def monkeypatch_or_replace_lora_extended( - model, - loras, - target_replace_module=DEFAULT_TARGET_REPLACE, - r: Union[int, List[int]] = 4, -): - for _module, name, _child_module in _find_modules( - model, - target_replace_module, - search_class=[nn.Linear, LoraInjectedLinear, nn.Conv2d, LoraInjectedConv2d], - ): - - if (_child_module.__class__ == nn.Linear) or ( - _child_module.__class__ == LoraInjectedLinear - ): - if len(loras[0].shape) != 2: - continue - - _source = ( - _child_module.linear - if isinstance(_child_module, LoraInjectedLinear) - else _child_module - ) - - weight = _source.weight - bias = _source.bias - _tmp = LoraInjectedLinear( - _source.in_features, - _source.out_features, - _source.bias is not None, - r=r.pop(0) if isinstance(r, list) else r, - ) - _tmp.linear.weight = weight - - if bias is not None: - _tmp.linear.bias = bias - - elif (_child_module.__class__ == nn.Conv2d) or ( - _child_module.__class__ == LoraInjectedConv2d - ): - if len(loras[0].shape) != 4: - continue - _source = ( - _child_module.conv - if isinstance(_child_module, LoraInjectedConv2d) - else _child_module - ) - - weight = _source.weight - bias = _source.bias - _tmp = LoraInjectedConv2d( - _source.in_channels, - _source.out_channels, - _source.kernel_size, - _source.stride, - _source.padding, - _source.dilation, - _source.groups, - _source.bias is not None, - r=r.pop(0) if isinstance(r, list) else r, - ) - - _tmp.conv.weight = weight - - if bias is not None: - _tmp.conv.bias = bias - - # switch the module - _module._modules[name] = _tmp - - up_weight = loras.pop(0) - down_weight = loras.pop(0) - - _module._modules[name].lora_up.weight = nn.Parameter( - up_weight.type(weight.dtype) - ) - _module._modules[name].lora_down.weight = nn.Parameter( - down_weight.type(weight.dtype) - ) - - _module._modules[name].to(weight.device) - - -def monkeypatch_or_replace_safeloras(models, safeloras): - loras = parse_safeloras(safeloras) - - for name, (lora, ranks, target) in loras.items(): - model = getattr(models, name, None) - - if not model: - print(f"No model provided for {name}, contained in Lora") - continue - - 
monkeypatch_or_replace_lora_extended(model, lora, target, ranks) - - -def monkeypatch_remove_lora(model): - for _module, name, _child_module in _find_modules( - model, search_class=[LoraInjectedLinear, LoraInjectedConv2d] - ): - if isinstance(_child_module, LoraInjectedLinear): - _source = _child_module.linear - weight, bias = _source.weight, _source.bias - - _tmp = nn.Linear( - _source.in_features, _source.out_features, bias is not None - ) - - _tmp.weight = weight - if bias is not None: - _tmp.bias = bias - - else: - _source = _child_module.conv - weight, bias = _source.weight, _source.bias - - _tmp = nn.Conv2d( - in_channels=_source.in_channels, - out_channels=_source.out_channels, - kernel_size=_source.kernel_size, - stride=_source.stride, - padding=_source.padding, - dilation=_source.dilation, - groups=_source.groups, - bias=bias is not None, - ) - - _tmp.weight = weight - if bias is not None: - _tmp.bias = bias - - _module._modules[name] = _tmp - - -def monkeypatch_add_lora( - model, - loras, - target_replace_module=DEFAULT_TARGET_REPLACE, - alpha: float = 1.0, - beta: float = 1.0, -): - for _module, name, _child_module in _find_modules( - model, target_replace_module, search_class=[LoraInjectedLinear] - ): - weight = _child_module.linear.weight - - up_weight = loras.pop(0) - down_weight = loras.pop(0) - - _module._modules[name].lora_up.weight = nn.Parameter( - up_weight.type(weight.dtype).to(weight.device) * alpha - + _module._modules[name].lora_up.weight.to(weight.device) * beta - ) - _module._modules[name].lora_down.weight = nn.Parameter( - down_weight.type(weight.dtype).to(weight.device) * alpha - + _module._modules[name].lora_down.weight.to(weight.device) * beta - ) - - _module._modules[name].to(weight.device) - - -def tune_lora_scale(model, alpha: float = 1.0): - for _module in model.modules(): - if _module.__class__.__name__ in ["LoraInjectedLinear", "LoraInjectedConv2d"]: - _module.scale = alpha - - -def set_lora_diag(model, diag: torch.Tensor): - for _module in model.modules(): - if _module.__class__.__name__ in ["LoraInjectedLinear", "LoraInjectedConv2d"]: - _module.set_selector_from_diag(diag) - - -def _text_lora_path(path: str) -> str: - assert path.endswith(".pt"), "Only .pt files are supported" - return ".".join(path.split(".")[:-1] + ["text_encoder", "pt"]) - - -def _ti_lora_path(path: str) -> str: - assert path.endswith(".pt"), "Only .pt files are supported" - return ".".join(path.split(".")[:-1] + ["ti", "pt"]) - - -def apply_learned_embed_in_clip( - learned_embeds, - text_encoder, - tokenizer, - token: Optional[Union[str, List[str]]] = None, - idempotent=False, -): - if isinstance(token, str): - trained_tokens = [token] - elif isinstance(token, list): - assert len(learned_embeds.keys()) == len( - token - ), "The number of tokens and the number of embeds should be the same" - trained_tokens = token - else: - trained_tokens = list(learned_embeds.keys()) - - for token in trained_tokens: - print(token) - embeds = learned_embeds[token] - - # cast to dtype of text_encoder - dtype = text_encoder.get_input_embeddings().weight.dtype - num_added_tokens = tokenizer.add_tokens(token) - - i = 1 - if not idempotent: - while num_added_tokens == 0: - print(f"The tokenizer already contains the token {token}.") - token = f"{token[:-1]}-{i}>" - print(f"Attempting to add the token {token}.") - num_added_tokens = tokenizer.add_tokens(token) - i += 1 - elif num_added_tokens == 0 and idempotent: - print(f"The tokenizer already contains the token {token}.") - print(f"Replacing {token} 
embedding.") - - # resize the token embeddings - text_encoder.resize_token_embeddings(len(tokenizer)) - - # get the id for the token and assign the embeds - token_id = tokenizer.convert_tokens_to_ids(token) - text_encoder.get_input_embeddings().weight.data[token_id] = embeds - return token - - -def load_learned_embed_in_clip( - learned_embeds_path, - text_encoder, - tokenizer, - token: Optional[Union[str, List[str]]] = None, - idempotent=False, -): - learned_embeds = torch.load(learned_embeds_path) - apply_learned_embed_in_clip( - learned_embeds, text_encoder, tokenizer, token, idempotent - ) - - -def patch_pipe( - pipe, - maybe_unet_path, - token: Optional[str] = None, - r: int = 4, - patch_unet=True, - patch_text=True, - patch_ti=True, - idempotent_token=True, - unet_target_replace_module=DEFAULT_TARGET_REPLACE, - text_target_replace_module=TEXT_ENCODER_DEFAULT_TARGET_REPLACE, -): - if maybe_unet_path.endswith(".pt"): - # torch format - - if maybe_unet_path.endswith(".ti.pt"): - unet_path = maybe_unet_path[:-6] + ".pt" - elif maybe_unet_path.endswith(".text_encoder.pt"): - unet_path = maybe_unet_path[:-16] + ".pt" - else: - unet_path = maybe_unet_path - - ti_path = _ti_lora_path(unet_path) - text_path = _text_lora_path(unet_path) - - if patch_unet: - print("LoRA : Patching Unet") - monkeypatch_or_replace_lora( - pipe.unet, - torch.load(unet_path), - r=r, - target_replace_module=unet_target_replace_module, - ) - - if patch_text: - print("LoRA : Patching text encoder") - monkeypatch_or_replace_lora( - pipe.text_encoder, - torch.load(text_path), - target_replace_module=text_target_replace_module, - r=r, - ) - if patch_ti: - print("LoRA : Patching token input") - token = load_learned_embed_in_clip( - ti_path, - pipe.text_encoder, - pipe.tokenizer, - token=token, - idempotent=idempotent_token, - ) - - elif maybe_unet_path.endswith(".safetensors"): - safeloras = safe_open(maybe_unet_path, framework="pt", device="cpu") - monkeypatch_or_replace_safeloras(pipe, safeloras) - tok_dict = parse_safeloras_embeds(safeloras) - if patch_ti: - apply_learned_embed_in_clip( - tok_dict, - pipe.text_encoder, - pipe.tokenizer, - token=token, - idempotent=idempotent_token, - ) - return tok_dict - - -@torch.no_grad() -def inspect_lora(model): - moved = {} - - for name, _module in model.named_modules(): - if _module.__class__.__name__ in ["LoraInjectedLinear", "LoraInjectedConv2d"]: - ups = _module.lora_up.weight.data.clone() - downs = _module.lora_down.weight.data.clone() - - wght: torch.Tensor = ups.flatten(1) @ downs.flatten(1) - - dist = wght.flatten().abs().mean().item() - if name in moved: - moved[name].append(dist) - else: - moved[name] = [dist] - - return moved - - -def save_all( - unet, - text_encoder, - save_path, - placeholder_token_ids=None, - placeholder_tokens=None, - save_lora=True, - save_ti=True, - target_replace_module_text=TEXT_ENCODER_DEFAULT_TARGET_REPLACE, - target_replace_module_unet=DEFAULT_TARGET_REPLACE, - safe_form=True, -): - if not safe_form: - # save ti - if save_ti: - ti_path = _ti_lora_path(save_path) - learned_embeds_dict = {} - for tok, tok_id in zip(placeholder_tokens, placeholder_token_ids): - learned_embeds = text_encoder.get_input_embeddings().weight[tok_id] - print( - f"Current Learned Embeddings for {tok}:, id {tok_id} ", - learned_embeds[:4], - ) - learned_embeds_dict[tok] = learned_embeds.detach().cpu() - - torch.save(learned_embeds_dict, ti_path) - print("Ti saved to ", ti_path) - - # save text encoder - if save_lora: - - save_lora_weight( - unet, save_path, 
target_replace_module=target_replace_module_unet - ) - print("Unet saved to ", save_path) - - save_lora_weight( - text_encoder, - _text_lora_path(save_path), - target_replace_module=target_replace_module_text, - ) - print("Text Encoder saved to ", _text_lora_path(save_path)) - - else: - assert save_path.endswith( - ".safetensors" - ), f"Save path : {save_path} should end with .safetensors" - - loras = {} - embeds = {} - - if save_lora: - - loras["unet"] = (unet, target_replace_module_unet) - loras["text_encoder"] = (text_encoder, target_replace_module_text) - - if save_ti: - for tok, tok_id in zip(placeholder_tokens, placeholder_token_ids): - learned_embeds = text_encoder.get_input_embeddings().weight[tok_id] - print( - f"Current Learned Embeddings for {tok}:, id {tok_id} ", - learned_embeds[:4], - ) - embeds[tok] = learned_embeds.detach().cpu() - - save_safeloras_with_embeds(loras, embeds, save_path) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/installer.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/installer.py deleted file mode 100644 index b7096df14b4a15980ad138a3990d3e25aeb3bfe1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/installer.py +++ /dev/null @@ -1,104 +0,0 @@ -import glob -import os -import subprocess -import sys -import tempfile -import warnings -from distutils import log -from distutils.errors import DistutilsError - -import pkg_resources -from setuptools.wheel import Wheel -from ._deprecation_warning import SetuptoolsDeprecationWarning - - -def _fixup_find_links(find_links): - """Ensure find-links option end-up being a list of strings.""" - if isinstance(find_links, str): - return find_links.split() - assert isinstance(find_links, (tuple, list)) - return find_links - - -def fetch_build_egg(dist, req): # noqa: C901 # is too complex (16) # FIXME - """Fetch an egg needed for building. - - Use pip/wheel to fetch/build a wheel.""" - warnings.warn( - "setuptools.installer is deprecated. Requirements should " - "be satisfied by a PEP 517 installer.", - SetuptoolsDeprecationWarning, - ) - # Warn if wheel is not available - try: - pkg_resources.get_distribution('wheel') - except pkg_resources.DistributionNotFound: - dist.announce('WARNING: The wheel package is not available.', log.WARN) - # Ignore environment markers; if supplied, it is required. - req = strip_marker(req) - # Take easy_install options into account, but do not override relevant - # pip environment variables (like PIP_INDEX_URL or PIP_QUIET); they'll - # take precedence. 
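-    # The code below reads the easy_install options, checks the egg cache for an
-    # already-built distribution, and otherwise runs `pip wheel` in a temporary
-    # directory and installs the resulting wheel into the cache as an egg.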
- opts = dist.get_option_dict('easy_install') - if 'allow_hosts' in opts: - raise DistutilsError('the `allow-hosts` option is not supported ' - 'when using pip to install requirements.') - quiet = 'PIP_QUIET' not in os.environ and 'PIP_VERBOSE' not in os.environ - if 'PIP_INDEX_URL' in os.environ: - index_url = None - elif 'index_url' in opts: - index_url = opts['index_url'][1] - else: - index_url = None - find_links = ( - _fixup_find_links(opts['find_links'][1])[:] if 'find_links' in opts - else [] - ) - if dist.dependency_links: - find_links.extend(dist.dependency_links) - eggs_dir = os.path.realpath(dist.get_egg_cache_dir()) - environment = pkg_resources.Environment() - for egg_dist in pkg_resources.find_distributions(eggs_dir): - if egg_dist in req and environment.can_add(egg_dist): - return egg_dist - with tempfile.TemporaryDirectory() as tmpdir: - cmd = [ - sys.executable, '-m', 'pip', - '--disable-pip-version-check', - 'wheel', '--no-deps', - '-w', tmpdir, - ] - if quiet: - cmd.append('--quiet') - if index_url is not None: - cmd.extend(('--index-url', index_url)) - for link in find_links or []: - cmd.extend(('--find-links', link)) - # If requirement is a PEP 508 direct URL, directly pass - # the URL to pip, as `req @ url` does not work on the - # command line. - cmd.append(req.url or str(req)) - try: - subprocess.check_call(cmd) - except subprocess.CalledProcessError as e: - raise DistutilsError(str(e)) from e - wheel = Wheel(glob.glob(os.path.join(tmpdir, '*.whl'))[0]) - dist_location = os.path.join(eggs_dir, wheel.egg_name()) - wheel.install_as_egg(dist_location) - dist_metadata = pkg_resources.PathMetadata( - dist_location, os.path.join(dist_location, 'EGG-INFO')) - dist = pkg_resources.Distribution.from_filename( - dist_location, metadata=dist_metadata) - return dist - - -def strip_marker(req): - """ - Return a new requirement without the environment marker to avoid - calling pip with something like `babel; extra == "i18n"`, which - would always be ignored. - """ - # create a copy to avoid mutating the input - req = pkg_resources.Requirement.parse(str(req)) - req.marker = None - return req diff --git a/spaces/Realcat/image-matching-webui/hloc/extractors/dir.py b/spaces/Realcat/image-matching-webui/hloc/extractors/dir.py deleted file mode 100644 index d1fa39e45e68355c3f06accfc6327d0ab50cf999..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/extractors/dir.py +++ /dev/null @@ -1,80 +0,0 @@ -import sys -from pathlib import Path -import torch -from zipfile import ZipFile -import os -import sklearn -import gdown - -from ..utils.base_model import BaseModel - -sys.path.append( - str(Path(__file__).parent / "../../third_party/deep-image-retrieval") -) -os.environ["DB_ROOT"] = "" # required by dirtorch - -from dirtorch.utils import common # noqa: E402 -from dirtorch.extract_features import load_model # noqa: E402 - -# The DIR model checkpoints (pickle files) include sklearn.decomposition.pca, -# which has been deprecated in sklearn v0.24 -# and must be explicitly imported with `from sklearn.decomposition import PCA`. -# This is a hacky workaround to maintain forward compatibility. 
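-# The alias below must be registered before load_model() unpickles the DIR
-# checkpoint, so pickle can still resolve the removed sklearn.decomposition.pca path.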
-sys.modules["sklearn.decomposition.pca"] = sklearn.decomposition._pca - - -class DIR(BaseModel): - default_conf = { - "model_name": "Resnet-101-AP-GeM", - "whiten_name": "Landmarks_clean", - "whiten_params": { - "whitenp": 0.25, - "whitenv": None, - "whitenm": 1.0, - }, - "pooling": "gem", - "gemp": 3, - } - required_inputs = ["image"] - - dir_models = { - "Resnet-101-AP-GeM": "https://docs.google.com/uc?export=download&id=1UWJGDuHtzaQdFhSMojoYVQjmCXhIwVvy", - } - - def _init(self, conf): - checkpoint = Path( - torch.hub.get_dir(), "dirtorch", conf["model_name"] + ".pt" - ) - if not checkpoint.exists(): - checkpoint.parent.mkdir(exist_ok=True, parents=True) - link = self.dir_models[conf["model_name"]] - gdown.download(str(link), str(checkpoint) + ".zip", quiet=False) - zf = ZipFile(str(checkpoint) + ".zip", "r") - zf.extractall(checkpoint.parent) - zf.close() - os.remove(str(checkpoint) + ".zip") - - self.net = load_model(checkpoint, False) # first load on CPU - if conf["whiten_name"]: - assert conf["whiten_name"] in self.net.pca - - def _forward(self, data): - image = data["image"] - assert image.shape[1] == 3 - mean = self.net.preprocess["mean"] - std = self.net.preprocess["std"] - image = image - image.new_tensor(mean)[:, None, None] - image = image / image.new_tensor(std)[:, None, None] - - desc = self.net(image) - desc = desc.unsqueeze(0) # batch dimension - if self.conf["whiten_name"]: - pca = self.net.pca[self.conf["whiten_name"]] - desc = common.whiten_features( - desc.cpu().numpy(), pca, **self.conf["whiten_params"] - ) - desc = torch.from_numpy(desc) - - return { - "global_descriptor": desc, - } diff --git a/spaces/Recognai/veganuary_ner/README.md b/spaces/Recognai/veganuary_ner/README.md deleted file mode 100644 index ea9bb586f62c4a6f337b5d7aab1a8dcd888a60c7..0000000000000000000000000000000000000000 --- a/spaces/Recognai/veganuary_ner/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Veganuary -emoji: 🏃 -colorFrom: gray -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
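For illustration, a front-matter block combining the fields documented above might look like the sketch below; the `models` and `datasets` entries reuse the example IDs from the field descriptions and are not values taken from this Space:

```yaml
---
title: Veganuary
emoji: 🏃
colorFrom: gray
colorTo: pink
sdk: gradio
app_file: app.py
pinned: false
models:
  - deepset/roberta-base-squad2   # example model ID from the field descriptions above
datasets:
  - common_voice                  # example dataset ID from the field descriptions above
---
```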
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py deleted file mode 100644 index a5f6be31e14488e4b8a006b7142a82c872388d82..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py +++ /dev/null @@ -1,22 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RepPointsDetector(SingleStageDetector): - """RepPoints: Point Set Representation for Object Detection. - - This detector is the implementation of: - - RepPoints detector (https://arxiv.org/pdf/1904.11490) - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(RepPointsDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/apis/train.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/apis/train.py deleted file mode 100644 index 7f2f1f95c0a8e7c9232f7aa490e8104f8e37c4f5..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/apis/train.py +++ /dev/null @@ -1,185 +0,0 @@ -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner) -from mmcv.utils import build_from_cfg - -from mmdet.core import DistEvalHook, EvalHook -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import get_root_logger -from mmcv_custom.runner import EpochBasedRunnerAmp -try: - import apex -except: - print('apex is not installed') - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - if 'imgs_per_gpu' in cfg.data: - logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. 
' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - logger.warning( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - logger.warning( - 'Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed) for ds in dataset - ] - - # build optimizer - optimizer = build_optimizer(model, cfg.optimizer) - - # use apex fp16 optimizer - if cfg.optimizer_config.get("type", None) and cfg.optimizer_config["type"] == "DistOptimizerHook": - if cfg.optimizer_config.get("use_fp16", False): - model, optimizer = apex.amp.initialize( - model.cuda(), optimizer, opt_level="O1") - for m in model.modules(): - if hasattr(m, "fp16_enabled"): - m.fp16_enabled = True - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - if 'runner' not in cfg: - cfg.runner = { - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - } - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - - # build runner - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks(cfg.lr_config, optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - # Support batch_size > 1 in validation - val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1) - if val_samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=val_samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - runner.register_hook(eval_hook(val_dataloader, 
**eval_cfg)) - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/utils/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/utils/__init__.py deleted file mode 100644 index e79ad8c02a2d465f0690a4aa80683a5c6d784d52..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .collect_env import collect_env -from .logger import get_root_logger -from .optimizer import DistOptimizerHook - -__all__ = ['get_root_logger', 'collect_env', 'DistOptimizerHook'] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/enc_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/enc_head.py deleted file mode 100644 index da57af617e05d41761628fd2d6d232655b32d905..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/enc_head.py +++ /dev/null @@ -1,187 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, build_norm_layer - -from annotator.uniformer.mmseg.ops import Encoding, resize -from ..builder import HEADS, build_loss -from .decode_head import BaseDecodeHead - - -class EncModule(nn.Module): - """Encoding Module used in EncNet. - - Args: - in_channels (int): Input channels. - num_codes (int): Number of code words. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): - super(EncModule, self).__init__() - self.encoding_project = ConvModule( - in_channels, - in_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # TODO: resolve this hack - # change to 1d - if norm_cfg is not None: - encoding_norm_cfg = norm_cfg.copy() - if encoding_norm_cfg['type'] in ['BN', 'IN']: - encoding_norm_cfg['type'] += '1d' - else: - encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( - '2d', '1d') - else: - # fallback to BN1d - encoding_norm_cfg = dict(type='BN1d') - self.encoding = nn.Sequential( - Encoding(channels=in_channels, num_codes=num_codes), - build_norm_layer(encoding_norm_cfg, num_codes)[1], - nn.ReLU(inplace=True)) - self.fc = nn.Sequential( - nn.Linear(in_channels, in_channels), nn.Sigmoid()) - - def forward(self, x): - """Forward function.""" - encoding_projection = self.encoding_project(x) - encoding_feat = self.encoding(encoding_projection).mean(dim=1) - batch_size, channels, _, _ = x.size() - gamma = self.fc(encoding_feat) - y = gamma.view(batch_size, channels, 1, 1) - output = F.relu_(x + x * y) - return encoding_feat, output - - -@HEADS.register_module() -class EncHead(BaseDecodeHead): - """Context Encoding for Semantic Segmentation. - - This head is the implementation of `EncNet - `_. - - Args: - num_codes (int): Number of code words. Default: 32. - use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to - regularize the training. Default: True. - add_lateral (bool): Whether use lateral connection to fuse features. - Default: False. - loss_se_decode (dict): Config of decode loss. - Default: dict(type='CrossEntropyLoss', use_sigmoid=True). - """ - - def __init__(self, - num_codes=32, - use_se_loss=True, - add_lateral=False, - loss_se_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=0.2), - **kwargs): - super(EncHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.use_se_loss = use_se_loss - self.add_lateral = add_lateral - self.num_codes = num_codes - self.bottleneck = ConvModule( - self.in_channels[-1], - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if add_lateral: - self.lateral_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the last one - self.lateral_convs.append( - ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.fusion = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.enc_module = EncModule( - self.channels, - num_codes=num_codes, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.use_se_loss: - self.loss_se_decode = build_loss(loss_se_decode) - self.se_layer = nn.Linear(self.channels, self.num_classes) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - feat = self.bottleneck(inputs[-1]) - if self.add_lateral: - laterals = [ - resize( - lateral_conv(inputs[i]), - size=feat.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - feat = self.fusion(torch.cat([feat, *laterals], 1)) - encode_feat, output = self.enc_module(feat) - output = self.cls_seg(output) - if self.use_se_loss: - se_output = self.se_layer(encode_feat) - return 
output, se_output - else: - return output - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, ignore se_loss.""" - if self.use_se_loss: - return self.forward(inputs)[0] - else: - return self.forward(inputs) - - @staticmethod - def _convert_to_onehot_labels(seg_label, num_classes): - """Convert segmentation label to onehot. - - Args: - seg_label (Tensor): Segmentation label of shape (N, H, W). - num_classes (int): Number of classes. - - Returns: - Tensor: Onehot labels of shape (N, num_classes). - """ - - batch_size = seg_label.size(0) - onehot_labels = seg_label.new_zeros((batch_size, num_classes)) - for i in range(batch_size): - hist = seg_label[i].float().histc( - bins=num_classes, min=0, max=num_classes - 1) - onehot_labels[i] = hist > 0 - return onehot_labels - - def losses(self, seg_logit, seg_label): - """Compute segmentation and semantic encoding loss.""" - seg_logit, se_seg_logit = seg_logit - loss = dict() - loss.update(super(EncHead, self).losses(seg_logit, seg_label)) - se_loss = self.loss_se_decode( - se_seg_logit, - self._convert_to_onehot_labels(seg_label, self.num_classes)) - loss['loss_se'] = se_loss - return loss diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning -from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from annotator.uniformer.mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. 
') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. 
- If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. - """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' 
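-        # The layers built below stack (num_fcs - 1) Linear -> activation -> dropout
-        # blocks and finish with a Linear back to embed_dims plus a dropout; an
-        # optional identity shortcut is added in forward().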
- self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. - """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. 
') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' - - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. 
- - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. - """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/roiaware_pool3d.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/roiaware_pool3d.py deleted file mode 100644 index 291b0e5a9b692492c7d7e495ea639c46042e2f18..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/roiaware_pool3d.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.autograd import Function - -import annotator.uniformer.mmcv as mmcv -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roiaware_pool3d_forward', 'roiaware_pool3d_backward']) - - -class RoIAwarePool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `PartA2 `_ for more - details. - - Args: - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int, optional): The maximum number of points per - voxel. Default: 128. - mode (str, optional): Pooling method of RoIAware, 'max' or 'avg'. - Default: 'max'. - """ - - def __init__(self, out_size, max_pts_per_voxel=128, mode='max'): - super().__init__() - - self.out_size = out_size - self.max_pts_per_voxel = max_pts_per_voxel - assert mode in ['max', 'avg'] - pool_mapping = {'max': 0, 'avg': 1} - self.mode = pool_mapping[mode] - - def forward(self, rois, pts, pts_feature): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C] - """ - - return RoIAwarePool3dFunction.apply(rois, pts, pts_feature, - self.out_size, - self.max_pts_per_voxel, self.mode) - - -class RoIAwarePool3dFunction(Function): - - @staticmethod - def forward(ctx, rois, pts, pts_feature, out_size, max_pts_per_voxel, - mode): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int): The maximum number of points per voxel. - Default: 128. - mode (int): Pooling method of RoIAware, 0 (max pool) or 1 (average - pool). - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C], output - pooled features. 
- """ - - if isinstance(out_size, int): - out_x = out_y = out_z = out_size - else: - assert len(out_size) == 3 - assert mmcv.is_tuple_of(out_size, int) - out_x, out_y, out_z = out_size - - num_rois = rois.shape[0] - num_channels = pts_feature.shape[-1] - num_pts = pts.shape[0] - - pooled_features = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels)) - argmax = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels), dtype=torch.int) - pts_idx_of_voxels = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, max_pts_per_voxel), - dtype=torch.int) - - ext_module.roiaware_pool3d_forward(rois, pts, pts_feature, argmax, - pts_idx_of_voxels, pooled_features, - mode) - - ctx.roiaware_pool3d_for_backward = (pts_idx_of_voxels, argmax, mode, - num_pts, num_channels) - return pooled_features - - @staticmethod - def backward(ctx, grad_out): - ret = ctx.roiaware_pool3d_for_backward - pts_idx_of_voxels, argmax, mode, num_pts, num_channels = ret - - grad_in = grad_out.new_zeros((num_pts, num_channels)) - ext_module.roiaware_pool3d_backward(pts_idx_of_voxels, argmax, - grad_out.contiguous(), grad_in, - mode) - - return None, None, grad_in, None, None, None diff --git a/spaces/Rothfeld/textual-inversion-init-token/README.md b/spaces/Rothfeld/textual-inversion-init-token/README.md deleted file mode 100644 index 293d4821ffe1c40c8580a45536e64335fae28e1c..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/textual-inversion-init-token/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Textual Inversion Init Token -emoji: 🐢 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SaintPepe/google-ddpm-church-256/README.md b/spaces/SaintPepe/google-ddpm-church-256/README.md deleted file mode 100644 index d9b798bf0882e566271e885dcb69398d057cf085..0000000000000000000000000000000000000000 --- a/spaces/SaintPepe/google-ddpm-church-256/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Google Ddpm Church 256 -emoji: 🏃 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sandiago21/speech-to-speech-translation-italian/README.md b/spaces/Sandiago21/speech-to-speech-translation-italian/README.md deleted file mode 100644 index 0a12d6207936fb17f45ce78aa9ca85787bd83c37..0000000000000000000000000000000000000000 --- a/spaces/Sandiago21/speech-to-speech-translation-italian/README.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: speech-to-speech-translation-italian -app_file: app.py -sdk: gradio -sdk_version: 3.36.0 ---- diff --git a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/agent.py b/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/agent.py deleted file mode 100644 index 17b9d4f51210f8244a4ac9c174d5b1dfd36c714e..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/AI-Agent-with-Google-Search-APIs/src/agent.py +++ /dev/null @@ -1,81 +0,0 @@ -from typing import Optional -from langchain.embeddings import OpenAIEmbeddings -from langchain import LLMChain, PromptTemplate -from langchain.vectorstores import FAISS -from langchain.docstore import InMemoryDocstore -from src.baby_agi import BabyAGI -from langchain.agents import ZeroShotAgent, Tool -from langchain import OpenAI, 
SerpAPIWrapper, LLMChain -from constants import ( - EMBEDDING_MODEL_NAME, - EMBEDDING_SIZE, - TODO_CHAIN_MODEL_NAME, - BABY_AGI_MODEL_NAME -) - - -def run_agent( - user_input, - num_iterations, - baby_agi_model=BABY_AGI_MODEL_NAME, - todo_chaining_model=TODO_CHAIN_MODEL_NAME, - embedding_model=EMBEDDING_MODEL_NAME - ): - - # Define your embedding model - embeddings_model = OpenAIEmbeddings(model=embedding_model) - # Initialize the vectorstore as empty - import faiss - - embedding_size = EMBEDDING_SIZE - index = faiss.IndexFlatL2(embedding_size) - vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}) - - todo_prompt = PromptTemplate.from_template( - "You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}" - ) - todo_chain = LLMChain( - llm=OpenAI(temperature=0, model_name=todo_chaining_model), - prompt=todo_prompt - ) - search = SerpAPIWrapper() - tools = [ - Tool( - name="Search", - func=search.run, - description="useful for when you need to answer questions about current events", - ), - Tool( - name="TODO", - func=todo_chain.run, - description="useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!", - ), - ] - - prefix = """You are an AI who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.""" - suffix = """Question: {task} - {agent_scratchpad}""" - - prompt = ZeroShotAgent.create_prompt( - tools, - prefix=prefix, - suffix=suffix, - input_variables=["objective", "task", "context", "agent_scratchpad"], - ) - - OBJECTIVE = user_input - llm = OpenAI(temperature=0, model_name=baby_agi_model) - # Logging of LLMChains - verbose = False - # If None, will keep on going forever. Customize the number of loops you want it to go through. 
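-    # max_iterations bounds how many task loops BabyAGI runs; BabyAGI.from_llm
-    # below assembles the loop from the agent prompt, tools, LLM, and FAISS
-    # vectorstore defined above (implementation in src.baby_agi, not shown here).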
- max_iterations: Optional[int] = num_iterations - baby_agi = BabyAGI.from_llm( - prompt=prompt, - tools=tools, - llm=llm, - vectorstore=vectorstore, - verbose=verbose, - max_iterations=max_iterations - ) - if (user_input): - baby_agi({"objective": OBJECTIVE}) diff --git a/spaces/StarbucksCN/starbucks_doc/core/lifecycle.py b/spaces/StarbucksCN/starbucks_doc/core/lifecycle.py deleted file mode 100644 index 9c9bb50b0ff451933a17c62284ffa33c69a3ccf0..0000000000000000000000000000000000000000 --- a/spaces/StarbucksCN/starbucks_doc/core/lifecycle.py +++ /dev/null @@ -1,184 +0,0 @@ -import enum -from abc import ABC, abstractmethod -from typing import TypeVar, Optional - -from core import logger_factory - - -class Initializable(ABC): - @abstractmethod - def initialize(self) -> None: - pass - - -class Startable(ABC): - @abstractmethod - def start(self) -> None: - pass - - -class Stoppable(ABC): - @abstractmethod - def stop(self) -> None: - pass - - -class Disposable(ABC): - @abstractmethod - def dispose(self) -> None: - pass - - -class LifecycleAware(ABC): - def __init__(self, state: "LifecycleState") -> None: - """ - Args: - state(LifecycleState): lifecycle state - """ - self.state = state - - def get_lifecycle_state(self) -> "LifecycleState": - return self.state - - -class Lifecycle(Initializable, Startable, Stoppable, Disposable, LifecycleAware, ABC): - def __init__(self) -> None: - self.logger = logger_factory.get_logger(self.__class__.__name__) - self.lifecycle_state = LifecycleState(lifecycle=self) - - def initialize(self) -> None: - if not self.lifecycle_state.can_initialize(self.lifecycle_state.get_phase()): - self.logger.warning("[{}]cannot initialize".format(self.__class__.__name__)) - return - self.lifecycle_state.set_phase(LifecyclePhase.INITIALIZING) - self.do_init() - self.lifecycle_state.set_phase(LifecyclePhase.INITIALIZED) - - def start(self) -> None: - if not self.lifecycle_state.can_start(self.lifecycle_state.get_phase()): - self.logger.warning("[{}]cannot start".format(self.__class__.__name__)) - return - self.lifecycle_state.set_phase(LifecyclePhase.STARTING) - self.do_start() - self.lifecycle_state.set_phase(LifecyclePhase.STARTED) - - def stop(self) -> None: - if not self.lifecycle_state.can_stop(self.lifecycle_state.get_phase()): - self.logger.warning("[{}]cannot stop".format(self.__class__.__name__)) - return - self.lifecycle_state.set_phase(LifecyclePhase.STOPPING) - self.do_stop() - self.lifecycle_state.set_phase(LifecyclePhase.STOPPED) - - def dispose(self) -> None: - if not self.lifecycle_state.can_dispose(self.lifecycle_state.get_phase()): - self.logger.warning("[{}]cannot dispose".format(self.__class__.__name__)) - return - self.lifecycle_state.set_phase(LifecyclePhase.DISPOSING) - self.do_dispose() - self.lifecycle_state.set_phase(LifecyclePhase.DISPOSED) - - @abstractmethod - def do_init(self) -> None: - pass - - @abstractmethod - def do_start(self) -> None: - pass - - @abstractmethod - def do_stop(self) -> None: - pass - - @abstractmethod - def do_dispose(self) -> None: - pass - - -class LifecyclePhase(enum.Enum): - INITIALIZING = 1 - INITIALIZED = 2 - STARTING = 3 - STARTED = 4 - STOPPING = 5 - STOPPED = 6 - DISPOSING = 7 - DISPOSED = 8 - - -class LifecycleController(ABC): - def can_initialize(self, phase: Optional[LifecyclePhase]) -> bool: - return phase is None or phase == LifecyclePhase.DISPOSED - - def can_start(self, phase: Optional[LifecyclePhase]) -> bool: - return phase is not None and ( - phase == LifecyclePhase.INITIALIZED or phase == 
LifecyclePhase.STOPPED - ) - - def can_stop(self, phase: Optional[LifecyclePhase]) -> bool: - return phase is not None and phase == LifecyclePhase.STARTED - - def can_dispose(self, phase: Optional[LifecyclePhase]) -> bool: - return phase is not None and ( - phase == LifecyclePhase.INITIALIZED or phase == LifecyclePhase.STOPPED - ) - - -LS = TypeVar("LS", bound=Lifecycle) - - -class LifecycleState(LifecycleController, ABC): - phase: Optional[LifecyclePhase] - - def __init__(self, lifecycle: LS) -> None: - self.phase = None - self.prev_phase = None - self.lifecycle = lifecycle - self.logger = logger_factory.get_logger(__name__) - - def is_initializing(self) -> bool: - return self.phase == LifecyclePhase.INITIALIZING - - def is_initialized(self) -> bool: - return self.phase == LifecyclePhase.INITIALIZED - - def is_starting(self) -> bool: - return self.phase == LifecyclePhase.STARTING - - def is_started(self) -> bool: - return self.phase == LifecyclePhase.STARTED - - def is_stopping(self) -> bool: - return self.phase == LifecyclePhase.STOPPING - - def is_stopped(self) -> bool: - return self.phase == LifecyclePhase.STOPPED - - def is_disposing(self) -> bool: - return self.phase == LifecyclePhase.DISPOSING - - def is_disposed(self) -> bool: - return self.phase == LifecyclePhase.DISPOSED - - def get_phase(self) -> Optional[LifecyclePhase]: - return self.phase - - def set_phase(self, phase: Optional[LifecyclePhase]) -> None: - prev = "None" - if self.phase is not None: - prev = self.phase.name - current = "None" - if phase is not None: - current = phase.name - self.logger.info( - "[setPhaseName][{}]{} --> {}".format( - self.lifecycle.__class__.__name__, - prev, - current, - ) - ) - self.phase = phase - - def rollback(self, err: Exception) -> None: - self.phase = self.prev_phase - self.prev_phase = None diff --git a/spaces/Stearns/Soar/pysoarlib/__init__.py b/spaces/Stearns/Soar/pysoarlib/__init__.py deleted file mode 100644 index 82734df28f7af83dad2002a430f626c48eb04aaf..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/pysoarlib/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -""" Helper classes and functions for creating a soar agent and working with SML - -Depends on the Python_sml_ClientInterface, so make sure that SOAR_HOME is on the PYTHONPATH - -SoarClient and AgentConnector are used to create an agent -WMInterface is a standardized interface for adding/removing structures from working memory -SoarWME is a wrapper for creating working memory elements -SVSCommands will generate svs command strings for some common use cases - -Also adds helper methods to the Identifier class to access children more easily -(See IdentifierExtensions) - -""" -import Python_sml_ClientInterface as sml - -__all__ = ["WMInterface", "SoarWME", "SVSCommands", "AgentConnector", "SoarClient", "TimeConnector"] - -# Extend the sml Identifier class definition with additional utility methods -from .IdentifierExtensions import * -sml.Identifier.GetChildString = get_child_str -sml.Identifier.GetChildInt = get_child_int -sml.Identifier.GetChildFloat = get_child_float -sml.Identifier.GetChildId = get_child_id -sml.Identifier.GetAllChildIds = get_all_child_ids -sml.Identifier.GetAllChildValues = get_all_child_values -sml.Identifier.GetAllChildWmes = get_all_child_wmes -sml.Identifier.__lt__ = lambda self, other: self.GetIdentifierSymbol() < other.GetIdentifierSymbol() - -from .WMInterface import WMInterface -from .SoarWME import SoarWME -from .SVSCommands import SVSCommands -from .AgentConnector import 
AgentConnector -from .SoarClient import SoarClient -from .TimeConnector import TimeConnector - - diff --git a/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons.md b/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons.md deleted file mode 100644 index 09603bdc989bbf68b1f9f466acac5d8e442b8a01..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons.md +++ /dev/null @@ -1,66 +0,0 @@ -# Comparisons among different anime models - -[English](anime_comparisons.md) **|** [简体中文](anime_comparisons_CN.md) - -## Update News - -- 2022/04/24: Release **AnimeVideo-v3**. We have made the following improvements: - - **better naturalness** - - **Fewer artifacts** - - **more faithful to the original colors** - - **better texture restoration** - - **better background restoration** - -## Comparisons - -We have compared our RealESRGAN-AnimeVideo-v3 with the following methods. -Our RealESRGAN-AnimeVideo-v3 can achieve better results with faster inference speed. - -- [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) with the hyperparameters: `tile=0`, `noiselevel=2` -- [Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN): we use the [20220227](https://github.com/bilibili/ailab/releases/tag/Real-CUGAN-add-faster-low-memory-mode) version, the hyperparameters are: `cache_mode=0`, `tile=0`, `alpha=1`. -- our RealESRGAN-AnimeVideo-v3 - -## Results - -You may need to **zoom in** for comparing details, or **click the image** to see in the full size. Please note that the images -in the table below are the resized and cropped patches from the original images, you can download the original inputs and outputs from [Google Drive](https://drive.google.com/drive/folders/1bc_Hje1Nqop9NDkUvci2VACSjL7HZMRp?usp=sharing) . - -**More natural results, better background restoration** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![157083983-bec52c67-9a5e-4eed-afef-01fe6cd2af85_patch](https://user-images.githubusercontent.com/11482921/164452769-5d8cb4f8-1708-42d2-b941-f44a6f136feb.png) | ![](https://user-images.githubusercontent.com/11482921/164452767-c825cdec-f721-4ff1-aef1-fec41f146c4c.png) | ![](https://user-images.githubusercontent.com/11482921/164452755-3be50895-e3d4-432d-a7b9-9085c2a8e771.png) | ![](https://user-images.githubusercontent.com/11482921/164452771-be300656-379a-4323-a755-df8025a8c451.png) | -|![a0010_patch](https://user-images.githubusercontent.com/11482921/164454047-22eeb493-3fa9-4142-9fc2-6f2a1c074cd5.png) | ![](https://user-images.githubusercontent.com/11482921/164454046-d5e79f8f-00a0-4b55-bc39-295d0d69747a.png) | ![](https://user-images.githubusercontent.com/11482921/164454040-87886b11-9d08-48bd-862f-0d4aed72eb19.png) | ![](https://user-images.githubusercontent.com/11482921/164454055-73dc9f02-286e-4d5c-8f70-c13742e08f42.png) | -|![00000044_patch](https://user-images.githubusercontent.com/11482921/164451232-bacf64fc-e55a-44db-afbb-6b31ab0f8973.png) | ![](https://user-images.githubusercontent.com/11482921/164451318-f309b61a-75b8-4b74-b5f3-595725f1cf0b.png) | ![](https://user-images.githubusercontent.com/11482921/164451348-994f8a35-adbe-4a4b-9c61-feaa294af06a.png) | ![](https://user-images.githubusercontent.com/11482921/164451361-9b7d376e-6f75-4648-b752-542b44845d1c.png) | - -**Fewer artifacts, better detailed textures** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![00000053_patch](https://user-images.githubusercontent.com/11482921/164448411-148a7e5c-cfcd-4504-8bc7-e318eb883bb6.png) | ![](https://user-images.githubusercontent.com/11482921/164448633-dfc15224-b6d2-4403-a3c9-4bb819979364.png) | ![](https://user-images.githubusercontent.com/11482921/164448771-0d359509-5293-4d4c-8e3c-86a2a314ea88.png) | ![](https://user-images.githubusercontent.com/11482921/164448848-1a4ff99e-075b-4458-9db7-2c89e8160aa0.png) | -|![Disney_v4_22_018514_s2_patch](https://user-images.githubusercontent.com/11482921/164451898-83311cdf-bd3e-450f-b9f6-34d7fea3ab79.png) | ![](https://user-images.githubusercontent.com/11482921/164451894-6c56521c-6561-40d6-a3a5-8dde2c167b8a.png) | ![](https://user-images.githubusercontent.com/11482921/164451888-af9b47e3-39dc-4f3e-b0d7-d372d8191e2a.png) | ![](https://user-images.githubusercontent.com/11482921/164451901-31ca4dd4-9847-4baa-8cde-ad50f4053dcf.png) | -|![Japan_v2_0_007261_s2_patch](https://user-images.githubusercontent.com/11482921/164454578-73c77392-77de-49c5-b03c-c36631723192.png) | ![](https://user-images.githubusercontent.com/11482921/164454574-b1ede5f0-4520-4eaa-8f59-086751a34e62.png) | ![](https://user-images.githubusercontent.com/11482921/164454567-4cb3fdd8-6a2d-4016-85b2-a305a8ff80e4.png) | ![](https://user-images.githubusercontent.com/11482921/164454583-7f243f20-eca3-4500-ac43-eb058a4a101a.png) | -|![huluxiongdi_2_patch](https://user-images.githubusercontent.com/11482921/164453482-0726c842-337e-40ec-bf6c-f902ee956a8b.png) | ![](https://user-images.githubusercontent.com/11482921/164453480-71d5e091-5bfa-4c77-9c57-4e37f66ca0a3.png) | ![](https://user-images.githubusercontent.com/11482921/164453468-c295d3c9-3661-45f0-9ecd-406a1877f76e.png) | ![](https://user-images.githubusercontent.com/11482921/164453486-3091887c-587c-450e-b6fe-905cb518d57e.png) | - -**Other better results** -| Input | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![Japan_v2_1_128525_s1_patch](https://user-images.githubusercontent.com/11482921/164454933-67697f7c-b6ef-47dc-bfca-822a78af8acf.png) | ![](https://user-images.githubusercontent.com/11482921/164454931-9450de7c-f0b3-4638-9c1e-0668e0c41ef0.png) | ![](https://user-images.githubusercontent.com/11482921/164454926-ed746976-786d-41c5-8a83-7693cd774c3a.png) | ![](https://user-images.githubusercontent.com/11482921/164454936-8abdf0f0-fb30-40eb-8281-3b46c0bcb9ae.png) | -|![tianshuqitan_2_patch](https://user-images.githubusercontent.com/11482921/164456948-807c1476-90b6-4507-81da-cb986d01600c.png) | ![](https://user-images.githubusercontent.com/11482921/164456943-25e89de9-d7e5-4f61-a2e1-96786af6ae9e.png) | ![](https://user-images.githubusercontent.com/11482921/164456954-b468c447-59f5-4594-9693-3683e44ba3e6.png) | ![](https://user-images.githubusercontent.com/11482921/164456957-640f910c-3b04-407c-ac20-044d72e19735.png) | -|![00000051_patch](https://user-images.githubusercontent.com/11482921/164456044-e9a6b3fa-b24e-4eb7-acf9-1f7746551b1e.png) ![00000051_patch](https://user-images.githubusercontent.com/11482921/164456421-b67245b0-767d-4250-9105-80bbe507ecfc.png) | ![](https://user-images.githubusercontent.com/11482921/164456040-85763cf2-cb28-4ba3-abb6-1dbb48c55713.png) ![](https://user-images.githubusercontent.com/11482921/164456419-59cf342e-bc1e-4044-868c-e1090abad313.png) | ![](https://user-images.githubusercontent.com/11482921/164456031-4244bb7b-8649-4e01-86f4-40c2099c5afd.png) ![](https://user-images.githubusercontent.com/11482921/164456411-b6afcbe9-c054-448d-a6df-96d3ba3047f8.png) | ![](https://user-images.githubusercontent.com/11482921/164456035-12e270be-fd52-46d4-b18a-3d3b680731fe.png) ![](https://user-images.githubusercontent.com/11482921/164456417-dcaa8b62-f497-427d-b2d2-f390f1200fb9.png) | -|![00000099_patch](https://user-images.githubusercontent.com/11482921/164455312-6411b6e1-5823-4131-a4b0-a6be8a9ae89f.png) | ![](https://user-images.githubusercontent.com/11482921/164455310-f2b99646-3a22-47a4-805b-dc451ac86ddb.png) | ![](https://user-images.githubusercontent.com/11482921/164455294-35471b42-2826-4451-b7ec-6de01344954c.png) | ![](https://user-images.githubusercontent.com/11482921/164455305-fa4c9758-564a-4081-8b4e-f11057a0404d.png) | -|![00000016_patch](https://user-images.githubusercontent.com/11482921/164455672-447353c9-2da2-4fcb-ba4a-7dd6b94c19c1.png) | ![](https://user-images.githubusercontent.com/11482921/164455669-df384631-baaa-42f8-9150-40f658471558.png) | ![](https://user-images.githubusercontent.com/11482921/164455657-68006bf0-138d-4981-aaca-8aa927d2f78a.png) | ![](https://user-images.githubusercontent.com/11482921/164455664-0342b93e-a62a-4b36-a90e-7118f3f1e45d.png) | - -## Inference Speed - -### PyTorch - -Note that we only report the **model** time, and ignore the IO time. 
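A hedged sketch of how such model-only throughput is typically measured in PyTorch (this is not the exact benchmark script used here; `model` and `dummy_input` stand in for a loaded network and a GPU tensor at the listed resolution):

```python
import time

import torch

with torch.no_grad():
    for _ in range(5):                   # warm-up iterations
        model(dummy_input)
    torch.cuda.synchronize()             # flush pending GPU work before timing
    start = time.time()
    for _ in range(100):
        model(dummy_input)
    torch.cuda.synchronize()             # make sure all 100 passes have finished
    fps = 100 / (time.time() - start)    # model-only FPS, IO excluded
```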
- -| GPU | Input Resolution | waifu2x | Real-CUGAN | RealESRGAN-AnimeVideo-v3 -| :---: | :---: | :---: | :---: | :---: | -| V100 | 1921 x 1080 | - | 3.4 fps | **10.0** fps | -| V100 | 1280 x 720 | - | 7.2 fps | **22.6** fps | -| V100 | 640 x 480 | - | 24.4 fps | **65.9** fps | - -### ncnn - -- [ ] TODO diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/print_argv.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/print_argv.py deleted file mode 100644 index 4ec9e2799ede8f83053fbe20df50b0ee097dda77..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/print_argv.py +++ /dev/null @@ -1,3 +0,0 @@ -import sys - -print(sys.argv[1:]) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin.py deleted file mode 100644 index 39bbb1feec64f76705ba32c46f19f89f71be2ca7..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin.py +++ /dev/null @@ -1,259 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - - -""" -This file registers pre-defined datasets at hard-coded paths, and their metadata. - -We hard-code metadata for common datasets. This will enable: -1. Consistency check when loading the datasets -2. Use models on these standard datasets directly and run demos, - without having to download the dataset annotations - -We hard-code some paths to the dataset that's assumed to -exist in "./datasets/". - -Users SHOULD NOT use this file to create new dataset / metadata for new dataset. -To add new dataset, refer to the tutorial "docs/DATASETS.md". 
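As an illustrative sketch only (the import path is assumed for this vendored copy; the dataset name and both paths are placeholders), user code registers a new COCO-format dataset through the public API rather than by editing this file:

    from annotator.oneformer.detectron2.data.datasets import register_coco_instances

    register_coco_instances(
        "my_dataset_train",              # placeholder dataset name
        {},                              # optional metadata dict
        "path/to/annotations.json",      # COCO-format annotation file
        "path/to/images",                # image root directory
    )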
-""" - -import os - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog - -from .builtin_meta import ADE20K_SEM_SEG_CATEGORIES, _get_builtin_metadata -from .cityscapes import load_cityscapes_instances, load_cityscapes_semantic -from .cityscapes_panoptic import register_all_cityscapes_panoptic -from .coco import load_sem_seg, register_coco_instances -from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated -from .lvis import get_lvis_instances_meta, register_lvis_instances -from .pascal_voc import register_pascal_voc - -# ==== Predefined datasets and splits for COCO ========== - -_PREDEFINED_SPLITS_COCO = {} -_PREDEFINED_SPLITS_COCO["coco"] = { - "coco_2014_train": ("coco/train2014", "coco/annotations/instances_train2014.json"), - "coco_2014_val": ("coco/val2014", "coco/annotations/instances_val2014.json"), - "coco_2014_minival": ("coco/val2014", "coco/annotations/instances_minival2014.json"), - "coco_2014_valminusminival": ( - "coco/val2014", - "coco/annotations/instances_valminusminival2014.json", - ), - "coco_2017_train": ("coco/train2017", "coco/annotations/instances_train2017.json"), - "coco_2017_val": ("coco/val2017", "coco/annotations/instances_val2017.json"), - "coco_2017_test": ("coco/test2017", "coco/annotations/image_info_test2017.json"), - "coco_2017_test-dev": ("coco/test2017", "coco/annotations/image_info_test-dev2017.json"), - "coco_2017_val_100": ("coco/val2017", "coco/annotations/instances_val2017_100.json"), -} - -_PREDEFINED_SPLITS_COCO["coco_person"] = { - "keypoints_coco_2014_train": ( - "coco/train2014", - "coco/annotations/person_keypoints_train2014.json", - ), - "keypoints_coco_2014_val": ("coco/val2014", "coco/annotations/person_keypoints_val2014.json"), - "keypoints_coco_2014_minival": ( - "coco/val2014", - "coco/annotations/person_keypoints_minival2014.json", - ), - "keypoints_coco_2014_valminusminival": ( - "coco/val2014", - "coco/annotations/person_keypoints_valminusminival2014.json", - ), - "keypoints_coco_2017_train": ( - "coco/train2017", - "coco/annotations/person_keypoints_train2017.json", - ), - "keypoints_coco_2017_val": ("coco/val2017", "coco/annotations/person_keypoints_val2017.json"), - "keypoints_coco_2017_val_100": ( - "coco/val2017", - "coco/annotations/person_keypoints_val2017_100.json", - ), -} - - -_PREDEFINED_SPLITS_COCO_PANOPTIC = { - "coco_2017_train_panoptic": ( - # This is the original panoptic annotation directory - "coco/panoptic_train2017", - "coco/annotations/panoptic_train2017.json", - # This directory contains semantic annotations that are - # converted from panoptic annotations. - # It is used by PanopticFPN. - # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py - # to create these directories. - "coco/panoptic_stuff_train2017", - ), - "coco_2017_val_panoptic": ( - "coco/panoptic_val2017", - "coco/annotations/panoptic_val2017.json", - "coco/panoptic_stuff_val2017", - ), - "coco_2017_val_100_panoptic": ( - "coco/panoptic_val2017_100", - "coco/annotations/panoptic_val2017_100.json", - "coco/panoptic_stuff_val2017_100", - ), -} - - -def register_all_coco(root): - for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items(): - for key, (image_root, json_file) in splits_per_dataset.items(): - # Assume pre-defined datasets live in `./datasets`. 
- register_coco_instances( - key, - _get_builtin_metadata(dataset_name), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - for ( - prefix, - (panoptic_root, panoptic_json, semantic_root), - ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items(): - prefix_instances = prefix[: -len("_panoptic")] - instances_meta = MetadataCatalog.get(prefix_instances) - image_root, instances_json = instances_meta.image_root, instances_meta.json_file - # The "separated" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic FPN - register_coco_panoptic_separated( - prefix, - _get_builtin_metadata("coco_panoptic_separated"), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - os.path.join(root, semantic_root), - instances_json, - ) - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic-DeepLab - register_coco_panoptic( - prefix, - _get_builtin_metadata("coco_panoptic_standard"), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - instances_json, - ) - - -# ==== Predefined datasets and splits for LVIS ========== - - -_PREDEFINED_SPLITS_LVIS = { - "lvis_v1": { - "lvis_v1_train": ("coco/", "lvis/lvis_v1_train.json"), - "lvis_v1_val": ("coco/", "lvis/lvis_v1_val.json"), - "lvis_v1_test_dev": ("coco/", "lvis/lvis_v1_image_info_test_dev.json"), - "lvis_v1_test_challenge": ("coco/", "lvis/lvis_v1_image_info_test_challenge.json"), - }, - "lvis_v0.5": { - "lvis_v0.5_train": ("coco/", "lvis/lvis_v0.5_train.json"), - "lvis_v0.5_val": ("coco/", "lvis/lvis_v0.5_val.json"), - "lvis_v0.5_val_rand_100": ("coco/", "lvis/lvis_v0.5_val_rand_100.json"), - "lvis_v0.5_test": ("coco/", "lvis/lvis_v0.5_image_info_test.json"), - }, - "lvis_v0.5_cocofied": { - "lvis_v0.5_train_cocofied": ("coco/", "lvis/lvis_v0.5_train_cocofied.json"), - "lvis_v0.5_val_cocofied": ("coco/", "lvis/lvis_v0.5_val_cocofied.json"), - }, -} - - -def register_all_lvis(root): - for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_LVIS.items(): - for key, (image_root, json_file) in splits_per_dataset.items(): - register_lvis_instances( - key, - get_lvis_instances_meta(dataset_name), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -# ==== Predefined splits for raw cityscapes images =========== -_RAW_CITYSCAPES_SPLITS = { - "cityscapes_fine_{task}_train": ("cityscapes/leftImg8bit/train/", "cityscapes/gtFine/train/"), - "cityscapes_fine_{task}_val": ("cityscapes/leftImg8bit/val/", "cityscapes/gtFine/val/"), - "cityscapes_fine_{task}_test": ("cityscapes/leftImg8bit/test/", "cityscapes/gtFine/test/"), -} - - -def register_all_cityscapes(root): - for key, (image_dir, gt_dir) in _RAW_CITYSCAPES_SPLITS.items(): - meta = _get_builtin_metadata("cityscapes") - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - - inst_key = key.format(task="instance_seg") - DatasetCatalog.register( - inst_key, - lambda x=image_dir, y=gt_dir: load_cityscapes_instances( - x, y, from_json=True, to_polygons=True - ), - ) - MetadataCatalog.get(inst_key).set( - image_dir=image_dir, gt_dir=gt_dir, evaluator_type="cityscapes_instance", **meta - ) - - sem_key = key.format(task="sem_seg") - DatasetCatalog.register( - sem_key, lambda x=image_dir, y=gt_dir: load_cityscapes_semantic(x, y) - ) - MetadataCatalog.get(sem_key).set( - image_dir=image_dir, - gt_dir=gt_dir, - evaluator_type="cityscapes_sem_seg", - 
ignore_label=255, - **meta, - ) - - -# ==== Predefined splits for PASCAL VOC =========== -def register_all_pascal_voc(root): - SPLITS = [ - ("voc_2007_trainval", "VOC2007", "trainval"), - ("voc_2007_train", "VOC2007", "train"), - ("voc_2007_val", "VOC2007", "val"), - ("voc_2007_test", "VOC2007", "test"), - ("voc_2012_trainval", "VOC2012", "trainval"), - ("voc_2012_train", "VOC2012", "train"), - ("voc_2012_val", "VOC2012", "val"), - ] - for name, dirname, split in SPLITS: - year = 2007 if "2007" in name else 2012 - register_pascal_voc(name, os.path.join(root, dirname), split, year) - MetadataCatalog.get(name).evaluator_type = "pascal_voc" - - -def register_all_ade20k(root): - root = os.path.join(root, "ADEChallengeData2016") - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(root, "images", dirname) - gt_dir = os.path.join(root, "annotations_detectron2", dirname) - name = f"ade20k_sem_seg_{name}" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - stuff_classes=ADE20K_SEM_SEG_CATEGORIES[:], - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - ) - - -# True for open source; -# Internally at fb, we register them elsewhere -if __name__.endswith(".builtin"): - # Assume pre-defined datasets live in `./datasets`. - _root = os.path.expanduser(os.getenv("DETECTRON2_DATASETS", "datasets")) - register_all_coco(_root) - register_all_lvis(_root) - register_all_cityscapes(_root) - register_all_cityscapes_panoptic(_root) - register_all_pascal_voc(_root) - register_all_ade20k(_root) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/comm.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/comm.py deleted file mode 100644 index a9ea9a9f578c5704d1e7ff563ef156e9133ab465..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/comm.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -This file contains primitives for multi-gpu communication. -This is useful when doing distributed training. -""" - -import functools -import numpy as np -import torch -import torch.distributed as dist - -_LOCAL_PROCESS_GROUP = None -_MISSING_LOCAL_PG_ERROR = ( - "Local process group is not yet created! Please use detectron2's `launch()` " - "to start processes and initialize pytorch process group. If you need to start " - "processes in other ways, please call comm.create_local_process_group(" - "num_workers_per_machine) after calling torch.distributed.init_process_group()." -) - - -def get_world_size() -> int: - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank() -> int: - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -@functools.lru_cache() -def create_local_process_group(num_workers_per_machine: int) -> None: - """ - Create a process group that contains ranks within the same machine. - - Detectron2's launch() in engine/launch.py will call this function. If you start - workers without launch(), you'll have to also call this. Otherwise utilities - like `get_local_rank()` will not work. - - This function contains a barrier. All processes must call it together. - - Args: - num_workers_per_machine: the number of worker processes per machine. 
Typically - the number of GPUs. - """ - global _LOCAL_PROCESS_GROUP - assert _LOCAL_PROCESS_GROUP is None - assert get_world_size() % num_workers_per_machine == 0 - num_machines = get_world_size() // num_workers_per_machine - machine_rank = get_rank() // num_workers_per_machine - for i in range(num_machines): - ranks_on_i = list(range(i * num_workers_per_machine, (i + 1) * num_workers_per_machine)) - pg = dist.new_group(ranks_on_i) - if i == machine_rank: - _LOCAL_PROCESS_GROUP = pg - - -def get_local_process_group(): - """ - Returns: - A torch process group which only includes processes that are on the same - machine as the current process. This group can be useful for communication - within a machine, e.g. a per-machine SyncBN. - """ - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return _LOCAL_PROCESS_GROUP - - -def get_local_rank() -> int: - """ - Returns: - The rank of the current process within the local (per-machine) process group. - """ - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return dist.get_rank(group=_LOCAL_PROCESS_GROUP) - - -def get_local_size() -> int: - """ - Returns: - The size of the per-machine process group, - i.e. the number of processes per machine. - """ - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - assert _LOCAL_PROCESS_GROUP is not None, _MISSING_LOCAL_PG_ERROR - return dist.get_world_size(group=_LOCAL_PROCESS_GROUP) - - -def is_main_process() -> bool: - return get_rank() == 0 - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - if world_size == 1: - return - if dist.get_backend() == dist.Backend.NCCL: - # This argument is needed to avoid warnings. - # It's valid only for NCCL backend. - dist.barrier(device_ids=[torch.cuda.current_device()]) - else: - dist.barrier() - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - else: - return dist.group.WORLD - - -def all_gather(data, group=None): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: list of data gathered from each rank - """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() # use CPU group by default, to reduce GPU RAM usage. - world_size = dist.get_world_size(group) - if world_size == 1: - return [data] - - output = [None for _ in range(world_size)] - dist.all_gather_object(output, data, group=group) - return output - - -def gather(data, dst=0, group=None): - """ - Run gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - dst (int): destination rank - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: on dst, a list of data gathered from each rank. Otherwise, - an empty list. 
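    A minimal usage sketch (illustrative only; it assumes the process group was
    already initialized, e.g. through detectron2's `launch()`, and the payload
    values are made up):

        local_result = {"rank": get_rank(), "num_samples": 128}
        everywhere = all_gather(local_result)      # every rank receives the full list
        on_dst_only = gather(local_result, dst=0)  # non-empty only on rank 0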
- """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() - world_size = dist.get_world_size(group=group) - if world_size == 1: - return [data] - rank = dist.get_rank(group=group) - - if rank == dst: - output = [None for _ in range(world_size)] - dist.gather_object(data, output, dst=dst, group=group) - return output - else: - dist.gather_object(data, None, dst=dst, group=group) - return [] - - -def shared_random_seed(): - """ - Returns: - int: a random number that is the same across all workers. - If workers need a shared RNG, they can use this shared seed to - create one. - - All workers must call this function, otherwise it will deadlock. - """ - ints = np.random.randint(2**31) - all_ints = all_gather(ints) - return all_ints[0] - - -def reduce_dict(input_dict, average=True): - """ - Reduce the values in the dictionary from all processes so that process with rank - 0 has the reduced results. - - Args: - input_dict (dict): inputs to be reduced. All the values must be scalar CUDA Tensor. - average (bool): whether to do average or sum - - Returns: - a dict with the same keys as input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict diff --git a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/draw_cost.py b/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/draw_cost.py deleted file mode 100644 index 9599c2f7bddf447e59c516479191025e3df89d1a..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/scripts/eval_scripts/draw_cost.py +++ /dev/null @@ -1,84 +0,0 @@ -import os - -import matplotlib.pyplot as plt -from mmcv import Config -import numpy as np -from pytorch_lightning.utilities.seed import seed_everything - -from risk_biased.scene_dataset.scene import RandomScene, RandomSceneParams -from risk_biased.scene_dataset.scene_plotter import ScenePlotter -from risk_biased.utils.cost import ( - DistanceCostNumpy, - DistanceCostParams, - TTCCostNumpy, - TTCCostParams, -) - -if __name__ == "__main__": - working_dir = os.path.dirname(os.path.realpath(os.path.join(__file__, ".."))) - config_path = os.path.join( - working_dir, "..", "risk_biased", "config", "learning_config.py" - ) - config = Config.fromfile(config_path) - if config.seed is not None: - seed_everything(config.seed) - ped_speed = 2 - is_torch = False - - fig, ax = plt.subplots( - 3, 4, sharex=True, sharey=True, tight_layout=True, subplot_kw={"aspect": 1} - ) - - scene_params = RandomSceneParams.from_config(config) - scene_params.batch_size = 1 - for ii in range(9): - test_scene = RandomScene( - scene_params, - is_torch=is_torch, - ) - dist_cost = DistanceCostNumpy(DistanceCostParams.from_config(config)) - ttc_cost = TTCCostNumpy(TTCCostParams.from_config(config)) - - nx = 1000 - ny = 100 - x, y = np.meshgrid( - np.linspace(-test_scene.ego_length, test_scene.road_length, nx), - np.linspace(test_scene.bottom, test_scene.top, ny), - ) - - i = 2 - (int(ii >= 6) + int(ii >= 3)) - j = ii % 3 - vx = float(ii % 3 - 1) 
- vy = float((ii >= 6)) - float(ii <= 2) - print(f"horizontal velocity {vx}") - print(f"vertical velocity {vy}") - norm = np.maximum(np.sqrt(vx * vx + vy * vy), 1) - vx = vx / norm * np.ones([nx * ny, 1]) - vy = vy / norm * np.ones([nx * ny, 1]) - v_ped = ped_speed * np.stack((vx, vy), -1) - v_ego = np.array([[[test_scene.ego_ref_speed, 0]]]) - - p_init = np.stack((x, y), -1).reshape((nx * ny, 2)) - p_final = p_init + v_ped[:, 0, :] * test_scene.time_scene - len_traj = 30 - ped_trajs = np.linspace(p_init, p_final, len_traj, axis=1) - ego_traj = np.linspace( - [[0, 0]], - [test_scene.ego_ref_speed * test_scene.time_scene, 0], - len_traj, - axis=1, - ) - - cost, _ = ttc_cost(ego_traj, ped_trajs, v_ego, v_ped) - - cost = cost.reshape(ny, nx) - colorbar = ax[i][j].pcolormesh(x, y, cost, cmap="RdBu") - plotter = ScenePlotter(test_scene, ax=ax[i][j]) - plotter.plot_road() - - fig.subplots_adjust(wspace=0.1, hspace=0.1) - fig.tight_layout() - fig.colorbar(colorbar, ax=ax.ravel().tolist()) - for a in ax[:, -1]: - a.remove() - plt.show() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distro/distro.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distro/distro.py deleted file mode 100644 index 89e1868047225bbcdfe04bdc4bea3281bf91bc20..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distro/distro.py +++ /dev/null @@ -1,1399 +0,0 @@ -#!/usr/bin/env python -# Copyright 2015,2016,2017 Nir Cohen -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" -The ``distro`` package (``distro`` stands for Linux Distribution) provides -information about the Linux distribution it runs on, such as a reliable -machine-readable distro ID, or version information. - -It is the recommended replacement for Python's original -:py:func:`platform.linux_distribution` function, but it provides much more -functionality. An alternative implementation became necessary because Python -3.5 deprecated this function, and Python 3.8 removed it altogether. Its -predecessor function :py:func:`platform.dist` was already deprecated since -Python 2.6 and removed in Python 3.8. Still, there are many cases in which -access to OS distribution information is needed. See `Python issue 1322 -`_ for more information. 
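A short illustrative usage sketch (the commented outputs are examples, not guarantees):

.. sourcecode:: python

    import distro

    distro.id()                # e.g. "ubuntu"
    distro.version(best=True)  # e.g. "20.04"
    distro.name(pretty=True)   # e.g. "Ubuntu 20.04.6 LTS"
    distro.info()              # machine-readable summary dict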
-""" - -import argparse -import json -import logging -import os -import re -import shlex -import subprocess -import sys -import warnings -from typing import ( - Any, - Callable, - Dict, - Iterable, - Optional, - Sequence, - TextIO, - Tuple, - Type, -) - -try: - from typing import TypedDict -except ImportError: - # Python 3.7 - TypedDict = dict - -__version__ = "1.8.0" - - -class VersionDict(TypedDict): - major: str - minor: str - build_number: str - - -class InfoDict(TypedDict): - id: str - version: str - version_parts: VersionDict - like: str - codename: str - - -_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc") -_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib") -_OS_RELEASE_BASENAME = "os-release" - -#: Translation table for normalizing the "ID" attribute defined in os-release -#: files, for use by the :func:`distro.id` method. -#: -#: * Key: Value as defined in the os-release file, translated to lower case, -#: with blanks translated to underscores. -#: -#: * Value: Normalized value. -NORMALIZED_OS_ID = { - "ol": "oracle", # Oracle Linux - "opensuse-leap": "opensuse", # Newer versions of OpenSuSE report as opensuse-leap -} - -#: Translation table for normalizing the "Distributor ID" attribute returned by -#: the lsb_release command, for use by the :func:`distro.id` method. -#: -#: * Key: Value as returned by the lsb_release command, translated to lower -#: case, with blanks translated to underscores. -#: -#: * Value: Normalized value. -NORMALIZED_LSB_ID = { - "enterpriseenterpriseas": "oracle", # Oracle Enterprise Linux 4 - "enterpriseenterpriseserver": "oracle", # Oracle Linux 5 - "redhatenterpriseworkstation": "rhel", # RHEL 6, 7 Workstation - "redhatenterpriseserver": "rhel", # RHEL 6, 7 Server - "redhatenterprisecomputenode": "rhel", # RHEL 6 ComputeNode -} - -#: Translation table for normalizing the distro ID derived from the file name -#: of distro release files, for use by the :func:`distro.id` method. -#: -#: * Key: Value as derived from the file name of a distro release file, -#: translated to lower case, with blanks translated to underscores. -#: -#: * Value: Normalized value. -NORMALIZED_DISTRO_ID = { - "redhat": "rhel", # RHEL 6.x, 7.x -} - -# Pattern for content of distro release file (reversed) -_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile( - r"(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)" -) - -# Pattern for base file name of distro release file -_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$") - -# Base file names to be looked up for if _UNIXCONFDIR is not readable. -_DISTRO_RELEASE_BASENAMES = [ - "SuSE-release", - "arch-release", - "base-release", - "centos-release", - "fedora-release", - "gentoo-release", - "mageia-release", - "mandrake-release", - "mandriva-release", - "mandrivalinux-release", - "manjaro-release", - "oracle-release", - "redhat-release", - "rocky-release", - "sl-release", - "slackware-version", -] - -# Base file names to be ignored when searching for distro release file -_DISTRO_RELEASE_IGNORE_BASENAMES = ( - "debian_version", - "lsb-release", - "oem-release", - _OS_RELEASE_BASENAME, - "system-release", - "plesk-release", - "iredmail-release", -) - - -def linux_distribution(full_distribution_name: bool = True) -> Tuple[str, str, str]: - """ - .. deprecated:: 1.6.0 - - :func:`distro.linux_distribution()` is deprecated. It should only be - used as a compatibility shim with Python's - :py:func:`platform.linux_distribution()`. 
Please use :func:`distro.id`, - :func:`distro.version` and :func:`distro.name` instead. - - Return information about the current OS distribution as a tuple - ``(id_name, version, codename)`` with items as follows: - - * ``id_name``: If *full_distribution_name* is false, the result of - :func:`distro.id`. Otherwise, the result of :func:`distro.name`. - - * ``version``: The result of :func:`distro.version`. - - * ``codename``: The extra item (usually in parentheses) after the - os-release version number, or the result of :func:`distro.codename`. - - The interface of this function is compatible with the original - :py:func:`platform.linux_distribution` function, supporting a subset of - its parameters. - - The data it returns may not exactly be the same, because it uses more data - sources than the original function, and that may lead to different data if - the OS distribution is not consistent across multiple data sources it - provides (there are indeed such distributions ...). - - Another reason for differences is the fact that the :func:`distro.id` - method normalizes the distro ID string to a reliable machine-readable value - for a number of popular OS distributions. - """ - warnings.warn( - "distro.linux_distribution() is deprecated. It should only be used as a " - "compatibility shim with Python's platform.linux_distribution(). Please use " - "distro.id(), distro.version() and distro.name() instead.", - DeprecationWarning, - stacklevel=2, - ) - return _distro.linux_distribution(full_distribution_name) - - -def id() -> str: - """ - Return the distro ID of the current distribution, as a - machine-readable string. - - For a number of OS distributions, the returned distro ID value is - *reliable*, in the sense that it is documented and that it does not change - across releases of the distribution. - - This package maintains the following reliable distro ID values: - - ============== ========================================= - Distro ID Distribution - ============== ========================================= - "ubuntu" Ubuntu - "debian" Debian - "rhel" RedHat Enterprise Linux - "centos" CentOS - "fedora" Fedora - "sles" SUSE Linux Enterprise Server - "opensuse" openSUSE - "amzn" Amazon Linux - "arch" Arch Linux - "buildroot" Buildroot - "cloudlinux" CloudLinux OS - "exherbo" Exherbo Linux - "gentoo" GenToo Linux - "ibm_powerkvm" IBM PowerKVM - "kvmibm" KVM for IBM z Systems - "linuxmint" Linux Mint - "mageia" Mageia - "mandriva" Mandriva Linux - "parallels" Parallels - "pidora" Pidora - "raspbian" Raspbian - "oracle" Oracle Linux (and Oracle Enterprise Linux) - "scientific" Scientific Linux - "slackware" Slackware - "xenserver" XenServer - "openbsd" OpenBSD - "netbsd" NetBSD - "freebsd" FreeBSD - "midnightbsd" MidnightBSD - "rocky" Rocky Linux - "aix" AIX - "guix" Guix System - ============== ========================================= - - If you have a need to get distros for reliable IDs added into this set, - or if you find that the :func:`distro.id` function returns a different - distro ID for one of the listed distros, please create an issue in the - `distro issue tracker`_. - - **Lookup hierarchy and transformations:** - - First, the ID is obtained from the following sources, in the specified - order. 
The first available and non-empty value is used: - - * the value of the "ID" attribute of the os-release file, - - * the value of the "Distributor ID" attribute returned by the lsb_release - command, - - * the first part of the file name of the distro release file, - - The so determined ID value then passes the following transformations, - before it is returned by this method: - - * it is translated to lower case, - - * blanks (which should not be there anyway) are translated to underscores, - - * a normalization of the ID is performed, based upon - `normalization tables`_. The purpose of this normalization is to ensure - that the ID is as reliable as possible, even across incompatible changes - in the OS distributions. A common reason for an incompatible change is - the addition of an os-release file, or the addition of the lsb_release - command, with ID values that differ from what was previously determined - from the distro release file name. - """ - return _distro.id() - - -def name(pretty: bool = False) -> str: - """ - Return the name of the current OS distribution, as a human-readable - string. - - If *pretty* is false, the name is returned without version or codename. - (e.g. "CentOS Linux") - - If *pretty* is true, the version and codename are appended. - (e.g. "CentOS Linux 7.1.1503 (Core)") - - **Lookup hierarchy:** - - The name is obtained from the following sources, in the specified order. - The first available and non-empty value is used: - - * If *pretty* is false: - - - the value of the "NAME" attribute of the os-release file, - - - the value of the "Distributor ID" attribute returned by the lsb_release - command, - - - the value of the "" field of the distro release file. - - * If *pretty* is true: - - - the value of the "PRETTY_NAME" attribute of the os-release file, - - - the value of the "Description" attribute returned by the lsb_release - command, - - - the value of the "" field of the distro release file, appended - with the value of the pretty version ("" and "" - fields) of the distro release file, if available. - """ - return _distro.name(pretty) - - -def version(pretty: bool = False, best: bool = False) -> str: - """ - Return the version of the current OS distribution, as a human-readable - string. - - If *pretty* is false, the version is returned without codename (e.g. - "7.0"). - - If *pretty* is true, the codename in parenthesis is appended, if the - codename is non-empty (e.g. "7.0 (Maipo)"). - - Some distributions provide version numbers with different precisions in - the different sources of distribution information. Examining the different - sources in a fixed priority order does not always yield the most precise - version (e.g. for Debian 8.2, or CentOS 7.1). - - Some other distributions may not provide this kind of information. In these - cases, an empty string would be returned. This behavior can be observed - with rolling releases distributions (e.g. Arch Linux). - - The *best* parameter can be used to control the approach for the returned - version: - - If *best* is false, the first non-empty version number in priority order of - the examined sources is returned. - - If *best* is true, the most precise version number out of all examined - sources is returned. - - **Lookup hierarchy:** - - In all cases, the version number is obtained from the following sources. 
- If *best* is false, this order represents the priority order: - - * the value of the "VERSION_ID" attribute of the os-release file, - * the value of the "Release" attribute returned by the lsb_release - command, - * the version number parsed from the "" field of the first line - of the distro release file, - * the version number parsed from the "PRETTY_NAME" attribute of the - os-release file, if it follows the format of the distro release files. - * the version number parsed from the "Description" attribute returned by - the lsb_release command, if it follows the format of the distro release - files. - """ - return _distro.version(pretty, best) - - -def version_parts(best: bool = False) -> Tuple[str, str, str]: - """ - Return the version of the current OS distribution as a tuple - ``(major, minor, build_number)`` with items as follows: - - * ``major``: The result of :func:`distro.major_version`. - - * ``minor``: The result of :func:`distro.minor_version`. - - * ``build_number``: The result of :func:`distro.build_number`. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.version_parts(best) - - -def major_version(best: bool = False) -> str: - """ - Return the major version of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The major version is the first - part of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.major_version(best) - - -def minor_version(best: bool = False) -> str: - """ - Return the minor version of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The minor version is the second - part of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.minor_version(best) - - -def build_number(best: bool = False) -> str: - """ - Return the build number of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The build number is the third part - of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.build_number(best) - - -def like() -> str: - """ - Return a space-separated list of distro IDs of distributions that are - closely related to the current OS distribution in regards to packaging - and programming interfaces, for example distributions the current - distribution is a derivative from. - - **Lookup hierarchy:** - - This information item is only provided by the os-release file. - For details, see the description of the "ID_LIKE" attribute in the - `os-release man page - `_. - """ - return _distro.like() - - -def codename() -> str: - """ - Return the codename for the release of the current OS distribution, - as a string. - - If the distribution does not have a codename, an empty string is returned. - - Note that the returned codename is not always really a codename. For - example, openSUSE returns "x86_64". This function does not handle such - cases in any special way and just returns the string it finds, if any. - - **Lookup hierarchy:** - - * the codename within the "VERSION" attribute of the os-release file, if - provided, - - * the value of the "Codename" attribute returned by the lsb_release - command, - - * the value of the "" field of the distro release file. 
- """ - return _distro.codename() - - -def info(pretty: bool = False, best: bool = False) -> InfoDict: - """ - Return certain machine-readable information items about the current OS - distribution in a dictionary, as shown in the following example: - - .. sourcecode:: python - - { - 'id': 'rhel', - 'version': '7.0', - 'version_parts': { - 'major': '7', - 'minor': '0', - 'build_number': '' - }, - 'like': 'fedora', - 'codename': 'Maipo' - } - - The dictionary structure and keys are always the same, regardless of which - information items are available in the underlying data sources. The values - for the various keys are as follows: - - * ``id``: The result of :func:`distro.id`. - - * ``version``: The result of :func:`distro.version`. - - * ``version_parts -> major``: The result of :func:`distro.major_version`. - - * ``version_parts -> minor``: The result of :func:`distro.minor_version`. - - * ``version_parts -> build_number``: The result of - :func:`distro.build_number`. - - * ``like``: The result of :func:`distro.like`. - - * ``codename``: The result of :func:`distro.codename`. - - For a description of the *pretty* and *best* parameters, see the - :func:`distro.version` method. - """ - return _distro.info(pretty, best) - - -def os_release_info() -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information items - from the os-release file data source of the current OS distribution. - - See `os-release file`_ for details about these information items. - """ - return _distro.os_release_info() - - -def lsb_release_info() -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information items - from the lsb_release command data source of the current OS distribution. - - See `lsb_release command output`_ for details about these information - items. - """ - return _distro.lsb_release_info() - - -def distro_release_info() -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information items - from the distro release file data source of the current OS distribution. - - See `distro release file`_ for details about these information items. - """ - return _distro.distro_release_info() - - -def uname_info() -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information items - from the distro release file data source of the current OS distribution. - """ - return _distro.uname_info() - - -def os_release_attr(attribute: str) -> str: - """ - Return a single named information item from the os-release file data source - of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `os-release file`_ for details about these information items. - """ - return _distro.os_release_attr(attribute) - - -def lsb_release_attr(attribute: str) -> str: - """ - Return a single named information item from the lsb_release command output - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `lsb_release command output`_ for details about these information - items. 
- """ - return _distro.lsb_release_attr(attribute) - - -def distro_release_attr(attribute: str) -> str: - """ - Return a single named information item from the distro release file - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `distro release file`_ for details about these information items. - """ - return _distro.distro_release_attr(attribute) - - -def uname_attr(attribute: str) -> str: - """ - Return a single named information item from the distro release file - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - """ - return _distro.uname_attr(attribute) - - -try: - from functools import cached_property -except ImportError: - # Python < 3.8 - class cached_property: # type: ignore - """A version of @property which caches the value. On access, it calls the - underlying function and sets the value in `__dict__` so future accesses - will not re-call the property. - """ - - def __init__(self, f: Callable[[Any], Any]) -> None: - self._fname = f.__name__ - self._f = f - - def __get__(self, obj: Any, owner: Type[Any]) -> Any: - assert obj is not None, f"call {self._fname} on an instance" - ret = obj.__dict__[self._fname] = self._f(obj) - return ret - - -class LinuxDistribution: - """ - Provides information about a OS distribution. - - This package creates a private module-global instance of this class with - default initialization arguments, that is used by the - `consolidated accessor functions`_ and `single source accessor functions`_. - By using default initialization arguments, that module-global instance - returns data about the current OS distribution (i.e. the distro this - package runs on). - - Normally, it is not necessary to create additional instances of this class. - However, in situations where control is needed over the exact data sources - that are used, instances of this class can be created with a specific - distro release file, or a specific os-release file, or without invoking the - lsb_release command. - """ - - def __init__( - self, - include_lsb: Optional[bool] = None, - os_release_file: str = "", - distro_release_file: str = "", - include_uname: Optional[bool] = None, - root_dir: Optional[str] = None, - include_oslevel: Optional[bool] = None, - ) -> None: - """ - The initialization method of this class gathers information from the - available data sources, and stores that in private instance attributes. - Subsequent access to the information items uses these private instance - attributes, so that the data sources are read only once. - - Parameters: - - * ``include_lsb`` (bool): Controls whether the - `lsb_release command output`_ is included as a data source. - - If the lsb_release command is not available in the program execution - path, the data source for the lsb_release command will be empty. - - * ``os_release_file`` (string): The path name of the - `os-release file`_ that is to be used as a data source. - - An empty string (the default) will cause the default path name to - be used (see `os-release file`_ for details). - - If the specified or defaulted os-release file does not exist, the - data source for the os-release file will be empty. 
- - * ``distro_release_file`` (string): The path name of the - `distro release file`_ that is to be used as a data source. - - An empty string (the default) will cause a default search algorithm - to be used (see `distro release file`_ for details). - - If the specified distro release file does not exist, or if no default - distro release file can be found, the data source for the distro - release file will be empty. - - * ``include_uname`` (bool): Controls whether uname command output is - included as a data source. If the uname command is not available in - the program execution path the data source for the uname command will - be empty. - - * ``root_dir`` (string): The absolute path to the root directory to use - to find distro-related information files. Note that ``include_*`` - parameters must not be enabled in combination with ``root_dir``. - - * ``include_oslevel`` (bool): Controls whether (AIX) oslevel command - output is included as a data source. If the oslevel command is not - available in the program execution path the data source will be - empty. - - Public instance attributes: - - * ``os_release_file`` (string): The path name of the - `os-release file`_ that is actually used as a data source. The - empty string if no distro release file is used as a data source. - - * ``distro_release_file`` (string): The path name of the - `distro release file`_ that is actually used as a data source. The - empty string if no distro release file is used as a data source. - - * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter. - This controls whether the lsb information will be loaded. - - * ``include_uname`` (bool): The result of the ``include_uname`` - parameter. This controls whether the uname information will - be loaded. - - * ``include_oslevel`` (bool): The result of the ``include_oslevel`` - parameter. This controls whether (AIX) oslevel information will be - loaded. - - * ``root_dir`` (string): The result of the ``root_dir`` parameter. - The absolute path to the root directory to use to find distro-related - information files. - - Raises: - - * :py:exc:`ValueError`: Initialization parameters combination is not - supported. - - * :py:exc:`OSError`: Some I/O issue with an os-release file or distro - release file. - - * :py:exc:`UnicodeError`: A data source has unexpected characters or - uses an unexpected encoding. - """ - self.root_dir = root_dir - self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR - self.usr_lib_dir = ( - os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR - ) - - if os_release_file: - self.os_release_file = os_release_file - else: - etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME) - usr_lib_os_release_file = os.path.join( - self.usr_lib_dir, _OS_RELEASE_BASENAME - ) - - # NOTE: The idea is to respect order **and** have it set - # at all times for API backwards compatibility. 
- if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile( - usr_lib_os_release_file - ): - self.os_release_file = etc_dir_os_release_file - else: - self.os_release_file = usr_lib_os_release_file - - self.distro_release_file = distro_release_file or "" # updated later - - is_root_dir_defined = root_dir is not None - if is_root_dir_defined and (include_lsb or include_uname or include_oslevel): - raise ValueError( - "Including subprocess data sources from specific root_dir is disallowed" - " to prevent false information" - ) - self.include_lsb = ( - include_lsb if include_lsb is not None else not is_root_dir_defined - ) - self.include_uname = ( - include_uname if include_uname is not None else not is_root_dir_defined - ) - self.include_oslevel = ( - include_oslevel if include_oslevel is not None else not is_root_dir_defined - ) - - def __repr__(self) -> str: - """Return repr of all info""" - return ( - "LinuxDistribution(" - "os_release_file={self.os_release_file!r}, " - "distro_release_file={self.distro_release_file!r}, " - "include_lsb={self.include_lsb!r}, " - "include_uname={self.include_uname!r}, " - "include_oslevel={self.include_oslevel!r}, " - "root_dir={self.root_dir!r}, " - "_os_release_info={self._os_release_info!r}, " - "_lsb_release_info={self._lsb_release_info!r}, " - "_distro_release_info={self._distro_release_info!r}, " - "_uname_info={self._uname_info!r}, " - "_oslevel_info={self._oslevel_info!r})".format(self=self) - ) - - def linux_distribution( - self, full_distribution_name: bool = True - ) -> Tuple[str, str, str]: - """ - Return information about the OS distribution that is compatible - with Python's :func:`platform.linux_distribution`, supporting a subset - of its parameters. - - For details, see :func:`distro.linux_distribution`. - """ - return ( - self.name() if full_distribution_name else self.id(), - self.version(), - self._os_release_info.get("release_codename") or self.codename(), - ) - - def id(self) -> str: - """Return the distro ID of the OS distribution, as a string. - - For details, see :func:`distro.id`. - """ - - def normalize(distro_id: str, table: Dict[str, str]) -> str: - distro_id = distro_id.lower().replace(" ", "_") - return table.get(distro_id, distro_id) - - distro_id = self.os_release_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_OS_ID) - - distro_id = self.lsb_release_attr("distributor_id") - if distro_id: - return normalize(distro_id, NORMALIZED_LSB_ID) - - distro_id = self.distro_release_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_DISTRO_ID) - - distro_id = self.uname_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_DISTRO_ID) - - return "" - - def name(self, pretty: bool = False) -> str: - """ - Return the name of the OS distribution, as a string. - - For details, see :func:`distro.name`. - """ - name = ( - self.os_release_attr("name") - or self.lsb_release_attr("distributor_id") - or self.distro_release_attr("name") - or self.uname_attr("name") - ) - if pretty: - name = self.os_release_attr("pretty_name") or self.lsb_release_attr( - "description" - ) - if not name: - name = self.distro_release_attr("name") or self.uname_attr("name") - version = self.version(pretty=True) - if version: - name = f"{name} {version}" - return name or "" - - def version(self, pretty: bool = False, best: bool = False) -> str: - """ - Return the version of the OS distribution, as a string. - - For details, see :func:`distro.version`. 
- """ - versions = [ - self.os_release_attr("version_id"), - self.lsb_release_attr("release"), - self.distro_release_attr("version_id"), - self._parse_distro_release_content(self.os_release_attr("pretty_name")).get( - "version_id", "" - ), - self._parse_distro_release_content( - self.lsb_release_attr("description") - ).get("version_id", ""), - self.uname_attr("release"), - ] - if self.uname_attr("id").startswith("aix"): - # On AIX platforms, prefer oslevel command output. - versions.insert(0, self.oslevel_info()) - elif self.id() == "debian" or "debian" in self.like().split(): - # On Debian-like, add debian_version file content to candidates list. - versions.append(self._debian_version) - version = "" - if best: - # This algorithm uses the last version in priority order that has - # the best precision. If the versions are not in conflict, that - # does not matter; otherwise, using the last one instead of the - # first one might be considered a surprise. - for v in versions: - if v.count(".") > version.count(".") or version == "": - version = v - else: - for v in versions: - if v != "": - version = v - break - if pretty and version and self.codename(): - version = f"{version} ({self.codename()})" - return version - - def version_parts(self, best: bool = False) -> Tuple[str, str, str]: - """ - Return the version of the OS distribution, as a tuple of version - numbers. - - For details, see :func:`distro.version_parts`. - """ - version_str = self.version(best=best) - if version_str: - version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?") - matches = version_regex.match(version_str) - if matches: - major, minor, build_number = matches.groups() - return major, minor or "", build_number or "" - return "", "", "" - - def major_version(self, best: bool = False) -> str: - """ - Return the major version number of the current distribution. - - For details, see :func:`distro.major_version`. - """ - return self.version_parts(best)[0] - - def minor_version(self, best: bool = False) -> str: - """ - Return the minor version number of the current distribution. - - For details, see :func:`distro.minor_version`. - """ - return self.version_parts(best)[1] - - def build_number(self, best: bool = False) -> str: - """ - Return the build number of the current distribution. - - For details, see :func:`distro.build_number`. - """ - return self.version_parts(best)[2] - - def like(self) -> str: - """ - Return the IDs of distributions that are like the OS distribution. - - For details, see :func:`distro.like`. - """ - return self.os_release_attr("id_like") or "" - - def codename(self) -> str: - """ - Return the codename of the OS distribution. - - For details, see :func:`distro.codename`. - """ - try: - # Handle os_release specially since distros might purposefully set - # this to empty string to have no codename - return self._os_release_info["codename"] - except KeyError: - return ( - self.lsb_release_attr("codename") - or self.distro_release_attr("codename") - or "" - ) - - def info(self, pretty: bool = False, best: bool = False) -> InfoDict: - """ - Return certain machine-readable information about the OS - distribution. - - For details, see :func:`distro.info`. 
- """ - return dict( - id=self.id(), - version=self.version(pretty, best), - version_parts=dict( - major=self.major_version(best), - minor=self.minor_version(best), - build_number=self.build_number(best), - ), - like=self.like(), - codename=self.codename(), - ) - - def os_release_info(self) -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information - items from the os-release file data source of the OS distribution. - - For details, see :func:`distro.os_release_info`. - """ - return self._os_release_info - - def lsb_release_info(self) -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information - items from the lsb_release command data source of the OS - distribution. - - For details, see :func:`distro.lsb_release_info`. - """ - return self._lsb_release_info - - def distro_release_info(self) -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information - items from the distro release file data source of the OS - distribution. - - For details, see :func:`distro.distro_release_info`. - """ - return self._distro_release_info - - def uname_info(self) -> Dict[str, str]: - """ - Return a dictionary containing key-value pairs for the information - items from the uname command data source of the OS distribution. - - For details, see :func:`distro.uname_info`. - """ - return self._uname_info - - def oslevel_info(self) -> str: - """ - Return AIX' oslevel command output. - """ - return self._oslevel_info - - def os_release_attr(self, attribute: str) -> str: - """ - Return a single named information item from the os-release file data - source of the OS distribution. - - For details, see :func:`distro.os_release_attr`. - """ - return self._os_release_info.get(attribute, "") - - def lsb_release_attr(self, attribute: str) -> str: - """ - Return a single named information item from the lsb_release command - output data source of the OS distribution. - - For details, see :func:`distro.lsb_release_attr`. - """ - return self._lsb_release_info.get(attribute, "") - - def distro_release_attr(self, attribute: str) -> str: - """ - Return a single named information item from the distro release file - data source of the OS distribution. - - For details, see :func:`distro.distro_release_attr`. - """ - return self._distro_release_info.get(attribute, "") - - def uname_attr(self, attribute: str) -> str: - """ - Return a single named information item from the uname command - output data source of the OS distribution. - - For details, see :func:`distro.uname_attr`. - """ - return self._uname_info.get(attribute, "") - - @cached_property - def _os_release_info(self) -> Dict[str, str]: - """ - Get the information items from the specified os-release file. - - Returns: - A dictionary containing all information items. - """ - if os.path.isfile(self.os_release_file): - with open(self.os_release_file, encoding="utf-8") as release_file: - return self._parse_os_release_content(release_file) - return {} - - @staticmethod - def _parse_os_release_content(lines: TextIO) -> Dict[str, str]: - """ - Parse the lines of an os-release file. - - Parameters: - - * lines: Iterable through the lines in the os-release file. - Each line must be a unicode string or a UTF-8 encoded byte - string. - - Returns: - A dictionary containing all information items. 
- """ - props = {} - lexer = shlex.shlex(lines, posix=True) - lexer.whitespace_split = True - - tokens = list(lexer) - for token in tokens: - # At this point, all shell-like parsing has been done (i.e. - # comments processed, quotes and backslash escape sequences - # processed, multi-line values assembled, trailing newlines - # stripped, etc.), so the tokens are now either: - # * variable assignments: var=value - # * commands or their arguments (not allowed in os-release) - # Ignore any tokens that are not variable assignments - if "=" in token: - k, v = token.split("=", 1) - props[k.lower()] = v - - if "version" in props: - # extract release codename (if any) from version attribute - match = re.search(r"\((\D+)\)|,\s*(\D+)", props["version"]) - if match: - release_codename = match.group(1) or match.group(2) - props["codename"] = props["release_codename"] = release_codename - - if "version_codename" in props: - # os-release added a version_codename field. Use that in - # preference to anything else Note that some distros purposefully - # do not have code names. They should be setting - # version_codename="" - props["codename"] = props["version_codename"] - elif "ubuntu_codename" in props: - # Same as above but a non-standard field name used on older Ubuntus - props["codename"] = props["ubuntu_codename"] - - return props - - @cached_property - def _lsb_release_info(self) -> Dict[str, str]: - """ - Get the information items from the lsb_release command output. - - Returns: - A dictionary containing all information items. - """ - if not self.include_lsb: - return {} - try: - cmd = ("lsb_release", "-a") - stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL) - # Command not found or lsb_release returned error - except (OSError, subprocess.CalledProcessError): - return {} - content = self._to_str(stdout).splitlines() - return self._parse_lsb_release_content(content) - - @staticmethod - def _parse_lsb_release_content(lines: Iterable[str]) -> Dict[str, str]: - """ - Parse the output of the lsb_release command. - - Parameters: - - * lines: Iterable through the lines of the lsb_release output. - Each line must be a unicode string or a UTF-8 encoded byte - string. - - Returns: - A dictionary containing all information items. - """ - props = {} - for line in lines: - kv = line.strip("\n").split(":", 1) - if len(kv) != 2: - # Ignore lines without colon. 
- continue - k, v = kv - props.update({k.replace(" ", "_").lower(): v.strip()}) - return props - - @cached_property - def _uname_info(self) -> Dict[str, str]: - if not self.include_uname: - return {} - try: - cmd = ("uname", "-rs") - stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL) - except OSError: - return {} - content = self._to_str(stdout).splitlines() - return self._parse_uname_content(content) - - @cached_property - def _oslevel_info(self) -> str: - if not self.include_oslevel: - return "" - try: - stdout = subprocess.check_output("oslevel", stderr=subprocess.DEVNULL) - except (OSError, subprocess.CalledProcessError): - return "" - return self._to_str(stdout).strip() - - @cached_property - def _debian_version(self) -> str: - try: - with open( - os.path.join(self.etc_dir, "debian_version"), encoding="ascii" - ) as fp: - return fp.readline().rstrip() - except FileNotFoundError: - return "" - - @staticmethod - def _parse_uname_content(lines: Sequence[str]) -> Dict[str, str]: - if not lines: - return {} - props = {} - match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip()) - if match: - name, version = match.groups() - - # This is to prevent the Linux kernel version from - # appearing as the 'best' version on otherwise - # identifiable distributions. - if name == "Linux": - return {} - props["id"] = name.lower() - props["name"] = name - props["release"] = version - return props - - @staticmethod - def _to_str(bytestring: bytes) -> str: - encoding = sys.getfilesystemencoding() - return bytestring.decode(encoding) - - @cached_property - def _distro_release_info(self) -> Dict[str, str]: - """ - Get the information items from the specified distro release file. - - Returns: - A dictionary containing all information items. - """ - if self.distro_release_file: - # If it was specified, we use it and parse what we can, even if - # its file name or content does not match the expected pattern. - distro_info = self._parse_distro_release_file(self.distro_release_file) - basename = os.path.basename(self.distro_release_file) - # The file name pattern for user-specified distro release files - # is somewhat more tolerant (compared to when searching for the - # file), because we want to use what was specified as best as - # possible. - match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) - else: - try: - basenames = [ - basename - for basename in os.listdir(self.etc_dir) - if basename not in _DISTRO_RELEASE_IGNORE_BASENAMES - and os.path.isfile(os.path.join(self.etc_dir, basename)) - ] - # We sort for repeatability in cases where there are multiple - # distro specific files; e.g. CentOS, Oracle, Enterprise all - # containing `redhat-release` on top of their own. - basenames.sort() - except OSError: - # This may occur when /etc is not readable but we can't be - # sure about the *-release files. Check common entries of - # /etc for information. If they turn out to not be there the - # error is handled in `_parse_distro_release_file()`. - basenames = _DISTRO_RELEASE_BASENAMES - for basename in basenames: - match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) - if match is None: - continue - filepath = os.path.join(self.etc_dir, basename) - distro_info = self._parse_distro_release_file(filepath) - # The name is always present if the pattern matches. - if "name" not in distro_info: - continue - self.distro_release_file = filepath - break - else: # the loop didn't "break": no candidate. 
- return {} - - if match is not None: - distro_info["id"] = match.group(1) - - # CloudLinux < 7: manually enrich info with proper id. - if "cloudlinux" in distro_info.get("name", "").lower(): - distro_info["id"] = "cloudlinux" - - return distro_info - - def _parse_distro_release_file(self, filepath: str) -> Dict[str, str]: - """ - Parse a distro release file. - - Parameters: - - * filepath: Path name of the distro release file. - - Returns: - A dictionary containing all information items. - """ - try: - with open(filepath, encoding="utf-8") as fp: - # Only parse the first line. For instance, on SLES there - # are multiple lines. We don't want them... - return self._parse_distro_release_content(fp.readline()) - except OSError: - # Ignore not being able to read a specific, seemingly version - # related file. - # See https://github.com/python-distro/distro/issues/162 - return {} - - @staticmethod - def _parse_distro_release_content(line: str) -> Dict[str, str]: - """ - Parse a line from a distro release file. - - Parameters: - * line: Line from the distro release file. Must be a unicode string - or a UTF-8 encoded byte string. - - Returns: - A dictionary containing all information items. - """ - matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1]) - distro_info = {} - if matches: - # regexp ensures non-None - distro_info["name"] = matches.group(3)[::-1] - if matches.group(2): - distro_info["version_id"] = matches.group(2)[::-1] - if matches.group(1): - distro_info["codename"] = matches.group(1)[::-1] - elif line: - distro_info["name"] = line.strip() - return distro_info - - -_distro = LinuxDistribution() - - -def main() -> None: - logger = logging.getLogger(__name__) - logger.setLevel(logging.DEBUG) - logger.addHandler(logging.StreamHandler(sys.stdout)) - - parser = argparse.ArgumentParser(description="OS distro info tool") - parser.add_argument( - "--json", "-j", help="Output in machine readable format", action="store_true" - ) - - parser.add_argument( - "--root-dir", - "-r", - type=str, - dest="root_dir", - help="Path to the root filesystem directory (defaults to /)", - ) - - args = parser.parse_args() - - if args.root_dir: - dist = LinuxDistribution( - include_lsb=False, - include_uname=False, - include_oslevel=False, - root_dir=args.root_dir, - ) - else: - dist = _distro - - if args.json: - logger.info(json.dumps(dist.info(), indent=4, sort_keys=True)) - else: - logger.info("Name: %s", dist.name(pretty=True)) - distribution_version = dist.version(pretty=True) - logger.info("Version: %s", distribution_version) - distribution_codename = dist.codename() - logger.info("Codename: %s", distribution_codename) - - -if __name__ == "__main__": - main() diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/deploy/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/deploy/README.md deleted file mode 100644 index e33cbeb54c003a5738da68c838fdaa4e0d218501..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/deploy/README.md +++ /dev/null @@ -1,66 +0,0 @@ -See [deployment tutorial](https://detectron2.readthedocs.io/tutorials/deployment.html) -for some high-level background about deployment. - -This directory contains the following examples: - -1. An example script `export_model.py` - that exports a detectron2 model for deployment using different methods and formats. - -2. A C++ example that runs inference with Mask R-CNN model in TorchScript format. 
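-
-For reference, an exported TorchScript model can also be loaded from Python. The snippet below is a minimal sketch, not an example shipped in this directory: it assumes a model produced with `--export-method tracing --format torchscript` (see the commands under "Use" below), that the traced wrapper accepts a single CHW uint8 image tensor, and that `output/model.ts` and `input.jpg` exist locally.
-
-```
-import cv2                     # OpenCV, also required by the C++ example
-import torch
-
-# Load the TorchScript module produced by export_model.py (assumed path).
-model = torch.jit.load("output/model.ts")
-model.eval()
-
-# Read an image and convert HWC BGR uint8 -> CHW tensor, the layout the
-# tracing wrapper is assumed to expect here.
-img = cv2.imread("input.jpg")
-inputs = torch.as_tensor(img.transpose(2, 0, 1).copy())
-
-with torch.no_grad():
-    outputs = model(inputs)    # output structure depends on the export wrapper
-print(type(outputs))
-```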
- -## Build -Deployment depends on libtorch and OpenCV. Some require more dependencies: - -* Running TorchScript-format models produced by `--export-method=caffe2_tracing` requires libtorch - to be built with caffe2 enabled. -* Running TorchScript-format models produced by `--export-method=tracing/scripting` requires libtorchvision (C++ library of torchvision). - -All methods are supported in one C++ file that requires all the above dependencies. -Adjust it and remove code you don't need. -As a reference, we provide a [Dockerfile](../../docker/deploy.Dockerfile) that installs all the above dependencies and builds the C++ example. - -## Use - -We show a few example commands to export and execute a Mask R-CNN model in C++. - -* `export-method=tracing, format=torchscript`: -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method tracing --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - MODEL.DEVICE cuda - -./build/torchscript_mask_rcnn output/model.ts input.jpg tracing -``` - -* `export-method=scripting, format=torchscript`: -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method scripting --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - -./build/torchscript_mask_rcnn output/model.ts input.jpg scripting -``` - -* `export-method=caffe2_tracing, format=torchscript`: - -``` -./export_model.py --config-file ../../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --output ./output --export-method caffe2_tracing --format torchscript \ - MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl \ - -./build/torchscript_mask_rcnn output/model.ts input.jpg caffe2_tracing -``` - - -## Notes: - -1. Tracing/Caffe2-tracing requires valid weights & sample inputs. - Therefore the above commands require pre-trained models and [COCO dataset](https://detectron2.readthedocs.io/tutorials/builtin_datasets.html). - You can modify the script to obtain sample inputs in other ways instead of from COCO. - -2. `--run-eval` is implemented only for tracing mode - to evaluate the exported model using the dataset in the config. - It's recommended to always verify the accuracy in case the conversion is not successful. - Evaluation can be slow if model is exported to CPU or dataset is too large ("coco_2017_val_100" is a small subset of COCO useful for evaluation). - `caffe2_tracing` accuracy may be slightly different (within 0.1 AP) from original model due to numerical precisions between different runtime. diff --git a/spaces/TheRealZoink/Zoink_OV3RL0AD/index.html b/spaces/TheRealZoink/Zoink_OV3RL0AD/index.html deleted file mode 100644 index 737b1433859fbb18daf9966125661ca9d5c1434a..0000000000000000000000000000000000000000 --- a/spaces/TheRealZoink/Zoink_OV3RL0AD/index.html +++ /dev/null @@ -1,103 +0,0 @@ - - - - - - -

    WELCOME TO THE ZOINK

    -

    Explore the unknown

    -

    bitches

    - - diff --git a/spaces/Theivaprakasham/yolov6/tools/train.py b/spaces/Theivaprakasham/yolov6/tools/train.py deleted file mode 100644 index 927d997bd8c6e572e43bcd8e137928ceab0f9afb..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/tools/train.py +++ /dev/null @@ -1,87 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import argparse -import os -import os.path as osp -import torch -import torch.distributed as dist -import sys - -ROOT = os.getcwd() -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) - -from yolov6.core.engine import Trainer -from yolov6.utils.config import Config -from yolov6.utils.events import LOGGER, save_yaml -from yolov6.utils.envs import get_envs, select_device, set_random_seed - - -def get_args_parser(add_help=True): - parser = argparse.ArgumentParser(description='YOLOv6 PyTorch Training', add_help=add_help) - parser.add_argument('--data-path', default='./data/coco.yaml', type=str, help='dataset path') - parser.add_argument('--conf-file', default='./configs/yolov6s.py', type=str, help='experiment description file') - parser.add_argument('--img-size', type=int, default=640, help='train, val image size (pixels)') - parser.add_argument('--batch-size', default=32, type=int, help='total batch size for all GPUs') - parser.add_argument('--epochs', default=400, type=int, help='number of total epochs to run') - parser.add_argument('--workers', default=8, type=int, help='number of data loading workers (default: 8)') - parser.add_argument('--device', default='0', type=str, help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--noval', action='store_true', help='only evaluate in final epoch') - parser.add_argument('--check-images', action='store_true', help='check images when initializing datasets') - parser.add_argument('--check-labels', action='store_true', help='check label files when initializing datasets') - parser.add_argument('--output-dir', default='./runs/train', type=str, help='path to save outputs') - parser.add_argument('--name', default='exp', type=str, help='experiment name, save to output_dir/name') - parser.add_argument('--dist_url', type=str, default="tcp://127.0.0.1:8888") - parser.add_argument('--gpu_count', type=int, default=0) - parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') - - return parser - - -def check_and_init(args): - '''check config files and device, and initialize ''' - - # check files - args.save_dir = osp.join(args.output_dir, args.name) - os.makedirs(args.save_dir, exist_ok=True) - cfg = Config.fromfile(args.conf_file) - - # check device - device = select_device(args.device) - - # set random seed - set_random_seed(1+args.rank, deterministic=(args.rank == -1)) - - # save args - save_yaml(vars(args), osp.join(args.save_dir, 'args.yaml')) - - return cfg, device - - -def main(args): - '''main function of training''' - # Setup - args.rank, args.local_rank, args.world_size = get_envs() - LOGGER.info(f'training args are: {args}\n') - cfg, device = check_and_init(args) - - if args.local_rank != -1: # if DDP mode - torch.cuda.set_device(args.local_rank) - device = torch.device('cuda', args.local_rank) - LOGGER.info('Initializing process group... 
') - dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", \ - init_method=args.dist_url, rank=args.local_rank, world_size=args.world_size) - - # Start - trainer = Trainer(args, cfg, device) - trainer.train() - - # End - if args.world_size > 1 and args.rank == 0: - LOGGER.info('Destroying process group... ') - dist.destroy_process_group() - - -if __name__ == '__main__': - args = get_args_parser().parse_args() - main(args) diff --git a/spaces/Tonic1/falcon-180b-demo/app.py b/spaces/Tonic1/falcon-180b-demo/app.py deleted file mode 100644 index 4c24a980318ba17651b79644076f1dce63f8dfed..0000000000000000000000000000000000000000 --- a/spaces/Tonic1/falcon-180b-demo/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import json -import os -import shutil -import requests - -import gradio as gr -from huggingface_hub import Repository, InferenceClient - -HF_TOKEN = os.environ.get("HF_TOKEN", None) -API_URL = "https://api-inference.huggingface.co/models/tiiuae/falcon-180B-chat" -BOT_NAME = "Falcon" - -STOP_SEQUENCES = ["\nUser:", "<|endoftext|>", " User:", "###"] - -EXAMPLES = [ - ["Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"], - ["What's the Everett interpretation of quantum mechanics?"], - ["Give me a list of the top 10 dive sites you would recommend around the world."], - ["Can you tell me more about deep-water soloing?"], - ["Can you write a short tweet about the release of our latest AI model, Falcon LLM?"] - ] - -client = InferenceClient( - API_URL, - headers={"Authorization": f"Bearer {HF_TOKEN}"}, -) - -def format_prompt(message, history, system_prompt): - prompt = "" - if system_prompt: - prompt += f"System: {system_prompt}\n" - for user_prompt, bot_response in history: - prompt += f"User: {user_prompt}\n" - prompt += f"Falcon: {bot_response}\n" # Response already contains "Falcon: " - prompt += f"""User: {message} -Falcon:""" - return prompt - -seed = 42 - -def generate( - prompt, history, system_prompt="", temperature=0.9, max_new_tokens=2048, top_p=0.95, repetition_penalty=1.0, -): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - global seed - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - stop_sequences=STOP_SEQUENCES, - do_sample=True, - seed=seed, - ) - seed = seed + 1 - formatted_prompt = format_prompt(prompt, history, system_prompt) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - - for stop_str in STOP_SEQUENCES: - if output.endswith(stop_str): - output = output[:-len(stop_str)] - output = output.rstrip() - yield output - yield output - return output - - -additional_inputs=[ - gr.Textbox("", label="Optional system prompt"), - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=256, - minimum=0, - maximum=8192, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - 
info="Penalize repeated tokens", - ) -] - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(scale=0.4): - gr.Image("better_banner.jpeg", elem_id="banner-image", show_label=False) - with gr.Column(): - gr.Markdown( - """# Falcon-180B Demo - - **Chat with [Falcon-180B-Chat](https://huggingface.co/tiiuae/falcon-180b-chat), brainstorm ideas, discuss your holiday plans, and more!** - - ✨ This demo is powered by [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) and finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1). [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b) is a state-of-the-art large language model built by the [Technology Innovation Institute](https://www.tii.ae) in Abu Dhabi. It is trained on 3.5 trillion tokens (including [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)) and available under the [Falcon-180B TII License](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/LICENSE.txt). It currently holds the 🥇 1st place on the [🤗 Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for a pretrained model. - - 🧪 This is only a **first experimental preview**: we intend to provide increasingly capable versions of Falcon in the future, based on improved datasets and RLHF/RLAIF. - - 👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/) - - ➡️️ **Intended Use**: this demo is intended to showcase an early finetuning of [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b), to illustrate the impact (and limitations) of finetuning on a dataset of conversations and instructions. We encourage the community to further build upon the base model, and to create even better instruct/chat versions! - - ⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words. - """ - ) - - gr.ChatInterface( - generate, - examples=EXAMPLES, - additional_inputs=additional_inputs, - ) - -demo.queue(concurrency_count=100, api_open=False).launch(show_api=False) diff --git a/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.min.css b/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.min.css deleted file mode 100644 index 1472dec059b31e556f29fba41de840dc4add9c4f..0000000000000000000000000000000000000000 --- a/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.min.css +++ /dev/null @@ -1,7 +0,0 @@ -@charset "UTF-8";/*! - * Bootstrap v5.1.3 (https://getbootstrap.com/) - * Copyright 2011-2021 The Bootstrap Authors - * Copyright 2011-2021 Twitter, Inc. 
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) - */:root{--bs-blue:#0d6efd;--bs-indigo:#6610f2;--bs-purple:#6f42c1;--bs-pink:#d63384;--bs-red:#dc3545;--bs-orange:#fd7e14;--bs-yellow:#ffc107;--bs-green:#198754;--bs-teal:#20c997;--bs-cyan:#0dcaf0;--bs-white:#fff;--bs-gray:#6c757d;--bs-gray-dark:#343a40;--bs-gray-100:#f8f9fa;--bs-gray-200:#e9ecef;--bs-gray-300:#dee2e6;--bs-gray-400:#ced4da;--bs-gray-500:#adb5bd;--bs-gray-600:#6c757d;--bs-gray-700:#495057;--bs-gray-800:#343a40;--bs-gray-900:#212529;--bs-primary:#0d6efd;--bs-secondary:#6c757d;--bs-success:#198754;--bs-info:#0dcaf0;--bs-warning:#ffc107;--bs-danger:#dc3545;--bs-light:#f8f9fa;--bs-dark:#212529;--bs-primary-rgb:13,110,253;--bs-secondary-rgb:108,117,125;--bs-success-rgb:25,135,84;--bs-info-rgb:13,202,240;--bs-warning-rgb:255,193,7;--bs-danger-rgb:220,53,69;--bs-light-rgb:248,249,250;--bs-dark-rgb:33,37,41;--bs-white-rgb:255,255,255;--bs-black-rgb:0,0,0;--bs-body-color-rgb:33,37,41;--bs-body-bg-rgb:255,255,255;--bs-font-sans-serif:system-ui,-apple-system,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans","Liberation Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--bs-font-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;--bs-gradient:linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));--bs-body-font-family:var(--bs-font-sans-serif);--bs-body-font-size:1rem;--bs-body-font-weight:400;--bs-body-line-height:1.5;--bs-body-color:#212529;--bs-body-bg:#fff}*,::after,::before{box-sizing:border-box}@media (prefers-reduced-motion:no-preference){:root{scroll-behavior:smooth}}body{margin:0;font-family:var(--bs-body-font-family);font-size:var(--bs-body-font-size);font-weight:var(--bs-body-font-weight);line-height:var(--bs-body-line-height);color:var(--bs-body-color);text-align:var(--bs-body-text-align);background-color:var(--bs-body-bg);-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}hr{margin:1rem 0;color:inherit;background-color:currentColor;border:0;opacity:.25}hr:not([size]){height:1px}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:calc(1.375rem + 1.5vw)}@media (min-width:1200px){.h1,h1{font-size:2.5rem}}.h2,h2{font-size:calc(1.325rem + .9vw)}@media (min-width:1200px){.h2,h2{font-size:2rem}}.h3,h3{font-size:calc(1.3rem + .6vw)}@media (min-width:1200px){.h3,h3{font-size:1.75rem}}.h4,h4{font-size:calc(1.275rem + .3vw)}@media (min-width:1200px){.h4,h4{font-size:1.5rem}}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}p{margin-top:0;margin-bottom:1rem}abbr[data-bs-original-title],abbr[title]{-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}ol,ul{padding-left:2rem}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 
1rem}b,strong{font-weight:bolder}.small,small{font-size:.875em}.mark,mark{padding:.2em;background-color:#fcf8e3}sub,sup{position:relative;font-size:.75em;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#0d6efd;text-decoration:underline}a:hover{color:#0a58ca}a:not([href]):not([class]),a:not([href]):not([class]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:var(--bs-font-monospace);font-size:1em;direction:ltr;unicode-bidi:bidi-override}pre{display:block;margin-top:0;margin-bottom:1rem;overflow:auto;font-size:.875em}pre code{font-size:inherit;color:inherit;word-break:normal}code{font-size:.875em;color:#d63384;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:.875em;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:1em;font-weight:700}figure{margin:0 0 1rem}img,svg{vertical-align:middle}table{caption-side:bottom;border-collapse:collapse}caption{padding-top:.5rem;padding-bottom:.5rem;color:#6c757d;text-align:left}th{text-align:inherit;text-align:-webkit-match-parent}tbody,td,tfoot,th,thead,tr{border-color:inherit;border-style:solid;border-width:0}label{display:inline-block}button{border-radius:0}button:focus:not(:focus-visible){outline:0}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}select:disabled{opacity:1}[list]::-webkit-calendar-picker-indicator{display:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}::-moz-focus-inner{padding:0;border-style:none}textarea{resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{float:left;width:100%;padding:0;margin-bottom:.5rem;font-size:calc(1.275rem + .3vw);line-height:inherit}@media (min-width:1200px){legend{font-size:1.5rem}}legend+*{clear:left}::-webkit-datetime-edit-day-field,::-webkit-datetime-edit-fields-wrapper,::-webkit-datetime-edit-hour-field,::-webkit-datetime-edit-minute,::-webkit-datetime-edit-month-field,::-webkit-datetime-edit-text,::-webkit-datetime-edit-year-field{padding:0}::-webkit-inner-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:textfield}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-color-swatch-wrapper{padding:0}::-webkit-file-upload-button{font:inherit}::file-selector-button{font:inherit}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}iframe{border:0}summary{display:list-item;cursor:pointer}progress{vertical-align:baseline}[hidden]{display:none!important}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:calc(1.625rem + 4.5vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-1{font-size:5rem}}.display-2{font-size:calc(1.575rem + 3.9vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-2{font-size:4.5rem}}.display-3{font-size:calc(1.525rem + 3.3vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-3{font-size:4rem}}.display-4{font-size:calc(1.475rem + 2.7vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-4{font-size:3.5rem}}.display-5{font-size:calc(1.425rem + 2.1vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-5{font-size:3rem}}.display-6{font-size:calc(1.375rem + 1.5vw);font-weight:300;line-height:1.2}@media 
(min-width:1200px){.display-6{font-size:2.5rem}}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:.875em;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote>:last-child{margin-bottom:0}.blockquote-footer{margin-top:-1rem;margin-bottom:1rem;font-size:.875em;color:#6c757d}.blockquote-footer::before{content:"— "}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:.875em;color:#6c757d}.container,.container-fluid,.container-lg,.container-md,.container-sm,.container-xl,.container-xxl{width:100%;padding-right:var(--bs-gutter-x,.75rem);padding-left:var(--bs-gutter-x,.75rem);margin-right:auto;margin-left:auto}@media (min-width:576px){.container,.container-sm{max-width:540px}}@media (min-width:768px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:992px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1140px}}@media (min-width:1400px){.container,.container-lg,.container-md,.container-sm,.container-xl,.container-xxl{max-width:1320px}}.row{--bs-gutter-x:1.5rem;--bs-gutter-y:0;display:flex;flex-wrap:wrap;margin-top:calc(-1 * var(--bs-gutter-y));margin-right:calc(-.5 * var(--bs-gutter-x));margin-left:calc(-.5 * var(--bs-gutter-x))}.row>*{flex-shrink:0;width:100%;max-width:100%;padding-right:calc(var(--bs-gutter-x) * .5);padding-left:calc(var(--bs-gutter-x) * .5);margin-top:var(--bs-gutter-y)}.col{flex:1 0 0%}.row-cols-auto>*{flex:0 0 auto;width:auto}.row-cols-1>*{flex:0 0 auto;width:100%}.row-cols-2>*{flex:0 0 auto;width:50%}.row-cols-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-4>*{flex:0 0 auto;width:25%}.row-cols-5>*{flex:0 0 auto;width:20%}.row-cols-6>*{flex:0 0 auto;width:16.6666666667%}.col-auto{flex:0 0 auto;width:auto}.col-1{flex:0 0 auto;width:8.33333333%}.col-2{flex:0 0 auto;width:16.66666667%}.col-3{flex:0 0 auto;width:25%}.col-4{flex:0 0 auto;width:33.33333333%}.col-5{flex:0 0 auto;width:41.66666667%}.col-6{flex:0 0 auto;width:50%}.col-7{flex:0 0 auto;width:58.33333333%}.col-8{flex:0 0 auto;width:66.66666667%}.col-9{flex:0 0 auto;width:75%}.col-10{flex:0 0 auto;width:83.33333333%}.col-11{flex:0 0 auto;width:91.66666667%}.col-12{flex:0 0 auto;width:100%}.offset-1{margin-left:8.33333333%}.offset-2{margin-left:16.66666667%}.offset-3{margin-left:25%}.offset-4{margin-left:33.33333333%}.offset-5{margin-left:41.66666667%}.offset-6{margin-left:50%}.offset-7{margin-left:58.33333333%}.offset-8{margin-left:66.66666667%}.offset-9{margin-left:75%}.offset-10{margin-left:83.33333333%}.offset-11{margin-left:91.66666667%}.g-0,.gx-0{--bs-gutter-x:0}.g-0,.gy-0{--bs-gutter-y:0}.g-1,.gx-1{--bs-gutter-x:0.25rem}.g-1,.gy-1{--bs-gutter-y:0.25rem}.g-2,.gx-2{--bs-gutter-x:0.5rem}.g-2,.gy-2{--bs-gutter-y:0.5rem}.g-3,.gx-3{--bs-gutter-x:1rem}.g-3,.gy-3{--bs-gutter-y:1rem}.g-4,.gx-4{--bs-gutter-x:1.5rem}.g-4,.gy-4{--bs-gutter-y:1.5rem}.g-5,.gx-5{--bs-gutter-x:3rem}.g-5,.gy-5{--bs-gutter-y:3rem}@media (min-width:576px){.col-sm{flex:1 0 0%}.row-cols-sm-auto>*{flex:0 0 auto;width:auto}.row-cols-sm-1>*{flex:0 0 auto;width:100%}.row-cols-sm-2>*{flex:0 0 
auto;width:50%}.row-cols-sm-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-sm-4>*{flex:0 0 auto;width:25%}.row-cols-sm-5>*{flex:0 0 auto;width:20%}.row-cols-sm-6>*{flex:0 0 auto;width:16.6666666667%}.col-sm-auto{flex:0 0 auto;width:auto}.col-sm-1{flex:0 0 auto;width:8.33333333%}.col-sm-2{flex:0 0 auto;width:16.66666667%}.col-sm-3{flex:0 0 auto;width:25%}.col-sm-4{flex:0 0 auto;width:33.33333333%}.col-sm-5{flex:0 0 auto;width:41.66666667%}.col-sm-6{flex:0 0 auto;width:50%}.col-sm-7{flex:0 0 auto;width:58.33333333%}.col-sm-8{flex:0 0 auto;width:66.66666667%}.col-sm-9{flex:0 0 auto;width:75%}.col-sm-10{flex:0 0 auto;width:83.33333333%}.col-sm-11{flex:0 0 auto;width:91.66666667%}.col-sm-12{flex:0 0 auto;width:100%}.offset-sm-0{margin-left:0}.offset-sm-1{margin-left:8.33333333%}.offset-sm-2{margin-left:16.66666667%}.offset-sm-3{margin-left:25%}.offset-sm-4{margin-left:33.33333333%}.offset-sm-5{margin-left:41.66666667%}.offset-sm-6{margin-left:50%}.offset-sm-7{margin-left:58.33333333%}.offset-sm-8{margin-left:66.66666667%}.offset-sm-9{margin-left:75%}.offset-sm-10{margin-left:83.33333333%}.offset-sm-11{margin-left:91.66666667%}.g-sm-0,.gx-sm-0{--bs-gutter-x:0}.g-sm-0,.gy-sm-0{--bs-gutter-y:0}.g-sm-1,.gx-sm-1{--bs-gutter-x:0.25rem}.g-sm-1,.gy-sm-1{--bs-gutter-y:0.25rem}.g-sm-2,.gx-sm-2{--bs-gutter-x:0.5rem}.g-sm-2,.gy-sm-2{--bs-gutter-y:0.5rem}.g-sm-3,.gx-sm-3{--bs-gutter-x:1rem}.g-sm-3,.gy-sm-3{--bs-gutter-y:1rem}.g-sm-4,.gx-sm-4{--bs-gutter-x:1.5rem}.g-sm-4,.gy-sm-4{--bs-gutter-y:1.5rem}.g-sm-5,.gx-sm-5{--bs-gutter-x:3rem}.g-sm-5,.gy-sm-5{--bs-gutter-y:3rem}}@media (min-width:768px){.col-md{flex:1 0 0%}.row-cols-md-auto>*{flex:0 0 auto;width:auto}.row-cols-md-1>*{flex:0 0 auto;width:100%}.row-cols-md-2>*{flex:0 0 auto;width:50%}.row-cols-md-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-md-4>*{flex:0 0 auto;width:25%}.row-cols-md-5>*{flex:0 0 auto;width:20%}.row-cols-md-6>*{flex:0 0 auto;width:16.6666666667%}.col-md-auto{flex:0 0 auto;width:auto}.col-md-1{flex:0 0 auto;width:8.33333333%}.col-md-2{flex:0 0 auto;width:16.66666667%}.col-md-3{flex:0 0 auto;width:25%}.col-md-4{flex:0 0 auto;width:33.33333333%}.col-md-5{flex:0 0 auto;width:41.66666667%}.col-md-6{flex:0 0 auto;width:50%}.col-md-7{flex:0 0 auto;width:58.33333333%}.col-md-8{flex:0 0 auto;width:66.66666667%}.col-md-9{flex:0 0 auto;width:75%}.col-md-10{flex:0 0 auto;width:83.33333333%}.col-md-11{flex:0 0 auto;width:91.66666667%}.col-md-12{flex:0 0 auto;width:100%}.offset-md-0{margin-left:0}.offset-md-1{margin-left:8.33333333%}.offset-md-2{margin-left:16.66666667%}.offset-md-3{margin-left:25%}.offset-md-4{margin-left:33.33333333%}.offset-md-5{margin-left:41.66666667%}.offset-md-6{margin-left:50%}.offset-md-7{margin-left:58.33333333%}.offset-md-8{margin-left:66.66666667%}.offset-md-9{margin-left:75%}.offset-md-10{margin-left:83.33333333%}.offset-md-11{margin-left:91.66666667%}.g-md-0,.gx-md-0{--bs-gutter-x:0}.g-md-0,.gy-md-0{--bs-gutter-y:0}.g-md-1,.gx-md-1{--bs-gutter-x:0.25rem}.g-md-1,.gy-md-1{--bs-gutter-y:0.25rem}.g-md-2,.gx-md-2{--bs-gutter-x:0.5rem}.g-md-2,.gy-md-2{--bs-gutter-y:0.5rem}.g-md-3,.gx-md-3{--bs-gutter-x:1rem}.g-md-3,.gy-md-3{--bs-gutter-y:1rem}.g-md-4,.gx-md-4{--bs-gutter-x:1.5rem}.g-md-4,.gy-md-4{--bs-gutter-y:1.5rem}.g-md-5,.gx-md-5{--bs-gutter-x:3rem}.g-md-5,.gy-md-5{--bs-gutter-y:3rem}}@media (min-width:992px){.col-lg{flex:1 0 0%}.row-cols-lg-auto>*{flex:0 0 auto;width:auto}.row-cols-lg-1>*{flex:0 0 auto;width:100%}.row-cols-lg-2>*{flex:0 0 auto;width:50%}.row-cols-lg-3>*{flex:0 0 
auto;width:33.3333333333%}.row-cols-lg-4>*{flex:0 0 auto;width:25%}.row-cols-lg-5>*{flex:0 0 auto;width:20%}.row-cols-lg-6>*{flex:0 0 auto;width:16.6666666667%}.col-lg-auto{flex:0 0 auto;width:auto}.col-lg-1{flex:0 0 auto;width:8.33333333%}.col-lg-2{flex:0 0 auto;width:16.66666667%}.col-lg-3{flex:0 0 auto;width:25%}.col-lg-4{flex:0 0 auto;width:33.33333333%}.col-lg-5{flex:0 0 auto;width:41.66666667%}.col-lg-6{flex:0 0 auto;width:50%}.col-lg-7{flex:0 0 auto;width:58.33333333%}.col-lg-8{flex:0 0 auto;width:66.66666667%}.col-lg-9{flex:0 0 auto;width:75%}.col-lg-10{flex:0 0 auto;width:83.33333333%}.col-lg-11{flex:0 0 auto;width:91.66666667%}.col-lg-12{flex:0 0 auto;width:100%}.offset-lg-0{margin-left:0}.offset-lg-1{margin-left:8.33333333%}.offset-lg-2{margin-left:16.66666667%}.offset-lg-3{margin-left:25%}.offset-lg-4{margin-left:33.33333333%}.offset-lg-5{margin-left:41.66666667%}.offset-lg-6{margin-left:50%}.offset-lg-7{margin-left:58.33333333%}.offset-lg-8{margin-left:66.66666667%}.offset-lg-9{margin-left:75%}.offset-lg-10{margin-left:83.33333333%}.offset-lg-11{margin-left:91.66666667%}.g-lg-0,.gx-lg-0{--bs-gutter-x:0}.g-lg-0,.gy-lg-0{--bs-gutter-y:0}.g-lg-1,.gx-lg-1{--bs-gutter-x:0.25rem}.g-lg-1,.gy-lg-1{--bs-gutter-y:0.25rem}.g-lg-2,.gx-lg-2{--bs-gutter-x:0.5rem}.g-lg-2,.gy-lg-2{--bs-gutter-y:0.5rem}.g-lg-3,.gx-lg-3{--bs-gutter-x:1rem}.g-lg-3,.gy-lg-3{--bs-gutter-y:1rem}.g-lg-4,.gx-lg-4{--bs-gutter-x:1.5rem}.g-lg-4,.gy-lg-4{--bs-gutter-y:1.5rem}.g-lg-5,.gx-lg-5{--bs-gutter-x:3rem}.g-lg-5,.gy-lg-5{--bs-gutter-y:3rem}}@media (min-width:1200px){.col-xl{flex:1 0 0%}.row-cols-xl-auto>*{flex:0 0 auto;width:auto}.row-cols-xl-1>*{flex:0 0 auto;width:100%}.row-cols-xl-2>*{flex:0 0 auto;width:50%}.row-cols-xl-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-xl-4>*{flex:0 0 auto;width:25%}.row-cols-xl-5>*{flex:0 0 auto;width:20%}.row-cols-xl-6>*{flex:0 0 auto;width:16.6666666667%}.col-xl-auto{flex:0 0 auto;width:auto}.col-xl-1{flex:0 0 auto;width:8.33333333%}.col-xl-2{flex:0 0 auto;width:16.66666667%}.col-xl-3{flex:0 0 auto;width:25%}.col-xl-4{flex:0 0 auto;width:33.33333333%}.col-xl-5{flex:0 0 auto;width:41.66666667%}.col-xl-6{flex:0 0 auto;width:50%}.col-xl-7{flex:0 0 auto;width:58.33333333%}.col-xl-8{flex:0 0 auto;width:66.66666667%}.col-xl-9{flex:0 0 auto;width:75%}.col-xl-10{flex:0 0 auto;width:83.33333333%}.col-xl-11{flex:0 0 auto;width:91.66666667%}.col-xl-12{flex:0 0 auto;width:100%}.offset-xl-0{margin-left:0}.offset-xl-1{margin-left:8.33333333%}.offset-xl-2{margin-left:16.66666667%}.offset-xl-3{margin-left:25%}.offset-xl-4{margin-left:33.33333333%}.offset-xl-5{margin-left:41.66666667%}.offset-xl-6{margin-left:50%}.offset-xl-7{margin-left:58.33333333%}.offset-xl-8{margin-left:66.66666667%}.offset-xl-9{margin-left:75%}.offset-xl-10{margin-left:83.33333333%}.offset-xl-11{margin-left:91.66666667%}.g-xl-0,.gx-xl-0{--bs-gutter-x:0}.g-xl-0,.gy-xl-0{--bs-gutter-y:0}.g-xl-1,.gx-xl-1{--bs-gutter-x:0.25rem}.g-xl-1,.gy-xl-1{--bs-gutter-y:0.25rem}.g-xl-2,.gx-xl-2{--bs-gutter-x:0.5rem}.g-xl-2,.gy-xl-2{--bs-gutter-y:0.5rem}.g-xl-3,.gx-xl-3{--bs-gutter-x:1rem}.g-xl-3,.gy-xl-3{--bs-gutter-y:1rem}.g-xl-4,.gx-xl-4{--bs-gutter-x:1.5rem}.g-xl-4,.gy-xl-4{--bs-gutter-y:1.5rem}.g-xl-5,.gx-xl-5{--bs-gutter-x:3rem}.g-xl-5,.gy-xl-5{--bs-gutter-y:3rem}}@media (min-width:1400px){.col-xxl{flex:1 0 0%}.row-cols-xxl-auto>*{flex:0 0 auto;width:auto}.row-cols-xxl-1>*{flex:0 0 auto;width:100%}.row-cols-xxl-2>*{flex:0 0 auto;width:50%}.row-cols-xxl-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-xxl-4>*{flex:0 0 
auto;width:25%}.row-cols-xxl-5>*{flex:0 0 auto;width:20%}.row-cols-xxl-6>*{flex:0 0 auto;width:16.6666666667%}.col-xxl-auto{flex:0 0 auto;width:auto}.col-xxl-1{flex:0 0 auto;width:8.33333333%}.col-xxl-2{flex:0 0 auto;width:16.66666667%}.col-xxl-3{flex:0 0 auto;width:25%}.col-xxl-4{flex:0 0 auto;width:33.33333333%}.col-xxl-5{flex:0 0 auto;width:41.66666667%}.col-xxl-6{flex:0 0 auto;width:50%}.col-xxl-7{flex:0 0 auto;width:58.33333333%}.col-xxl-8{flex:0 0 auto;width:66.66666667%}.col-xxl-9{flex:0 0 auto;width:75%}.col-xxl-10{flex:0 0 auto;width:83.33333333%}.col-xxl-11{flex:0 0 auto;width:91.66666667%}.col-xxl-12{flex:0 0 auto;width:100%}.offset-xxl-0{margin-left:0}.offset-xxl-1{margin-left:8.33333333%}.offset-xxl-2{margin-left:16.66666667%}.offset-xxl-3{margin-left:25%}.offset-xxl-4{margin-left:33.33333333%}.offset-xxl-5{margin-left:41.66666667%}.offset-xxl-6{margin-left:50%}.offset-xxl-7{margin-left:58.33333333%}.offset-xxl-8{margin-left:66.66666667%}.offset-xxl-9{margin-left:75%}.offset-xxl-10{margin-left:83.33333333%}.offset-xxl-11{margin-left:91.66666667%}.g-xxl-0,.gx-xxl-0{--bs-gutter-x:0}.g-xxl-0,.gy-xxl-0{--bs-gutter-y:0}.g-xxl-1,.gx-xxl-1{--bs-gutter-x:0.25rem}.g-xxl-1,.gy-xxl-1{--bs-gutter-y:0.25rem}.g-xxl-2,.gx-xxl-2{--bs-gutter-x:0.5rem}.g-xxl-2,.gy-xxl-2{--bs-gutter-y:0.5rem}.g-xxl-3,.gx-xxl-3{--bs-gutter-x:1rem}.g-xxl-3,.gy-xxl-3{--bs-gutter-y:1rem}.g-xxl-4,.gx-xxl-4{--bs-gutter-x:1.5rem}.g-xxl-4,.gy-xxl-4{--bs-gutter-y:1.5rem}.g-xxl-5,.gx-xxl-5{--bs-gutter-x:3rem}.g-xxl-5,.gy-xxl-5{--bs-gutter-y:3rem}}.table{--bs-table-bg:transparent;--bs-table-accent-bg:transparent;--bs-table-striped-color:#212529;--bs-table-striped-bg:rgba(0, 0, 0, 0.05);--bs-table-active-color:#212529;--bs-table-active-bg:rgba(0, 0, 0, 0.1);--bs-table-hover-color:#212529;--bs-table-hover-bg:rgba(0, 0, 0, 0.075);width:100%;margin-bottom:1rem;color:#212529;vertical-align:top;border-color:#dee2e6}.table>:not(caption)>*>*{padding:.5rem .5rem;background-color:var(--bs-table-bg);border-bottom-width:1px;box-shadow:inset 0 0 0 9999px var(--bs-table-accent-bg)}.table>tbody{vertical-align:inherit}.table>thead{vertical-align:bottom}.table>:not(:first-child){border-top:2px solid currentColor}.caption-top{caption-side:top}.table-sm>:not(caption)>*>*{padding:.25rem .25rem}.table-bordered>:not(caption)>*{border-width:1px 0}.table-bordered>:not(caption)>*>*{border-width:0 
1px}.table-borderless>:not(caption)>*>*{border-bottom-width:0}.table-borderless>:not(:first-child){border-top-width:0}.table-striped>tbody>tr:nth-of-type(odd)>*{--bs-table-accent-bg:var(--bs-table-striped-bg);color:var(--bs-table-striped-color)}.table-active{--bs-table-accent-bg:var(--bs-table-active-bg);color:var(--bs-table-active-color)}.table-hover>tbody>tr:hover>*{--bs-table-accent-bg:var(--bs-table-hover-bg);color:var(--bs-table-hover-color)}.table-primary{--bs-table-bg:#cfe2ff;--bs-table-striped-bg:#c5d7f2;--bs-table-striped-color:#000;--bs-table-active-bg:#bacbe6;--bs-table-active-color:#000;--bs-table-hover-bg:#bfd1ec;--bs-table-hover-color:#000;color:#000;border-color:#bacbe6}.table-secondary{--bs-table-bg:#e2e3e5;--bs-table-striped-bg:#d7d8da;--bs-table-striped-color:#000;--bs-table-active-bg:#cbccce;--bs-table-active-color:#000;--bs-table-hover-bg:#d1d2d4;--bs-table-hover-color:#000;color:#000;border-color:#cbccce}.table-success{--bs-table-bg:#d1e7dd;--bs-table-striped-bg:#c7dbd2;--bs-table-striped-color:#000;--bs-table-active-bg:#bcd0c7;--bs-table-active-color:#000;--bs-table-hover-bg:#c1d6cc;--bs-table-hover-color:#000;color:#000;border-color:#bcd0c7}.table-info{--bs-table-bg:#cff4fc;--bs-table-striped-bg:#c5e8ef;--bs-table-striped-color:#000;--bs-table-active-bg:#badce3;--bs-table-active-color:#000;--bs-table-hover-bg:#bfe2e9;--bs-table-hover-color:#000;color:#000;border-color:#badce3}.table-warning{--bs-table-bg:#fff3cd;--bs-table-striped-bg:#f2e7c3;--bs-table-striped-color:#000;--bs-table-active-bg:#e6dbb9;--bs-table-active-color:#000;--bs-table-hover-bg:#ece1be;--bs-table-hover-color:#000;color:#000;border-color:#e6dbb9}.table-danger{--bs-table-bg:#f8d7da;--bs-table-striped-bg:#eccccf;--bs-table-striped-color:#000;--bs-table-active-bg:#dfc2c4;--bs-table-active-color:#000;--bs-table-hover-bg:#e5c7ca;--bs-table-hover-color:#000;color:#000;border-color:#dfc2c4}.table-light{--bs-table-bg:#f8f9fa;--bs-table-striped-bg:#ecedee;--bs-table-striped-color:#000;--bs-table-active-bg:#dfe0e1;--bs-table-active-color:#000;--bs-table-hover-bg:#e5e6e7;--bs-table-hover-color:#000;color:#000;border-color:#dfe0e1}.table-dark{--bs-table-bg:#212529;--bs-table-striped-bg:#2c3034;--bs-table-striped-color:#fff;--bs-table-active-bg:#373b3e;--bs-table-active-color:#fff;--bs-table-hover-bg:#323539;--bs-table-hover-color:#fff;color:#fff;border-color:#373b3e}.table-responsive{overflow-x:auto;-webkit-overflow-scrolling:touch}@media (max-width:575.98px){.table-responsive-sm{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:767.98px){.table-responsive-md{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:991.98px){.table-responsive-lg{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:1199.98px){.table-responsive-xl{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:1399.98px){.table-responsive-xxl{overflow-x:auto;-webkit-overflow-scrolling:touch}}.form-label{margin-bottom:.5rem}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem}.form-text{margin-top:.25rem;font-size:.875em;color:#6c757d}.form-control{display:block;width:100%;padding:.375rem 
.75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#212529;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;-webkit-appearance:none;-moz-appearance:none;appearance:none;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control[type=file]{overflow:hidden}.form-control[type=file]:not(:disabled):not([readonly]){cursor:pointer}.form-control:focus{color:#212529;background-color:#fff;border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-control::-webkit-date-and-time-value{height:1.5em}.form-control::-moz-placeholder{color:#6c757d;opacity:1}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;-webkit-transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}.form-control::file-selector-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control::-webkit-file-upload-button{-webkit-transition:none;transition:none}.form-control::file-selector-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control:hover:not(:disabled):not([readonly])::file-selector-button{background-color:#dde0e3}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;-webkit-transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control::-webkit-file-upload-button{-webkit-transition:none;transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-right:0;padding-left:0}.form-control-sm{min-height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;border-radius:.2rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-.25rem 
-.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-sm::file-selector-button{padding:.25rem .5rem;margin:-.25rem -.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-.25rem -.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-lg{min-height:calc(1.5em + 1rem + 2px);padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}.form-control-lg::file-selector-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}textarea.form-control{min-height:calc(1.5em + .75rem + 2px)}textarea.form-control-sm{min-height:calc(1.5em + .5rem + 2px)}textarea.form-control-lg{min-height:calc(1.5em + 1rem + 2px)}.form-control-color{width:3rem;height:auto;padding:.375rem}.form-control-color:not(:disabled):not([readonly]){cursor:pointer}.form-control-color::-moz-color-swatch{height:1.5em;border-radius:.25rem}.form-control-color::-webkit-color-swatch{height:1.5em;border-radius:.25rem}.form-select{display:block;width:100%;padding:.375rem 2.25rem .375rem .75rem;-moz-padding-start:calc(0.75rem - 3px);font-size:1rem;font-weight:400;line-height:1.5;color:#212529;background-color:#fff;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right .75rem center;background-size:16px 12px;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.form-select{transition:none}}.form-select:focus{border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-select[multiple],.form-select[size]:not([size="1"]){padding-right:.75rem;background-image:none}.form-select:disabled{background-color:#e9ecef}.form-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #212529}.form-select-sm{padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem;border-radius:.2rem}.form-select-lg{padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem;border-radius:.3rem}.form-check{display:block;min-height:1.5rem;padding-left:1.5em;margin-bottom:.125rem}.form-check .form-check-input{float:left;margin-left:-1.5em}.form-check-input{width:1em;height:1em;margin-top:.25em;vertical-align:top;background-color:#fff;background-repeat:no-repeat;background-position:center;background-size:contain;border:1px solid rgba(0,0,0,.25);-webkit-appearance:none;-moz-appearance:none;appearance:none;-webkit-print-color-adjust:exact;color-adjust:exact}.form-check-input[type=checkbox]{border-radius:.25em}.form-check-input[type=radio]{border-radius:50%}.form-check-input:active{filter:brightness(90%)}.form-check-input:focus{border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-check-input:checked{background-color:#0d6efd;border-color:#0d6efd}.form-check-input:checked[type=checkbox]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' 
stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10l3 3l6-6'/%3e%3c/svg%3e")}.form-check-input:checked[type=radio]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e")}.form-check-input[type=checkbox]:indeterminate{background-color:#0d6efd;border-color:#0d6efd;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e")}.form-check-input:disabled{pointer-events:none;filter:none;opacity:.5}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{opacity:.5}.form-switch{padding-left:2.5em}.form-switch .form-check-input{width:2em;margin-left:-2.5em;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%280, 0, 0, 0.25%29'/%3e%3c/svg%3e");background-position:left center;border-radius:2em;transition:background-position .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-switch .form-check-input{transition:none}}.form-switch .form-check-input:focus{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%2386b7fe'/%3e%3c/svg%3e")}.form-switch .form-check-input:checked{background-position:right center;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.form-check-inline{display:inline-block;margin-right:1rem}.btn-check{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.btn-check:disabled+.btn,.btn-check[disabled]+.btn{pointer-events:none;filter:none;opacity:.65}.form-range{width:100%;height:1.5rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.form-range:focus{outline:0}.form-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(13,110,253,.25)}.form-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(13,110,253,.25)}.form-range::-moz-focus-outer{border:0}.form-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#0d6efd;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.form-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.form-range::-webkit-slider-thumb:active{background-color:#b6d4fe}.form-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.form-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#0d6efd;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media 
(prefers-reduced-motion:reduce){.form-range::-moz-range-thumb{-moz-transition:none;transition:none}}.form-range::-moz-range-thumb:active{background-color:#b6d4fe}.form-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.form-range:disabled{pointer-events:none}.form-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.form-range:disabled::-moz-range-thumb{background-color:#adb5bd}.form-floating{position:relative}.form-floating>.form-control,.form-floating>.form-select{height:calc(3.5rem + 2px);line-height:1.25}.form-floating>label{position:absolute;top:0;left:0;height:100%;padding:1rem .75rem;pointer-events:none;border:1px solid transparent;transform-origin:0 0;transition:opacity .1s ease-in-out,transform .1s ease-in-out}@media (prefers-reduced-motion:reduce){.form-floating>label{transition:none}}.form-floating>.form-control{padding:1rem .75rem}.form-floating>.form-control::-moz-placeholder{color:transparent}.form-floating>.form-control::placeholder{color:transparent}.form-floating>.form-control:not(:-moz-placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:focus,.form-floating>.form-control:not(:placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:-webkit-autofill{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-select{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:not(:-moz-placeholder-shown)~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(.15rem)}.form-floating>.form-control:focus~label,.form-floating>.form-control:not(:placeholder-shown)~label,.form-floating>.form-select~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(.15rem)}.form-floating>.form-control:-webkit-autofill~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(.15rem)}.input-group{position:relative;display:flex;flex-wrap:wrap;align-items:stretch;width:100%}.input-group>.form-control,.input-group>.form-select{position:relative;flex:1 1 auto;width:1%;min-width:0}.input-group>.form-control:focus,.input-group>.form-select:focus{z-index:3}.input-group .btn{position:relative;z-index:2}.input-group .btn:focus{z-index:3}.input-group-text{display:flex;align-items:center;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-lg>.btn,.input-group-lg>.form-control,.input-group-lg>.form-select,.input-group-lg>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.input-group-sm>.btn,.input-group-sm>.form-control,.input-group-sm>.form-select,.input-group-sm>.input-group-text{padding:.25rem 
.5rem;font-size:.875rem;border-radius:.2rem}.input-group-lg>.form-select,.input-group-sm>.form-select{padding-right:3rem}.input-group:not(.has-validation)>.dropdown-toggle:nth-last-child(n+3),.input-group:not(.has-validation)>:not(:last-child):not(.dropdown-toggle):not(.dropdown-menu){border-top-right-radius:0;border-bottom-right-radius:0}.input-group.has-validation>.dropdown-toggle:nth-last-child(n+4),.input-group.has-validation>:nth-last-child(n+3):not(.dropdown-toggle):not(.dropdown-menu){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>:not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback){margin-left:-1px;border-top-left-radius:0;border-bottom-left-radius:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:.875em;color:#198754}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;color:#fff;background-color:rgba(25,135,84,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#198754;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#198754;box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.form-select.is-valid,.was-validated .form-select:valid{border-color:#198754}.form-select.is-valid:not([multiple]):not([size]),.form-select.is-valid:not([multiple])[size="1"],.was-validated .form-select:valid:not([multiple]):not([size]),.was-validated .form-select:valid:not([multiple])[size="1"]{padding-right:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(.75em + .375rem) calc(.75em + .375rem)}.form-select.is-valid:focus,.was-validated .form-select:valid:focus{border-color:#198754;box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.form-check-input.is-valid,.was-validated .form-check-input:valid{border-color:#198754}.form-check-input.is-valid:checked,.was-validated .form-check-input:valid:checked{background-color:#198754}.form-check-input.is-valid:focus,.was-validated .form-check-input:valid:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#198754}.form-check-inline 
.form-check-input~.valid-feedback{margin-left:.5em}.input-group .form-control.is-valid,.input-group .form-select.is-valid,.was-validated .input-group .form-control:valid,.was-validated .input-group .form-select:valid{z-index:1}.input-group .form-control.is-valid:focus,.input-group .form-select.is-valid:focus,.was-validated .input-group .form-control:valid:focus,.was-validated .input-group .form-select:valid:focus{z-index:3}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:.875em;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated .form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.form-select.is-invalid,.was-validated .form-select:invalid{border-color:#dc3545}.form-select.is-invalid:not([multiple]):not([size]),.form-select.is-invalid:not([multiple])[size="1"],.was-validated .form-select:invalid:not([multiple]):not([size]),.was-validated .form-select:invalid:not([multiple])[size="1"]{padding-right:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(.75em + .375rem) calc(.75em + .375rem)}.form-select.is-invalid:focus,.was-validated .form-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.form-check-input.is-invalid,.was-validated .form-check-input:invalid{border-color:#dc3545}.form-check-input.is-invalid:checked,.was-validated .form-check-input:invalid:checked{background-color:#dc3545}.form-check-input.is-invalid:focus,.was-validated .form-check-input:invalid:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated .form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-inline .form-check-input~.invalid-feedback{margin-left:.5em}.input-group .form-control.is-invalid,.input-group 
.form-select.is-invalid,.was-validated .input-group .form-control:invalid,.was-validated .input-group .form-select:invalid{z-index:2}.input-group .form-control.is-invalid:focus,.input-group .form-select.is-invalid:focus,.was-validated .input-group .form-control:invalid:focus,.was-validated .input-group .form-select:invalid:focus{z-index:3}.btn{display:inline-block;font-weight:400;line-height:1.5;color:#212529;text-align:center;text-decoration:none;vertical-align:middle;cursor:pointer;-webkit-user-select:none;-moz-user-select:none;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529}.btn-check:focus+.btn,.btn:focus{outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.btn.disabled,.btn:disabled,fieldset:disabled .btn{pointer-events:none;opacity:.65}.btn-primary{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-primary:hover{color:#fff;background-color:#0b5ed7;border-color:#0a58ca}.btn-check:focus+.btn-primary,.btn-primary:focus{color:#fff;background-color:#0b5ed7;border-color:#0a58ca;box-shadow:0 0 0 .25rem rgba(49,132,253,.5)}.btn-check:active+.btn-primary,.btn-check:checked+.btn-primary,.btn-primary.active,.btn-primary:active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0a58ca;border-color:#0a53be}.btn-check:active+.btn-primary:focus,.btn-check:checked+.btn-primary:focus,.btn-primary.active:focus,.btn-primary:active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(49,132,253,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:hover{color:#fff;background-color:#5c636a;border-color:#565e64}.btn-check:focus+.btn-secondary,.btn-secondary:focus{color:#fff;background-color:#5c636a;border-color:#565e64;box-shadow:0 0 0 .25rem rgba(130,138,145,.5)}.btn-check:active+.btn-secondary,.btn-check:checked+.btn-secondary,.btn-secondary.active,.btn-secondary:active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#565e64;border-color:#51585e}.btn-check:active+.btn-secondary:focus,.btn-check:checked+.btn-secondary:focus,.btn-secondary.active:focus,.btn-secondary:active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-success{color:#fff;background-color:#198754;border-color:#198754}.btn-success:hover{color:#fff;background-color:#157347;border-color:#146c43}.btn-check:focus+.btn-success,.btn-success:focus{color:#fff;background-color:#157347;border-color:#146c43;box-shadow:0 0 0 .25rem rgba(60,153,110,.5)}.btn-check:active+.btn-success,.btn-check:checked+.btn-success,.btn-success.active,.btn-success:active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#146c43;border-color:#13653f}.btn-check:active+.btn-success:focus,.btn-check:checked+.btn-success:focus,.btn-success.active:focus,.btn-success:active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .25rem 
rgba(60,153,110,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#198754;border-color:#198754}.btn-info{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-info:hover{color:#000;background-color:#31d2f2;border-color:#25cff2}.btn-check:focus+.btn-info,.btn-info:focus{color:#000;background-color:#31d2f2;border-color:#25cff2;box-shadow:0 0 0 .25rem rgba(11,172,204,.5)}.btn-check:active+.btn-info,.btn-check:checked+.btn-info,.btn-info.active,.btn-info:active,.show>.btn-info.dropdown-toggle{color:#000;background-color:#3dd5f3;border-color:#25cff2}.btn-check:active+.btn-info:focus,.btn-check:checked+.btn-info:focus,.btn-info.active:focus,.btn-info:active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(11,172,204,.5)}.btn-info.disabled,.btn-info:disabled{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-warning{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-warning:hover{color:#000;background-color:#ffca2c;border-color:#ffc720}.btn-check:focus+.btn-warning,.btn-warning:focus{color:#000;background-color:#ffca2c;border-color:#ffc720;box-shadow:0 0 0 .25rem rgba(217,164,6,.5)}.btn-check:active+.btn-warning,.btn-check:checked+.btn-warning,.btn-warning.active,.btn-warning:active,.show>.btn-warning.dropdown-toggle{color:#000;background-color:#ffcd39;border-color:#ffc720}.btn-check:active+.btn-warning:focus,.btn-check:checked+.btn-warning:focus,.btn-warning.active:focus,.btn-warning:active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(217,164,6,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:hover{color:#fff;background-color:#bb2d3b;border-color:#b02a37}.btn-check:focus+.btn-danger,.btn-danger:focus{color:#fff;background-color:#bb2d3b;border-color:#b02a37;box-shadow:0 0 0 .25rem rgba(225,83,97,.5)}.btn-check:active+.btn-danger,.btn-check:checked+.btn-danger,.btn-danger.active,.btn-danger:active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#b02a37;border-color:#a52834}.btn-check:active+.btn-danger:focus,.btn-check:checked+.btn-danger:focus,.btn-danger.active:focus,.btn-danger:active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-light{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:focus+.btn-light,.btn-light:focus{color:#000;background-color:#f9fafb;border-color:#f9fafb;box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-check:active+.btn-light,.btn-check:checked+.btn-light,.btn-light.active,.btn-light:active,.show>.btn-light.dropdown-toggle{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:active+.btn-light:focus,.btn-check:checked+.btn-light:focus,.btn-light.active:focus,.btn-light:active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-light.disabled,.btn-light:disabled{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-dark{color:#fff;background-color:#212529;border-color:#212529}.btn-dark:hover{color:#fff;background-color:#1c1f23;border-color:#1a1e21}.btn-check:focus+.btn-dark,.btn-dark:focus{color:#fff;background-color:#1c1f23;border-color:#1a1e21;box-shadow:0 0 0 .25rem 
rgba(66,70,73,.5)}.btn-check:active+.btn-dark,.btn-check:checked+.btn-dark,.btn-dark.active,.btn-dark:active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1a1e21;border-color:#191c1f}.btn-check:active+.btn-dark:focus,.btn-check:checked+.btn-dark:focus,.btn-dark.active:focus,.btn-dark:active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(66,70,73,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#212529;border-color:#212529}.btn-outline-primary{color:#0d6efd;border-color:#0d6efd}.btn-outline-primary:hover{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-check:focus+.btn-outline-primary,.btn-outline-primary:focus{box-shadow:0 0 0 .25rem rgba(13,110,253,.5)}.btn-check:active+.btn-outline-primary,.btn-check:checked+.btn-outline-primary,.btn-outline-primary.active,.btn-outline-primary.dropdown-toggle.show,.btn-outline-primary:active{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-check:active+.btn-outline-primary:focus,.btn-check:checked+.btn-outline-primary:focus,.btn-outline-primary.active:focus,.btn-outline-primary.dropdown-toggle.show:focus,.btn-outline-primary:active:focus{box-shadow:0 0 0 .25rem rgba(13,110,253,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#0d6efd;background-color:transparent}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-check:focus+.btn-outline-secondary,.btn-outline-secondary:focus{box-shadow:0 0 0 .25rem rgba(108,117,125,.5)}.btn-check:active+.btn-outline-secondary,.btn-check:checked+.btn-outline-secondary,.btn-outline-secondary.active,.btn-outline-secondary.dropdown-toggle.show,.btn-outline-secondary:active{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-check:active+.btn-outline-secondary:focus,.btn-check:checked+.btn-outline-secondary:focus,.btn-outline-secondary.active:focus,.btn-outline-secondary.dropdown-toggle.show:focus,.btn-outline-secondary:active:focus{box-shadow:0 0 0 .25rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-success{color:#198754;border-color:#198754}.btn-outline-success:hover{color:#fff;background-color:#198754;border-color:#198754}.btn-check:focus+.btn-outline-success,.btn-outline-success:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.5)}.btn-check:active+.btn-outline-success,.btn-check:checked+.btn-outline-success,.btn-outline-success.active,.btn-outline-success.dropdown-toggle.show,.btn-outline-success:active{color:#fff;background-color:#198754;border-color:#198754}.btn-check:active+.btn-outline-success:focus,.btn-check:checked+.btn-outline-success:focus,.btn-outline-success.active:focus,.btn-outline-success.dropdown-toggle.show:focus,.btn-outline-success:active:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#198754;background-color:transparent}.btn-outline-info{color:#0dcaf0;border-color:#0dcaf0}.btn-outline-info:hover{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-check:focus+.btn-outline-info,.btn-outline-info:focus{box-shadow:0 0 0 .25rem 
rgba(13,202,240,.5)}.btn-check:active+.btn-outline-info,.btn-check:checked+.btn-outline-info,.btn-outline-info.active,.btn-outline-info.dropdown-toggle.show,.btn-outline-info:active{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-check:active+.btn-outline-info:focus,.btn-check:checked+.btn-outline-info:focus,.btn-outline-info.active:focus,.btn-outline-info.dropdown-toggle.show:focus,.btn-outline-info:active:focus{box-shadow:0 0 0 .25rem rgba(13,202,240,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#0dcaf0;background-color:transparent}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-check:focus+.btn-outline-warning,.btn-outline-warning:focus{box-shadow:0 0 0 .25rem rgba(255,193,7,.5)}.btn-check:active+.btn-outline-warning,.btn-check:checked+.btn-outline-warning,.btn-outline-warning.active,.btn-outline-warning.dropdown-toggle.show,.btn-outline-warning:active{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-check:active+.btn-outline-warning:focus,.btn-check:checked+.btn-outline-warning:focus,.btn-outline-warning.active:focus,.btn-outline-warning.dropdown-toggle.show:focus,.btn-outline-warning:active:focus{box-shadow:0 0 0 .25rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-check:focus+.btn-outline-danger,.btn-outline-danger:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.5)}.btn-check:active+.btn-outline-danger,.btn-check:checked+.btn-outline-danger,.btn-outline-danger.active,.btn-outline-danger.dropdown-toggle.show,.btn-outline-danger:active{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-check:active+.btn-outline-danger:focus,.btn-check:checked+.btn-outline-danger:focus,.btn-outline-danger.active:focus,.btn-outline-danger.dropdown-toggle.show:focus,.btn-outline-danger:active:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-check:focus+.btn-outline-light,.btn-outline-light:focus{box-shadow:0 0 0 .25rem rgba(248,249,250,.5)}.btn-check:active+.btn-outline-light,.btn-check:checked+.btn-outline-light,.btn-outline-light.active,.btn-outline-light.dropdown-toggle.show,.btn-outline-light:active{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-check:active+.btn-outline-light:focus,.btn-check:checked+.btn-outline-light:focus,.btn-outline-light.active:focus,.btn-outline-light.dropdown-toggle.show:focus,.btn-outline-light:active:focus{box-shadow:0 0 0 .25rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-dark{color:#212529;border-color:#212529}.btn-outline-dark:hover{color:#fff;background-color:#212529;border-color:#212529}.btn-check:focus+.btn-outline-dark,.btn-outline-dark:focus{box-shadow:0 0 0 .25rem 
rgba(33,37,41,.5)}.btn-check:active+.btn-outline-dark,.btn-check:checked+.btn-outline-dark,.btn-outline-dark.active,.btn-outline-dark.dropdown-toggle.show,.btn-outline-dark:active{color:#fff;background-color:#212529;border-color:#212529}.btn-check:active+.btn-outline-dark:focus,.btn-check:checked+.btn-outline-dark:focus,.btn-outline-dark.active:focus,.btn-outline-dark.dropdown-toggle.show:focus,.btn-outline-dark:active:focus{box-shadow:0 0 0 .25rem rgba(33,37,41,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#212529;background-color:transparent}.btn-link{font-weight:400;color:#0d6efd;text-decoration:underline}.btn-link:hover{color:#0a58ca}.btn-link.disabled,.btn-link:disabled{color:#6c757d}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem .5rem;font-size:.875rem;border-radius:.2rem}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.collapsing.collapse-horizontal{width:0;height:auto;transition:width .35s ease}@media (prefers-reduced-motion:reduce){.collapsing.collapse-horizontal{transition:none}}.dropdown,.dropend,.dropstart,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid transparent;border-bottom:0;border-left:.3em solid transparent}.dropdown-toggle:empty::after{margin-left:0}.dropdown-menu{position:absolute;z-index:1000;display:none;min-width:10rem;padding:.5rem 0;margin:0;font-size:1rem;color:#212529;text-align:left;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu[data-bs-popper]{top:100%;left:0;margin-top:.125rem}.dropdown-menu-start{--bs-position:start}.dropdown-menu-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-end{--bs-position:end}.dropdown-menu-end[data-bs-popper]{right:0;left:auto}@media (min-width:576px){.dropdown-menu-sm-start{--bs-position:start}.dropdown-menu-sm-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-sm-end{--bs-position:end}.dropdown-menu-sm-end[data-bs-popper]{right:0;left:auto}}@media (min-width:768px){.dropdown-menu-md-start{--bs-position:start}.dropdown-menu-md-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-md-end{--bs-position:end}.dropdown-menu-md-end[data-bs-popper]{right:0;left:auto}}@media (min-width:992px){.dropdown-menu-lg-start{--bs-position:start}.dropdown-menu-lg-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-lg-end{--bs-position:end}.dropdown-menu-lg-end[data-bs-popper]{right:0;left:auto}}@media (min-width:1200px){.dropdown-menu-xl-start{--bs-position:start}.dropdown-menu-xl-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-xl-end{--bs-position:end}.dropdown-menu-xl-end[data-bs-popper]{right:0;left:auto}}@media (min-width:1400px){.dropdown-menu-xxl-start{--bs-position:start}.dropdown-menu-xxl-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-xxl-end{--bs-position:end}.dropdown-menu-xxl-end[data-bs-popper]{right:0;left:auto}}.dropup .dropdown-menu[data-bs-popper]{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup 
.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid transparent;border-bottom:.3em solid;border-left:.3em solid transparent}.dropup .dropdown-toggle:empty::after{margin-left:0}.dropend .dropdown-menu[data-bs-popper]{top:0;right:auto;left:100%;margin-top:0;margin-left:.125rem}.dropend .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:0;border-bottom:.3em solid transparent;border-left:.3em solid}.dropend .dropdown-toggle:empty::after{margin-left:0}.dropend .dropdown-toggle::after{vertical-align:0}.dropstart .dropdown-menu[data-bs-popper]{top:0;right:100%;left:auto;margin-top:0;margin-right:.125rem}.dropstart .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:""}.dropstart .dropdown-toggle::after{display:none}.dropstart .dropdown-toggle::before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:.3em solid;border-bottom:.3em solid transparent}.dropstart .dropdown-toggle:empty::after{margin-left:0}.dropstart .dropdown-toggle::before{vertical-align:0}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid rgba(0,0,0,.15)}.dropdown-item{display:block;width:100%;padding:.25rem 1rem;clear:both;font-weight:400;color:#212529;text-align:inherit;text-decoration:none;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#1e2125;background-color:#e9ecef}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#0d6efd}.dropdown-item.disabled,.dropdown-item:disabled{color:#adb5bd;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1rem;color:#212529}.dropdown-menu-dark{color:#dee2e6;background-color:#343a40;border-color:rgba(0,0,0,.15)}.dropdown-menu-dark .dropdown-item{color:#dee2e6}.dropdown-menu-dark .dropdown-item:focus,.dropdown-menu-dark .dropdown-item:hover{color:#fff;background-color:rgba(255,255,255,.15)}.dropdown-menu-dark .dropdown-item.active,.dropdown-menu-dark .dropdown-item:active{color:#fff;background-color:#0d6efd}.dropdown-menu-dark .dropdown-item.disabled,.dropdown-menu-dark .dropdown-item:disabled{color:#adb5bd}.dropdown-menu-dark .dropdown-divider{border-color:rgba(0,0,0,.15)}.dropdown-menu-dark .dropdown-item-text{color:#dee2e6}.dropdown-menu-dark .dropdown-header{color:#adb5bd}.btn-group,.btn-group-vertical{position:relative;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;flex:1 1 auto}.btn-group-vertical>.btn-check:checked+.btn,.btn-group-vertical>.btn-check:focus+.btn,.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group-vertical>.btn:hover,.btn-group>.btn-check:checked+.btn,.btn-group>.btn-check:focus+.btn,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus,.btn-group>.btn:hover{z-index:1}.btn-toolbar{display:flex;flex-wrap:wrap;justify-content:flex-start}.btn-toolbar 
.input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-left:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-right-radius:0;border-bottom-right-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:nth-child(n+3),.btn-group>:not(.btn-check)+.btn{border-top-left-radius:0;border-bottom-left-radius:0}.dropdown-toggle-split{padding-right:.5625rem;padding-left:.5625rem}.dropdown-toggle-split::after,.dropend .dropdown-toggle-split::after,.dropup .dropdown-toggle-split::after{margin-left:0}.dropstart .dropdown-toggle-split::before{margin-right:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-right:.375rem;padding-left:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-right:.75rem;padding-left:.75rem}.btn-group-vertical{flex-direction:column;align-items:flex-start;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn~.btn{border-top-left-radius:0;border-top-right-radius:0}.nav{display:flex;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem;color:#0d6efd;text-decoration:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out}@media (prefers-reduced-motion:reduce){.nav-link{transition:none}}.nav-link:focus,.nav-link:hover{color:#0a58ca}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-link{margin-bottom:-1px;background:0 0;border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6;isolation:isolate}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{background:0 0;border:0;border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#0d6efd}.nav-fill .nav-item,.nav-fill>.nav-link{flex:1 1 auto;text-align:center}.nav-justified .nav-item,.nav-justified>.nav-link{flex-basis:0;flex-grow:1;text-align:center}.nav-fill .nav-item .nav-link,.nav-justified .nav-item 
.nav-link{width:100%}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:flex;flex-wrap:wrap;align-items:center;justify-content:space-between;padding-top:.5rem;padding-bottom:.5rem}.navbar>.container,.navbar>.container-fluid,.navbar>.container-lg,.navbar>.container-md,.navbar>.container-sm,.navbar>.container-xl,.navbar>.container-xxl{display:flex;flex-wrap:inherit;align-items:center;justify-content:space-between}.navbar-brand{padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;text-decoration:none;white-space:nowrap}.navbar-nav{display:flex;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static}.navbar-text{padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{flex-basis:100%;flex-grow:1;align-items:center}.navbar-toggler{padding:.25rem .75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem;transition:box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.navbar-toggler{transition:none}}.navbar-toggler:hover{text-decoration:none}.navbar-toggler:focus{text-decoration:none;outline:0;box-shadow:0 0 0 .25rem}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;background-repeat:no-repeat;background-position:center;background-size:100%}.navbar-nav-scroll{max-height:var(--bs-scroll-height,75vh);overflow-y:auto}@media (min-width:576px){.navbar-expand-sm{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-sm .navbar-nav{flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm .navbar-nav-scroll{overflow:visible}.navbar-expand-sm .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}.navbar-expand-sm .offcanvas-header{display:none}.navbar-expand-sm .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand-sm .offcanvas-bottom,.navbar-expand-sm .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-sm .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:768px){.navbar-expand-md{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-md .navbar-nav{flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md .navbar-nav-scroll{overflow:visible}.navbar-expand-md .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}.navbar-expand-md .offcanvas-header{display:none}.navbar-expand-md .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand-md .offcanvas-bottom,.navbar-expand-md .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-md .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:992px){.navbar-expand-lg{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-lg .navbar-nav{flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg 
.navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg .navbar-nav-scroll{overflow:visible}.navbar-expand-lg .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}.navbar-expand-lg .offcanvas-header{display:none}.navbar-expand-lg .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand-lg .offcanvas-bottom,.navbar-expand-lg .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-lg .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:1200px){.navbar-expand-xl{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-xl .navbar-nav{flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl .navbar-nav-scroll{overflow:visible}.navbar-expand-xl .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}.navbar-expand-xl .offcanvas-header{display:none}.navbar-expand-xl .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand-xl .offcanvas-bottom,.navbar-expand-xl .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-xl .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:1400px){.navbar-expand-xxl{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-xxl .navbar-nav{flex-direction:row}.navbar-expand-xxl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xxl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xxl .navbar-nav-scroll{overflow:visible}.navbar-expand-xxl .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-xxl .navbar-toggler{display:none}.navbar-expand-xxl .offcanvas-header{display:none}.navbar-expand-xxl .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand-xxl .offcanvas-bottom,.navbar-expand-xxl .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-xxl .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}.navbar-expand{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand .navbar-nav{flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand .navbar-nav-scroll{overflow:visible}.navbar-expand .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-expand .offcanvas-header{display:none}.navbar-expand .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-right:0;border-left:0;transition:none;transform:none}.navbar-expand .offcanvas-bottom,.navbar-expand .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.55)}.navbar-light .navbar-nav 
.nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.55);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.55)}.navbar-light .navbar-text a,.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.55)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.55);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.55)}.navbar-dark .navbar-text a,.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:flex;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card>.card-header+.list-group,.card>.list-group+.card-footer{border-top:0}.card-body{flex:1 1 auto;padding:1rem 1rem}.card-title{margin-bottom:.5rem}.card-subtitle{margin-top:-.25rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link+.card-link{margin-left:1rem}.card-header{padding:.5rem 1rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-footer{padding:.5rem 1rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-right:-.5rem;margin-bottom:-.5rem;margin-left:-.5rem;border-bottom:0}.card-header-pills{margin-right:-.5rem;margin-left:-.5rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1rem;border-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom,.card-img-top{width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem 
- 1px)}.card-group>.card{margin-bottom:.75rem}@media (min-width:576px){.card-group{display:flex;flex-flow:row wrap}.card-group>.card{flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.accordion-button{position:relative;display:flex;align-items:center;width:100%;padding:1rem 1.25rem;font-size:1rem;color:#212529;text-align:left;background-color:#fff;border:0;border-radius:0;overflow-anchor:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,border-radius .15s ease}@media (prefers-reduced-motion:reduce){.accordion-button{transition:none}}.accordion-button:not(.collapsed){color:#0c63e4;background-color:#e7f1ff;box-shadow:inset 0 -1px 0 rgba(0,0,0,.125)}.accordion-button:not(.collapsed)::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%230c63e4'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");transform:rotate(-180deg)}.accordion-button::after{flex-shrink:0;width:1.25rem;height:1.25rem;margin-left:auto;content:"";background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23212529'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-size:1.25rem;transition:transform .2s ease-in-out}@media (prefers-reduced-motion:reduce){.accordion-button::after{transition:none}}.accordion-button:hover{z-index:2}.accordion-button:focus{z-index:3;border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.accordion-header{margin-bottom:0}.accordion-item{background-color:#fff;border:1px solid rgba(0,0,0,.125)}.accordion-item:first-of-type{border-top-left-radius:.25rem;border-top-right-radius:.25rem}.accordion-item:first-of-type .accordion-button{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.accordion-item:not(:first-of-type){border-top:0}.accordion-item:last-of-type{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.accordion-item:last-of-type .accordion-button.collapsed{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.accordion-item:last-of-type .accordion-collapse{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.accordion-body{padding:1rem 1.25rem}.accordion-flush .accordion-collapse{border-width:0}.accordion-flush .accordion-item{border-right:0;border-left:0;border-radius:0}.accordion-flush .accordion-item:first-child{border-top:0}.accordion-flush .accordion-item:last-child{border-bottom:0}.accordion-flush 
.accordion-item .accordion-button{border-radius:0}.breadcrumb{display:flex;flex-wrap:wrap;padding:0 0;margin-bottom:1rem;list-style:none}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{float:left;padding-right:.5rem;color:#6c757d;content:var(--bs-breadcrumb-divider, "/")}.breadcrumb-item.active{color:#6c757d}.pagination{display:flex;padding-left:0;list-style:none}.page-link{position:relative;display:block;color:#0d6efd;text-decoration:none;background-color:#fff;border:1px solid #dee2e6;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.page-link{transition:none}}.page-link:hover{z-index:2;color:#0a58ca;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;color:#0a58ca;background-color:#e9ecef;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.page-item:not(:first-child) .page-link{margin-left:-1px}.page-item.active .page-link{z-index:3;color:#fff;background-color:#0d6efd;border-color:#0d6efd}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;background-color:#fff;border-color:#dee2e6}.page-link{padding:.375rem .75rem}.page-item:first-child .page-link{border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.35em .65em;font-size:.75em;font-weight:700;line-height:1;color:#fff;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.alert{position:relative;padding:1rem 1rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:3rem}.alert-dismissible .btn-close{position:absolute;top:0;right:0;z-index:2;padding:1.25rem 1rem}.alert-primary{color:#084298;background-color:#cfe2ff;border-color:#b6d4fe}.alert-primary .alert-link{color:#06357a}.alert-secondary{color:#41464b;background-color:#e2e3e5;border-color:#d3d6d8}.alert-secondary .alert-link{color:#34383c}.alert-success{color:#0f5132;background-color:#d1e7dd;border-color:#badbcc}.alert-success .alert-link{color:#0c4128}.alert-info{color:#055160;background-color:#cff4fc;border-color:#b6effb}.alert-info .alert-link{color:#04414d}.alert-warning{color:#664d03;background-color:#fff3cd;border-color:#ffecb5}.alert-warning .alert-link{color:#523e02}.alert-danger{color:#842029;background-color:#f8d7da;border-color:#f5c2c7}.alert-danger .alert-link{color:#6a1a21}.alert-light{color:#636464;background-color:#fefefe;border-color:#fdfdfe}.alert-light .alert-link{color:#4f5050}.alert-dark{color:#141619;background-color:#d3d3d4;border-color:#bcbebf}.alert-dark .alert-link{color:#101214}@-webkit-keyframes progress-bar-stripes{0%{background-position-x:1rem}}@keyframes 
progress-bar-stripes{0%{background-position-x:1rem}}.progress{display:flex;height:1rem;overflow:hidden;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:flex;flex-direction:column;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#0d6efd;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:1s linear infinite progress-bar-stripes;animation:1s linear infinite progress-bar-stripes}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.list-group{display:flex;flex-direction:column;padding-left:0;margin-bottom:0;border-radius:.25rem}.list-group-numbered{list-style-type:none;counter-reset:section}.list-group-numbered>li::before{content:counters(section, ".") ". ";counter-increment:section}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.5rem 1rem;color:#212529;text-decoration:none;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:inherit;border-top-right-radius:inherit}.list-group-item:last-child{border-bottom-right-radius:inherit;border-bottom-left-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#0d6efd;border-color:#0d6efd}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:576px){.list-group-horizontal-sm{flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media 
(min-width:768px){.list-group-horizontal-md{flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1400px){.list-group-horizontal-xxl{flex-direction:row}.list-group-horizontal-xxl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xxl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xxl>.list-group-item.active{margin-top:0}.list-group-horizontal-xxl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xxl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 
1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#084298;background-color:#cfe2ff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#084298;background-color:#bacbe6}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#084298;border-color:#084298}.list-group-item-secondary{color:#41464b;background-color:#e2e3e5}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#41464b;background-color:#cbccce}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#41464b;border-color:#41464b}.list-group-item-success{color:#0f5132;background-color:#d1e7dd}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#0f5132;background-color:#bcd0c7}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#0f5132;border-color:#0f5132}.list-group-item-info{color:#055160;background-color:#cff4fc}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#055160;background-color:#badce3}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#055160;border-color:#055160}.list-group-item-warning{color:#664d03;background-color:#fff3cd}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#664d03;background-color:#e6dbb9}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#664d03;border-color:#664d03}.list-group-item-danger{color:#842029;background-color:#f8d7da}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#842029;background-color:#dfc2c4}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#842029;border-color:#842029}.list-group-item-light{color:#636464;background-color:#fefefe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#636464;background-color:#e5e5e5}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#636464;border-color:#636464}.list-group-item-dark{color:#141619;background-color:#d3d3d4}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#141619;background-color:#bebebf}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#141619;border-color:#141619}.btn-close{box-sizing:content-box;width:1em;height:1em;padding:.25em .25em;color:#000;background:transparent url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 011.414 0L8 6.586 14.293.293a1 1 0 111.414 1.414L9.414 8l6.293 6.293a1 1 0 01-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 01-1.414-1.414L6.586 8 .293 1.707a1 1 0 010-1.414z'/%3e%3c/svg%3e") center/1em auto no-repeat;border:0;border-radius:.25rem;opacity:.5}.btn-close:hover{color:#000;text-decoration:none;opacity:.75}.btn-close:focus{outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25);opacity:1}.btn-close.disabled,.btn-close:disabled{pointer-events:none;-webkit-user-select:none;-moz-user-select:none;user-select:none;opacity:.25}.btn-close-white{filter:invert(1) grayscale(100%) 
brightness(200%)}.toast{width:350px;max-width:100%;font-size:.875rem;pointer-events:auto;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .5rem 1rem rgba(0,0,0,.15);border-radius:.25rem}.toast.showing{opacity:0}.toast:not(.show){display:none}.toast-container{width:-webkit-max-content;width:-moz-max-content;width:max-content;max-width:100%;pointer-events:none}.toast-container>:not(:last-child){margin-bottom:.75rem}.toast-header{display:flex;align-items:center;padding:.5rem .75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05);border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.toast-header .btn-close{margin-right:-.375rem;margin-left:.75rem}.toast-body{padding:.75rem;word-wrap:break-word}.modal{position:fixed;top:0;left:0;z-index:1055;display:none;width:100%;height:100%;overflow-x:hidden;overflow-y:auto;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:transform .3s ease-out;transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{transform:none}.modal.modal-static .modal-dialog{transform:scale(1.02)}.modal-dialog-scrollable{height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:100%;overflow:hidden}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:flex;align-items:center;min-height:calc(100% - 1rem)}.modal-content{position:relative;display:flex;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1050;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:flex;flex-shrink:0;align-items:center;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .btn-close{padding:.5rem .5rem;margin:-.5rem -.5rem -.5rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;flex:1 1 auto;padding:1rem}.modal-footer{display:flex;flex-wrap:wrap;flex-shrink:0;align-items:center;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{height:calc(100% - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-sm{max-width:300px}}@media (min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.modal-fullscreen{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen .modal-header{border-radius:0}.modal-fullscreen .modal-body{overflow-y:auto}.modal-fullscreen .modal-footer{border-radius:0}@media (max-width:575.98px){.modal-fullscreen-sm-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-sm-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-sm-down .modal-header{border-radius:0}.modal-fullscreen-sm-down 
.modal-body{overflow-y:auto}.modal-fullscreen-sm-down .modal-footer{border-radius:0}}@media (max-width:767.98px){.modal-fullscreen-md-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-md-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-md-down .modal-header{border-radius:0}.modal-fullscreen-md-down .modal-body{overflow-y:auto}.modal-fullscreen-md-down .modal-footer{border-radius:0}}@media (max-width:991.98px){.modal-fullscreen-lg-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-lg-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-lg-down .modal-header{border-radius:0}.modal-fullscreen-lg-down .modal-body{overflow-y:auto}.modal-fullscreen-lg-down .modal-footer{border-radius:0}}@media (max-width:1199.98px){.modal-fullscreen-xl-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-xl-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-xl-down .modal-header{border-radius:0}.modal-fullscreen-xl-down .modal-body{overflow-y:auto}.modal-fullscreen-xl-down .modal-footer{border-radius:0}}@media (max-width:1399.98px){.modal-fullscreen-xxl-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-xxl-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-xxl-down .modal-header{border-radius:0}.modal-fullscreen-xxl-down .modal-body{overflow-y:auto}.modal-fullscreen-xxl-down .modal-footer{border-radius:0}}.tooltip{position:absolute;z-index:1080;display:block;margin:0;font-family:var(--bs-font-sans-serif);font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .tooltip-arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .tooltip-arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[data-popper-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow,.bs-tooltip-top .tooltip-arrow{bottom:0}.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow::before,.bs-tooltip-top .tooltip-arrow::before{top:-1px;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[data-popper-placement^=right],.bs-tooltip-end{padding:0 .4rem}.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow,.bs-tooltip-end .tooltip-arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow::before,.bs-tooltip-end .tooltip-arrow::before{right:-1px;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[data-popper-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow,.bs-tooltip-bottom .tooltip-arrow{top:0}.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow::before,.bs-tooltip-bottom .tooltip-arrow::before{bottom:-1px;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[data-popper-placement^=left],.bs-tooltip-start{padding:0 .4rem}.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow,.bs-tooltip-start .tooltip-arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow::before,.bs-tooltip-start .tooltip-arrow::before{left:-1px;border-width:.4rem 0 .4rem 
.4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1070;display:block;max-width:276px;font-family:var(--bs-font-sans-serif);font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .popover-arrow{position:absolute;display:block;width:1rem;height:.5rem}.popover .popover-arrow::after,.popover .popover-arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow,.bs-popover-top>.popover-arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::before,.bs-popover-top>.popover-arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::after,.bs-popover-top>.popover-arrow::after{bottom:1px;border-width:.5rem .5rem 0;border-top-color:#fff}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow,.bs-popover-end>.popover-arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow::before,.bs-popover-end>.popover-arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow::after,.bs-popover-end>.popover-arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow,.bs-popover-bottom>.popover-arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow::before,.bs-popover-bottom>.popover-arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow::after,.bs-popover-bottom>.popover-arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[data-popper-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f0f0f0}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow,.bs-popover-start>.popover-arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow::before,.bs-popover-start>.popover-arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow::after,.bs-popover-start>.popover-arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem 1rem;margin-bottom:0;font-size:1rem;background-color:#f0f0f0;border-bottom:1px solid rgba(0,0,0,.2);border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:1rem 
1rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-end,.carousel-item-next:not(.carousel-item-start){transform:translateX(100%)}.active.carousel-item-start,.carousel-item-prev:not(.carousel-item-end){transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;transform:none}.carousel-fade .carousel-item-next.carousel-item-start,.carousel-fade .carousel-item-prev.carousel-item-end,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-end,.carousel-fade .active.carousel-item-start{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-end,.carousel-fade .active.carousel-item-start{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:flex;align-items:center;justify-content:center;width:15%;padding:0;color:#fff;text-align:center;background:0 0;border:0;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:2rem;height:2rem;background-repeat:no-repeat;background-position:50%;background-size:100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:2;display:flex;justify-content:center;padding:0;margin-right:15%;margin-bottom:1rem;margin-left:15%;list-style:none}.carousel-indicators [data-bs-target]{box-sizing:content-box;flex:0 1 auto;width:30px;height:3px;padding:0;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border:0;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators [data-bs-target]{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:1.25rem;left:15%;padding-top:1.25rem;padding-bottom:1.25rem;color:#fff;text-align:center}.carousel-dark .carousel-control-next-icon,.carousel-dark .carousel-control-prev-icon{filter:invert(1) grayscale(100)}.carousel-dark 
.carousel-indicators [data-bs-target]{background-color:#000}.carousel-dark .carousel-caption{color:#000}@-webkit-keyframes spinner-border{to{transform:rotate(360deg)}}@keyframes spinner-border{to{transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:-.125em;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:.75s linear infinite spinner-border;animation:.75s linear infinite spinner-border}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}@keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:-.125em;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:.75s linear infinite spinner-grow;animation:.75s linear infinite spinner-grow}.spinner-grow-sm{width:1rem;height:1rem}@media (prefers-reduced-motion:reduce){.spinner-border,.spinner-grow{-webkit-animation-duration:1.5s;animation-duration:1.5s}}.offcanvas{position:fixed;bottom:0;z-index:1045;display:flex;flex-direction:column;max-width:100%;visibility:hidden;background-color:#fff;background-clip:padding-box;outline:0;transition:transform .3s ease-in-out}@media (prefers-reduced-motion:reduce){.offcanvas{transition:none}}.offcanvas-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.offcanvas-backdrop.fade{opacity:0}.offcanvas-backdrop.show{opacity:.5}.offcanvas-header{display:flex;align-items:center;justify-content:space-between;padding:1rem 1rem}.offcanvas-header .btn-close{padding:.5rem .5rem;margin-top:-.5rem;margin-right:-.5rem;margin-bottom:-.5rem}.offcanvas-title{margin-bottom:0;line-height:1.5}.offcanvas-body{flex-grow:1;padding:1rem 1rem;overflow-y:auto}.offcanvas-start{top:0;left:0;width:400px;border-right:1px solid rgba(0,0,0,.2);transform:translateX(-100%)}.offcanvas-end{top:0;right:0;width:400px;border-left:1px solid rgba(0,0,0,.2);transform:translateX(100%)}.offcanvas-top{top:0;right:0;left:0;height:30vh;max-height:100%;border-bottom:1px solid rgba(0,0,0,.2);transform:translateY(-100%)}.offcanvas-bottom{right:0;left:0;height:30vh;max-height:100%;border-top:1px solid rgba(0,0,0,.2);transform:translateY(100%)}.offcanvas.show{transform:none}.placeholder{display:inline-block;min-height:1em;vertical-align:middle;cursor:wait;background-color:currentColor;opacity:.5}.placeholder.btn::before{display:inline-block;content:""}.placeholder-xs{min-height:.6em}.placeholder-sm{min-height:.8em}.placeholder-lg{min-height:1.2em}.placeholder-glow .placeholder{-webkit-animation:placeholder-glow 2s ease-in-out infinite;animation:placeholder-glow 2s ease-in-out infinite}@-webkit-keyframes placeholder-glow{50%{opacity:.2}}@keyframes placeholder-glow{50%{opacity:.2}}.placeholder-wave{-webkit-mask-image:linear-gradient(130deg,#000 55%,rgba(0,0,0,0.8) 75%,#000 95%);mask-image:linear-gradient(130deg,#000 55%,rgba(0,0,0,0.8) 75%,#000 95%);-webkit-mask-size:200% 100%;mask-size:200% 100%;-webkit-animation:placeholder-wave 2s linear infinite;animation:placeholder-wave 2s linear infinite}@-webkit-keyframes placeholder-wave{100%{-webkit-mask-position:-200% 0%;mask-position:-200% 0%}}@keyframes placeholder-wave{100%{-webkit-mask-position:-200% 0%;mask-position:-200% 
0%}}.clearfix::after{display:block;clear:both;content:""}.link-primary{color:#0d6efd}.link-primary:focus,.link-primary:hover{color:#0a58ca}.link-secondary{color:#6c757d}.link-secondary:focus,.link-secondary:hover{color:#565e64}.link-success{color:#198754}.link-success:focus,.link-success:hover{color:#146c43}.link-info{color:#0dcaf0}.link-info:focus,.link-info:hover{color:#3dd5f3}.link-warning{color:#ffc107}.link-warning:focus,.link-warning:hover{color:#ffcd39}.link-danger{color:#dc3545}.link-danger:focus,.link-danger:hover{color:#b02a37}.link-light{color:#f8f9fa}.link-light:focus,.link-light:hover{color:#f9fafb}.link-dark{color:#212529}.link-dark:focus,.link-dark:hover{color:#1a1e21}.ratio{position:relative;width:100%}.ratio::before{display:block;padding-top:var(--bs-aspect-ratio);content:""}.ratio>*{position:absolute;top:0;left:0;width:100%;height:100%}.ratio-1x1{--bs-aspect-ratio:100%}.ratio-4x3{--bs-aspect-ratio:75%}.ratio-16x9{--bs-aspect-ratio:56.25%}.ratio-21x9{--bs-aspect-ratio:42.8571428571%}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}@media (min-width:576px){.sticky-sm-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:768px){.sticky-md-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:992px){.sticky-lg-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:1200px){.sticky-xl-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:1400px){.sticky-xxl-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.hstack{display:flex;flex-direction:row;align-items:center;align-self:stretch}.vstack{display:flex;flex:1 1 auto;flex-direction:column;align-self:stretch}.visually-hidden,.visually-hidden-focusable:not(:focus):not(:focus-within){position:absolute!important;width:1px!important;height:1px!important;padding:0!important;margin:-1px!important;overflow:hidden!important;clip:rect(0,0,0,0)!important;white-space:nowrap!important;border:0!important}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.vr{display:inline-block;align-self:stretch;width:1px;min-height:1em;background-color:currentColor;opacity:.25}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.float-start{float:left!important}.float-end{float:right!important}.float-none{float:none!important}.opacity-0{opacity:0!important}.opacity-25{opacity:.25!important}.opacity-50{opacity:.5!important}.opacity-75{opacity:.75!important}.opacity-100{opacity:1!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.overflow-visible{overflow:visible!important}.overflow-scroll{overflow:scroll!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-grid{display:grid!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:flex!important}.d-inline-flex{display:inline-flex!important}.d-none{display:none!important}.shadow{box-shadow:0 .5rem 
1rem rgba(0,0,0,.15)!important}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.top-0{top:0!important}.top-50{top:50%!important}.top-100{top:100%!important}.bottom-0{bottom:0!important}.bottom-50{bottom:50%!important}.bottom-100{bottom:100%!important}.start-0{left:0!important}.start-50{left:50%!important}.start-100{left:100%!important}.end-0{right:0!important}.end-50{right:50%!important}.end-100{right:100%!important}.translate-middle{transform:translate(-50%,-50%)!important}.translate-middle-x{transform:translateX(-50%)!important}.translate-middle-y{transform:translateY(-50%)!important}.border{border:1px solid #dee2e6!important}.border-0{border:0!important}.border-top{border-top:1px solid #dee2e6!important}.border-top-0{border-top:0!important}.border-end{border-right:1px solid #dee2e6!important}.border-end-0{border-right:0!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-bottom-0{border-bottom:0!important}.border-start{border-left:1px solid #dee2e6!important}.border-start-0{border-left:0!important}.border-primary{border-color:#0d6efd!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#198754!important}.border-info{border-color:#0dcaf0!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#212529!important}.border-white{border-color:#fff!important}.border-1{border-width:1px!important}.border-2{border-width:2px!important}.border-3{border-width:3px!important}.border-4{border-width:4px!important}.border-5{border-width:5px!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.mw-100{max-width:100%!important}.vw-100{width:100vw!important}.min-vw-100{min-width:100vw!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mh-100{max-height:100%!important}.vh-100{height:100vh!important}.min-vh-100{min-height:100vh!important}.flex-fill{flex:1 1 
auto!important}.flex-row{flex-direction:row!important}.flex-column{flex-direction:column!important}.flex-row-reverse{flex-direction:row-reverse!important}.flex-column-reverse{flex-direction:column-reverse!important}.flex-grow-0{flex-grow:0!important}.flex-grow-1{flex-grow:1!important}.flex-shrink-0{flex-shrink:0!important}.flex-shrink-1{flex-shrink:1!important}.flex-wrap{flex-wrap:wrap!important}.flex-nowrap{flex-wrap:nowrap!important}.flex-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-0{gap:0!important}.gap-1{gap:.25rem!important}.gap-2{gap:.5rem!important}.gap-3{gap:1rem!important}.gap-4{gap:1.5rem!important}.gap-5{gap:3rem!important}.justify-content-start{justify-content:flex-start!important}.justify-content-end{justify-content:flex-end!important}.justify-content-center{justify-content:center!important}.justify-content-between{justify-content:space-between!important}.justify-content-around{justify-content:space-around!important}.justify-content-evenly{justify-content:space-evenly!important}.align-items-start{align-items:flex-start!important}.align-items-end{align-items:flex-end!important}.align-items-center{align-items:center!important}.align-items-baseline{align-items:baseline!important}.align-items-stretch{align-items:stretch!important}.align-content-start{align-content:flex-start!important}.align-content-end{align-content:flex-end!important}.align-content-center{align-content:center!important}.align-content-between{align-content:space-between!important}.align-content-around{align-content:space-around!important}.align-content-stretch{align-content:stretch!important}.align-self-auto{align-self:auto!important}.align-self-start{align-self:flex-start!important}.align-self-end{align-self:flex-end!important}.align-self-center{align-self:center!important}.align-self-baseline{align-self:baseline!important}.align-self-stretch{align-self:stretch!important}.order-first{order:-1!important}.order-0{order:0!important}.order-1{order:1!important}.order-2{order:2!important}.order-3{order:3!important}.order-4{order:4!important}.order-5{order:5!important}.order-last{order:6!important}.m-0{margin:0!important}.m-1{margin:.25rem!important}.m-2{margin:.5rem!important}.m-3{margin:1rem!important}.m-4{margin:1.5rem!important}.m-5{margin:3rem!important}.m-auto{margin:auto!important}.mx-0{margin-right:0!important;margin-left:0!important}.mx-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-3{margin-right:1rem!important;margin-left:1rem!important}.mx-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-5{margin-right:3rem!important;margin-left:3rem!important}.mx-auto{margin-right:auto!important;margin-left:auto!important}.my-0{margin-top:0!important;margin-bottom:0!important}.my-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-0{margin-top:0!important}.mt-1{margin-top:.25rem!important}.mt-2{margin-top:.5rem!important}.mt-3{margin-top:1rem!important}.mt-4{margin-top:1.5rem!important}.mt-5{margin-top:3rem!important}.mt-auto{margin-top:auto!important}.me-0{margin-right:0!important}.me-1{margin-right:.25rem!important}.me-2{margin-right:.5rem!important}.me-3{margin-right:1rem!important}.me-4{margin-rig
ht:1.5rem!important}.me-5{margin-right:3rem!important}.me-auto{margin-right:auto!important}.mb-0{margin-bottom:0!important}.mb-1{margin-bottom:.25rem!important}.mb-2{margin-bottom:.5rem!important}.mb-3{margin-bottom:1rem!important}.mb-4{margin-bottom:1.5rem!important}.mb-5{margin-bottom:3rem!important}.mb-auto{margin-bottom:auto!important}.ms-0{margin-left:0!important}.ms-1{margin-left:.25rem!important}.ms-2{margin-left:.5rem!important}.ms-3{margin-left:1rem!important}.ms-4{margin-left:1.5rem!important}.ms-5{margin-left:3rem!important}.ms-auto{margin-left:auto!important}.p-0{padding:0!important}.p-1{padding:.25rem!important}.p-2{padding:.5rem!important}.p-3{padding:1rem!important}.p-4{padding:1.5rem!important}.p-5{padding:3rem!important}.px-0{padding-right:0!important;padding-left:0!important}.px-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-3{padding-right:1rem!important;padding-left:1rem!important}.px-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-5{padding-right:3rem!important;padding-left:3rem!important}.py-0{padding-top:0!important;padding-bottom:0!important}.py-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-0{padding-top:0!important}.pt-1{padding-top:.25rem!important}.pt-2{padding-top:.5rem!important}.pt-3{padding-top:1rem!important}.pt-4{padding-top:1.5rem!important}.pt-5{padding-top:3rem!important}.pe-0{padding-right:0!important}.pe-1{padding-right:.25rem!important}.pe-2{padding-right:.5rem!important}.pe-3{padding-right:1rem!important}.pe-4{padding-right:1.5rem!important}.pe-5{padding-right:3rem!important}.pb-0{padding-bottom:0!important}.pb-1{padding-bottom:.25rem!important}.pb-2{padding-bottom:.5rem!important}.pb-3{padding-bottom:1rem!important}.pb-4{padding-bottom:1.5rem!important}.pb-5{padding-bottom:3rem!important}.ps-0{padding-left:0!important}.ps-1{padding-left:.25rem!important}.ps-2{padding-left:.5rem!important}.ps-3{padding-left:1rem!important}.ps-4{padding-left:1.5rem!important}.ps-5{padding-left:3rem!important}.font-monospace{font-family:var(--bs-font-monospace)!important}.fs-1{font-size:calc(1.375rem + 1.5vw)!important}.fs-2{font-size:calc(1.325rem + .9vw)!important}.fs-3{font-size:calc(1.3rem + .6vw)!important}.fs-4{font-size:calc(1.275rem + 
.3vw)!important}.fs-5{font-size:1.25rem!important}.fs-6{font-size:1rem!important}.fst-italic{font-style:italic!important}.fst-normal{font-style:normal!important}.fw-light{font-weight:300!important}.fw-lighter{font-weight:lighter!important}.fw-normal{font-weight:400!important}.fw-bold{font-weight:700!important}.fw-bolder{font-weight:bolder!important}.lh-1{line-height:1!important}.lh-sm{line-height:1.25!important}.lh-base{line-height:1.5!important}.lh-lg{line-height:2!important}.text-start{text-align:left!important}.text-end{text-align:right!important}.text-center{text-align:center!important}.text-decoration-none{text-decoration:none!important}.text-decoration-underline{text-decoration:underline!important}.text-decoration-line-through{text-decoration:line-through!important}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-break{word-wrap:break-word!important;word-break:break-word!important}.text-primary{--bs-text-opacity:1;color:rgba(var(--bs-primary-rgb),var(--bs-text-opacity))!important}.text-secondary{--bs-text-opacity:1;color:rgba(var(--bs-secondary-rgb),var(--bs-text-opacity))!important}.text-success{--bs-text-opacity:1;color:rgba(var(--bs-success-rgb),var(--bs-text-opacity))!important}.text-info{--bs-text-opacity:1;color:rgba(var(--bs-info-rgb),var(--bs-text-opacity))!important}.text-warning{--bs-text-opacity:1;color:rgba(var(--bs-warning-rgb),var(--bs-text-opacity))!important}.text-danger{--bs-text-opacity:1;color:rgba(var(--bs-danger-rgb),var(--bs-text-opacity))!important}.text-light{--bs-text-opacity:1;color:rgba(var(--bs-light-rgb),var(--bs-text-opacity))!important}.text-dark{--bs-text-opacity:1;color:rgba(var(--bs-dark-rgb),var(--bs-text-opacity))!important}.text-black{--bs-text-opacity:1;color:rgba(var(--bs-black-rgb),var(--bs-text-opacity))!important}.text-white{--bs-text-opacity:1;color:rgba(var(--bs-white-rgb),var(--bs-text-opacity))!important}.text-body{--bs-text-opacity:1;color:rgba(var(--bs-body-color-rgb),var(--bs-text-opacity))!important}.text-muted{--bs-text-opacity:1;color:#6c757d!important}.text-black-50{--bs-text-opacity:1;color:rgba(0,0,0,.5)!important}.text-white-50{--bs-text-opacity:1;color:rgba(255,255,255,.5)!important}.text-reset{--bs-text-opacity:1;color:inherit!important}.text-opacity-25{--bs-text-opacity:0.25}.text-opacity-50{--bs-text-opacity:0.5}.text-opacity-75{--bs-text-opacity:0.75}.text-opacity-100{--bs-text-opacity:1}.bg-primary{--bs-bg-opacity:1;background-color:rgba(var(--bs-primary-rgb),var(--bs-bg-opacity))!important}.bg-secondary{--bs-bg-opacity:1;background-color:rgba(var(--bs-secondary-rgb),var(--bs-bg-opacity))!important}.bg-success{--bs-bg-opacity:1;background-color:rgba(var(--bs-success-rgb),var(--bs-bg-opacity))!important}.bg-info{--bs-bg-opacity:1;background-color:rgba(var(--bs-info-rgb),var(--bs-bg-opacity))!important}.bg-warning{--bs-bg-opacity:1;background-color:rgba(var(--bs-warning-rgb),var(--bs-bg-opacity))!important}.bg-danger{--bs-bg-opacity:1;background-color:rgba(var(--bs-danger-rgb),var(--bs-bg-opacity))!important}.bg-light{--bs-bg-opacity:1;background-color:rgba(var(--bs-light-rgb),var(--bs-bg-opacity))!important}.bg-dark{--bs-bg-opacity:1;background-color:rgba(var(--bs-dark-rgb),var(--bs-bg-opacity))!important}.bg-black{--bs-bg-opacity:1;background-color:rgba(var(--bs-black-rgb),var(--bs-bg-opacity))!important}.bg-white{--bs-bg-opacity:1;b
ackground-color:rgba(var(--bs-white-rgb),var(--bs-bg-opacity))!important}.bg-body{--bs-bg-opacity:1;background-color:rgba(var(--bs-body-bg-rgb),var(--bs-bg-opacity))!important}.bg-transparent{--bs-bg-opacity:1;background-color:transparent!important}.bg-opacity-10{--bs-bg-opacity:0.1}.bg-opacity-25{--bs-bg-opacity:0.25}.bg-opacity-50{--bs-bg-opacity:0.5}.bg-opacity-75{--bs-bg-opacity:0.75}.bg-opacity-100{--bs-bg-opacity:1}.bg-gradient{background-image:var(--bs-gradient)!important}.user-select-all{-webkit-user-select:all!important;-moz-user-select:all!important;user-select:all!important}.user-select-auto{-webkit-user-select:auto!important;-moz-user-select:auto!important;user-select:auto!important}.user-select-none{-webkit-user-select:none!important;-moz-user-select:none!important;user-select:none!important}.pe-none{pointer-events:none!important}.pe-auto{pointer-events:auto!important}.rounded{border-radius:.25rem!important}.rounded-0{border-radius:0!important}.rounded-1{border-radius:.2rem!important}.rounded-2{border-radius:.25rem!important}.rounded-3{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-end{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-start{border-bottom-left-radius:.25rem!important;border-top-left-radius:.25rem!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media (min-width:576px){.float-sm-start{float:left!important}.float-sm-end{float:right!important}.float-sm-none{float:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-grid{display:grid!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:flex!important}.d-sm-inline-flex{display:inline-flex!important}.d-sm-none{display:none!important}.flex-sm-fill{flex:1 1 
auto!important}.flex-sm-row{flex-direction:row!important}.flex-sm-column{flex-direction:column!important}.flex-sm-row-reverse{flex-direction:row-reverse!important}.flex-sm-column-reverse{flex-direction:column-reverse!important}.flex-sm-grow-0{flex-grow:0!important}.flex-sm-grow-1{flex-grow:1!important}.flex-sm-shrink-0{flex-shrink:0!important}.flex-sm-shrink-1{flex-shrink:1!important}.flex-sm-wrap{flex-wrap:wrap!important}.flex-sm-nowrap{flex-wrap:nowrap!important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-sm-0{gap:0!important}.gap-sm-1{gap:.25rem!important}.gap-sm-2{gap:.5rem!important}.gap-sm-3{gap:1rem!important}.gap-sm-4{gap:1.5rem!important}.gap-sm-5{gap:3rem!important}.justify-content-sm-start{justify-content:flex-start!important}.justify-content-sm-end{justify-content:flex-end!important}.justify-content-sm-center{justify-content:center!important}.justify-content-sm-between{justify-content:space-between!important}.justify-content-sm-around{justify-content:space-around!important}.justify-content-sm-evenly{justify-content:space-evenly!important}.align-items-sm-start{align-items:flex-start!important}.align-items-sm-end{align-items:flex-end!important}.align-items-sm-center{align-items:center!important}.align-items-sm-baseline{align-items:baseline!important}.align-items-sm-stretch{align-items:stretch!important}.align-content-sm-start{align-content:flex-start!important}.align-content-sm-end{align-content:flex-end!important}.align-content-sm-center{align-content:center!important}.align-content-sm-between{align-content:space-between!important}.align-content-sm-around{align-content:space-around!important}.align-content-sm-stretch{align-content:stretch!important}.align-self-sm-auto{align-self:auto!important}.align-self-sm-start{align-self:flex-start!important}.align-self-sm-end{align-self:flex-end!important}.align-self-sm-center{align-self:center!important}.align-self-sm-baseline{align-self:baseline!important}.align-self-sm-stretch{align-self:stretch!important}.order-sm-first{order:-1!important}.order-sm-0{order:0!important}.order-sm-1{order:1!important}.order-sm-2{order:2!important}.order-sm-3{order:3!important}.order-sm-4{order:4!important}.order-sm-5{order:5!important}.order-sm-last{order:6!important}.m-sm-0{margin:0!important}.m-sm-1{margin:.25rem!important}.m-sm-2{margin:.5rem!important}.m-sm-3{margin:1rem!important}.m-sm-4{margin:1.5rem!important}.m-sm-5{margin:3rem!important}.m-sm-auto{margin:auto!important}.mx-sm-0{margin-right:0!important;margin-left:0!important}.mx-sm-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-sm-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-sm-3{margin-right:1rem!important;margin-left:1rem!important}.mx-sm-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-sm-5{margin-right:3rem!important;margin-left:3rem!important}.mx-sm-auto{margin-right:auto!important;margin-left:auto!important}.my-sm-0{margin-top:0!important;margin-bottom:0!important}.my-sm-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-sm-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-sm-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-sm-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-sm-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-sm-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-sm-0{margin-top:0!important}.mt-sm-1{margin-top:.25rem!important}.mt-sm-2{margin-top:.5rem!important}.mt-sm-3{margin-top:1rem!important}.mt-sm-4{margin-top:1.5rem!importa
nt}.mt-sm-5{margin-top:3rem!important}.mt-sm-auto{margin-top:auto!important}.me-sm-0{margin-right:0!important}.me-sm-1{margin-right:.25rem!important}.me-sm-2{margin-right:.5rem!important}.me-sm-3{margin-right:1rem!important}.me-sm-4{margin-right:1.5rem!important}.me-sm-5{margin-right:3rem!important}.me-sm-auto{margin-right:auto!important}.mb-sm-0{margin-bottom:0!important}.mb-sm-1{margin-bottom:.25rem!important}.mb-sm-2{margin-bottom:.5rem!important}.mb-sm-3{margin-bottom:1rem!important}.mb-sm-4{margin-bottom:1.5rem!important}.mb-sm-5{margin-bottom:3rem!important}.mb-sm-auto{margin-bottom:auto!important}.ms-sm-0{margin-left:0!important}.ms-sm-1{margin-left:.25rem!important}.ms-sm-2{margin-left:.5rem!important}.ms-sm-3{margin-left:1rem!important}.ms-sm-4{margin-left:1.5rem!important}.ms-sm-5{margin-left:3rem!important}.ms-sm-auto{margin-left:auto!important}.p-sm-0{padding:0!important}.p-sm-1{padding:.25rem!important}.p-sm-2{padding:.5rem!important}.p-sm-3{padding:1rem!important}.p-sm-4{padding:1.5rem!important}.p-sm-5{padding:3rem!important}.px-sm-0{padding-right:0!important;padding-left:0!important}.px-sm-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-sm-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-sm-3{padding-right:1rem!important;padding-left:1rem!important}.px-sm-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-sm-5{padding-right:3rem!important;padding-left:3rem!important}.py-sm-0{padding-top:0!important;padding-bottom:0!important}.py-sm-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-sm-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-sm-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-sm-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-sm-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-sm-0{padding-top:0!important}.pt-sm-1{padding-top:.25rem!important}.pt-sm-2{padding-top:.5rem!important}.pt-sm-3{padding-top:1rem!important}.pt-sm-4{padding-top:1.5rem!important}.pt-sm-5{padding-top:3rem!important}.pe-sm-0{padding-right:0!important}.pe-sm-1{padding-right:.25rem!important}.pe-sm-2{padding-right:.5rem!important}.pe-sm-3{padding-right:1rem!important}.pe-sm-4{padding-right:1.5rem!important}.pe-sm-5{padding-right:3rem!important}.pb-sm-0{padding-bottom:0!important}.pb-sm-1{padding-bottom:.25rem!important}.pb-sm-2{padding-bottom:.5rem!important}.pb-sm-3{padding-bottom:1rem!important}.pb-sm-4{padding-bottom:1.5rem!important}.pb-sm-5{padding-bottom:3rem!important}.ps-sm-0{padding-left:0!important}.ps-sm-1{padding-left:.25rem!important}.ps-sm-2{padding-left:.5rem!important}.ps-sm-3{padding-left:1rem!important}.ps-sm-4{padding-left:1.5rem!important}.ps-sm-5{padding-left:3rem!important}.text-sm-start{text-align:left!important}.text-sm-end{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.float-md-start{float:left!important}.float-md-end{float:right!important}.float-md-none{float:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-grid{display:grid!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:flex!important}.d-md-inline-flex{display:inline-flex!important}.d-md-none{display:none!important}.flex-md-fill{flex:1 1 
auto!important}.flex-md-row{flex-direction:row!important}.flex-md-column{flex-direction:column!important}.flex-md-row-reverse{flex-direction:row-reverse!important}.flex-md-column-reverse{flex-direction:column-reverse!important}.flex-md-grow-0{flex-grow:0!important}.flex-md-grow-1{flex-grow:1!important}.flex-md-shrink-0{flex-shrink:0!important}.flex-md-shrink-1{flex-shrink:1!important}.flex-md-wrap{flex-wrap:wrap!important}.flex-md-nowrap{flex-wrap:nowrap!important}.flex-md-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-md-0{gap:0!important}.gap-md-1{gap:.25rem!important}.gap-md-2{gap:.5rem!important}.gap-md-3{gap:1rem!important}.gap-md-4{gap:1.5rem!important}.gap-md-5{gap:3rem!important}.justify-content-md-start{justify-content:flex-start!important}.justify-content-md-end{justify-content:flex-end!important}.justify-content-md-center{justify-content:center!important}.justify-content-md-between{justify-content:space-between!important}.justify-content-md-around{justify-content:space-around!important}.justify-content-md-evenly{justify-content:space-evenly!important}.align-items-md-start{align-items:flex-start!important}.align-items-md-end{align-items:flex-end!important}.align-items-md-center{align-items:center!important}.align-items-md-baseline{align-items:baseline!important}.align-items-md-stretch{align-items:stretch!important}.align-content-md-start{align-content:flex-start!important}.align-content-md-end{align-content:flex-end!important}.align-content-md-center{align-content:center!important}.align-content-md-between{align-content:space-between!important}.align-content-md-around{align-content:space-around!important}.align-content-md-stretch{align-content:stretch!important}.align-self-md-auto{align-self:auto!important}.align-self-md-start{align-self:flex-start!important}.align-self-md-end{align-self:flex-end!important}.align-self-md-center{align-self:center!important}.align-self-md-baseline{align-self:baseline!important}.align-self-md-stretch{align-self:stretch!important}.order-md-first{order:-1!important}.order-md-0{order:0!important}.order-md-1{order:1!important}.order-md-2{order:2!important}.order-md-3{order:3!important}.order-md-4{order:4!important}.order-md-5{order:5!important}.order-md-last{order:6!important}.m-md-0{margin:0!important}.m-md-1{margin:.25rem!important}.m-md-2{margin:.5rem!important}.m-md-3{margin:1rem!important}.m-md-4{margin:1.5rem!important}.m-md-5{margin:3rem!important}.m-md-auto{margin:auto!important}.mx-md-0{margin-right:0!important;margin-left:0!important}.mx-md-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-md-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-md-3{margin-right:1rem!important;margin-left:1rem!important}.mx-md-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-md-5{margin-right:3rem!important;margin-left:3rem!important}.mx-md-auto{margin-right:auto!important;margin-left:auto!important}.my-md-0{margin-top:0!important;margin-bottom:0!important}.my-md-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-md-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-md-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-md-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-md-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-md-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-md-0{margin-top:0!important}.mt-md-1{margin-top:.25rem!important}.mt-md-2{margin-top:.5rem!important}.mt-md-3{margin-top:1rem!important}.mt-md-4{margin-top:1.5rem!importa
nt}.mt-md-5{margin-top:3rem!important}.mt-md-auto{margin-top:auto!important}.me-md-0{margin-right:0!important}.me-md-1{margin-right:.25rem!important}.me-md-2{margin-right:.5rem!important}.me-md-3{margin-right:1rem!important}.me-md-4{margin-right:1.5rem!important}.me-md-5{margin-right:3rem!important}.me-md-auto{margin-right:auto!important}.mb-md-0{margin-bottom:0!important}.mb-md-1{margin-bottom:.25rem!important}.mb-md-2{margin-bottom:.5rem!important}.mb-md-3{margin-bottom:1rem!important}.mb-md-4{margin-bottom:1.5rem!important}.mb-md-5{margin-bottom:3rem!important}.mb-md-auto{margin-bottom:auto!important}.ms-md-0{margin-left:0!important}.ms-md-1{margin-left:.25rem!important}.ms-md-2{margin-left:.5rem!important}.ms-md-3{margin-left:1rem!important}.ms-md-4{margin-left:1.5rem!important}.ms-md-5{margin-left:3rem!important}.ms-md-auto{margin-left:auto!important}.p-md-0{padding:0!important}.p-md-1{padding:.25rem!important}.p-md-2{padding:.5rem!important}.p-md-3{padding:1rem!important}.p-md-4{padding:1.5rem!important}.p-md-5{padding:3rem!important}.px-md-0{padding-right:0!important;padding-left:0!important}.px-md-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-md-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-md-3{padding-right:1rem!important;padding-left:1rem!important}.px-md-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-md-5{padding-right:3rem!important;padding-left:3rem!important}.py-md-0{padding-top:0!important;padding-bottom:0!important}.py-md-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-md-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-md-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-md-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-md-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-md-0{padding-top:0!important}.pt-md-1{padding-top:.25rem!important}.pt-md-2{padding-top:.5rem!important}.pt-md-3{padding-top:1rem!important}.pt-md-4{padding-top:1.5rem!important}.pt-md-5{padding-top:3rem!important}.pe-md-0{padding-right:0!important}.pe-md-1{padding-right:.25rem!important}.pe-md-2{padding-right:.5rem!important}.pe-md-3{padding-right:1rem!important}.pe-md-4{padding-right:1.5rem!important}.pe-md-5{padding-right:3rem!important}.pb-md-0{padding-bottom:0!important}.pb-md-1{padding-bottom:.25rem!important}.pb-md-2{padding-bottom:.5rem!important}.pb-md-3{padding-bottom:1rem!important}.pb-md-4{padding-bottom:1.5rem!important}.pb-md-5{padding-bottom:3rem!important}.ps-md-0{padding-left:0!important}.ps-md-1{padding-left:.25rem!important}.ps-md-2{padding-left:.5rem!important}.ps-md-3{padding-left:1rem!important}.ps-md-4{padding-left:1.5rem!important}.ps-md-5{padding-left:3rem!important}.text-md-start{text-align:left!important}.text-md-end{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.float-lg-start{float:left!important}.float-lg-end{float:right!important}.float-lg-none{float:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-grid{display:grid!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:flex!important}.d-lg-inline-flex{display:inline-flex!important}.d-lg-none{display:none!important}.flex-lg-fill{flex:1 1 
auto!important}.flex-lg-row{flex-direction:row!important}.flex-lg-column{flex-direction:column!important}.flex-lg-row-reverse{flex-direction:row-reverse!important}.flex-lg-column-reverse{flex-direction:column-reverse!important}.flex-lg-grow-0{flex-grow:0!important}.flex-lg-grow-1{flex-grow:1!important}.flex-lg-shrink-0{flex-shrink:0!important}.flex-lg-shrink-1{flex-shrink:1!important}.flex-lg-wrap{flex-wrap:wrap!important}.flex-lg-nowrap{flex-wrap:nowrap!important}.flex-lg-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-lg-0{gap:0!important}.gap-lg-1{gap:.25rem!important}.gap-lg-2{gap:.5rem!important}.gap-lg-3{gap:1rem!important}.gap-lg-4{gap:1.5rem!important}.gap-lg-5{gap:3rem!important}.justify-content-lg-start{justify-content:flex-start!important}.justify-content-lg-end{justify-content:flex-end!important}.justify-content-lg-center{justify-content:center!important}.justify-content-lg-between{justify-content:space-between!important}.justify-content-lg-around{justify-content:space-around!important}.justify-content-lg-evenly{justify-content:space-evenly!important}.align-items-lg-start{align-items:flex-start!important}.align-items-lg-end{align-items:flex-end!important}.align-items-lg-center{align-items:center!important}.align-items-lg-baseline{align-items:baseline!important}.align-items-lg-stretch{align-items:stretch!important}.align-content-lg-start{align-content:flex-start!important}.align-content-lg-end{align-content:flex-end!important}.align-content-lg-center{align-content:center!important}.align-content-lg-between{align-content:space-between!important}.align-content-lg-around{align-content:space-around!important}.align-content-lg-stretch{align-content:stretch!important}.align-self-lg-auto{align-self:auto!important}.align-self-lg-start{align-self:flex-start!important}.align-self-lg-end{align-self:flex-end!important}.align-self-lg-center{align-self:center!important}.align-self-lg-baseline{align-self:baseline!important}.align-self-lg-stretch{align-self:stretch!important}.order-lg-first{order:-1!important}.order-lg-0{order:0!important}.order-lg-1{order:1!important}.order-lg-2{order:2!important}.order-lg-3{order:3!important}.order-lg-4{order:4!important}.order-lg-5{order:5!important}.order-lg-last{order:6!important}.m-lg-0{margin:0!important}.m-lg-1{margin:.25rem!important}.m-lg-2{margin:.5rem!important}.m-lg-3{margin:1rem!important}.m-lg-4{margin:1.5rem!important}.m-lg-5{margin:3rem!important}.m-lg-auto{margin:auto!important}.mx-lg-0{margin-right:0!important;margin-left:0!important}.mx-lg-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-lg-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-lg-3{margin-right:1rem!important;margin-left:1rem!important}.mx-lg-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-lg-5{margin-right:3rem!important;margin-left:3rem!important}.mx-lg-auto{margin-right:auto!important;margin-left:auto!important}.my-lg-0{margin-top:0!important;margin-bottom:0!important}.my-lg-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-lg-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-lg-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-lg-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-lg-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-lg-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-lg-0{margin-top:0!important}.mt-lg-1{margin-top:.25rem!important}.mt-lg-2{margin-top:.5rem!important}.mt-lg-3{margin-top:1rem!important}.mt-lg-4{margin-top:1.5rem!importa
nt}.mt-lg-5{margin-top:3rem!important}.mt-lg-auto{margin-top:auto!important}.me-lg-0{margin-right:0!important}.me-lg-1{margin-right:.25rem!important}.me-lg-2{margin-right:.5rem!important}.me-lg-3{margin-right:1rem!important}.me-lg-4{margin-right:1.5rem!important}.me-lg-5{margin-right:3rem!important}.me-lg-auto{margin-right:auto!important}.mb-lg-0{margin-bottom:0!important}.mb-lg-1{margin-bottom:.25rem!important}.mb-lg-2{margin-bottom:.5rem!important}.mb-lg-3{margin-bottom:1rem!important}.mb-lg-4{margin-bottom:1.5rem!important}.mb-lg-5{margin-bottom:3rem!important}.mb-lg-auto{margin-bottom:auto!important}.ms-lg-0{margin-left:0!important}.ms-lg-1{margin-left:.25rem!important}.ms-lg-2{margin-left:.5rem!important}.ms-lg-3{margin-left:1rem!important}.ms-lg-4{margin-left:1.5rem!important}.ms-lg-5{margin-left:3rem!important}.ms-lg-auto{margin-left:auto!important}.p-lg-0{padding:0!important}.p-lg-1{padding:.25rem!important}.p-lg-2{padding:.5rem!important}.p-lg-3{padding:1rem!important}.p-lg-4{padding:1.5rem!important}.p-lg-5{padding:3rem!important}.px-lg-0{padding-right:0!important;padding-left:0!important}.px-lg-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-lg-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-lg-3{padding-right:1rem!important;padding-left:1rem!important}.px-lg-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-lg-5{padding-right:3rem!important;padding-left:3rem!important}.py-lg-0{padding-top:0!important;padding-bottom:0!important}.py-lg-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-lg-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-lg-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-lg-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-lg-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-lg-0{padding-top:0!important}.pt-lg-1{padding-top:.25rem!important}.pt-lg-2{padding-top:.5rem!important}.pt-lg-3{padding-top:1rem!important}.pt-lg-4{padding-top:1.5rem!important}.pt-lg-5{padding-top:3rem!important}.pe-lg-0{padding-right:0!important}.pe-lg-1{padding-right:.25rem!important}.pe-lg-2{padding-right:.5rem!important}.pe-lg-3{padding-right:1rem!important}.pe-lg-4{padding-right:1.5rem!important}.pe-lg-5{padding-right:3rem!important}.pb-lg-0{padding-bottom:0!important}.pb-lg-1{padding-bottom:.25rem!important}.pb-lg-2{padding-bottom:.5rem!important}.pb-lg-3{padding-bottom:1rem!important}.pb-lg-4{padding-bottom:1.5rem!important}.pb-lg-5{padding-bottom:3rem!important}.ps-lg-0{padding-left:0!important}.ps-lg-1{padding-left:.25rem!important}.ps-lg-2{padding-left:.5rem!important}.ps-lg-3{padding-left:1rem!important}.ps-lg-4{padding-left:1.5rem!important}.ps-lg-5{padding-left:3rem!important}.text-lg-start{text-align:left!important}.text-lg-end{text-align:right!important}.text-lg-center{text-align:center!important}}@media (min-width:1200px){.float-xl-start{float:left!important}.float-xl-end{float:right!important}.float-xl-none{float:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-grid{display:grid!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:flex!important}.d-xl-inline-flex{display:inline-flex!important}.d-xl-none{display:none!important}.flex-xl-fill{flex:1 1 
auto!important}.flex-xl-row{flex-direction:row!important}.flex-xl-column{flex-direction:column!important}.flex-xl-row-reverse{flex-direction:row-reverse!important}.flex-xl-column-reverse{flex-direction:column-reverse!important}.flex-xl-grow-0{flex-grow:0!important}.flex-xl-grow-1{flex-grow:1!important}.flex-xl-shrink-0{flex-shrink:0!important}.flex-xl-shrink-1{flex-shrink:1!important}.flex-xl-wrap{flex-wrap:wrap!important}.flex-xl-nowrap{flex-wrap:nowrap!important}.flex-xl-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-xl-0{gap:0!important}.gap-xl-1{gap:.25rem!important}.gap-xl-2{gap:.5rem!important}.gap-xl-3{gap:1rem!important}.gap-xl-4{gap:1.5rem!important}.gap-xl-5{gap:3rem!important}.justify-content-xl-start{justify-content:flex-start!important}.justify-content-xl-end{justify-content:flex-end!important}.justify-content-xl-center{justify-content:center!important}.justify-content-xl-between{justify-content:space-between!important}.justify-content-xl-around{justify-content:space-around!important}.justify-content-xl-evenly{justify-content:space-evenly!important}.align-items-xl-start{align-items:flex-start!important}.align-items-xl-end{align-items:flex-end!important}.align-items-xl-center{align-items:center!important}.align-items-xl-baseline{align-items:baseline!important}.align-items-xl-stretch{align-items:stretch!important}.align-content-xl-start{align-content:flex-start!important}.align-content-xl-end{align-content:flex-end!important}.align-content-xl-center{align-content:center!important}.align-content-xl-between{align-content:space-between!important}.align-content-xl-around{align-content:space-around!important}.align-content-xl-stretch{align-content:stretch!important}.align-self-xl-auto{align-self:auto!important}.align-self-xl-start{align-self:flex-start!important}.align-self-xl-end{align-self:flex-end!important}.align-self-xl-center{align-self:center!important}.align-self-xl-baseline{align-self:baseline!important}.align-self-xl-stretch{align-self:stretch!important}.order-xl-first{order:-1!important}.order-xl-0{order:0!important}.order-xl-1{order:1!important}.order-xl-2{order:2!important}.order-xl-3{order:3!important}.order-xl-4{order:4!important}.order-xl-5{order:5!important}.order-xl-last{order:6!important}.m-xl-0{margin:0!important}.m-xl-1{margin:.25rem!important}.m-xl-2{margin:.5rem!important}.m-xl-3{margin:1rem!important}.m-xl-4{margin:1.5rem!important}.m-xl-5{margin:3rem!important}.m-xl-auto{margin:auto!important}.mx-xl-0{margin-right:0!important;margin-left:0!important}.mx-xl-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-xl-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-xl-3{margin-right:1rem!important;margin-left:1rem!important}.mx-xl-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-xl-5{margin-right:3rem!important;margin-left:3rem!important}.mx-xl-auto{margin-right:auto!important;margin-left:auto!important}.my-xl-0{margin-top:0!important;margin-bottom:0!important}.my-xl-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-xl-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-xl-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-xl-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-xl-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-xl-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-xl-0{margin-top:0!important}.mt-xl-1{margin-top:.25rem!important}.mt-xl-2{margin-top:.5rem!important}.mt-xl-3{margin-top:1rem!important}.mt-xl-4{margin-top:1.5rem!importa
nt}.mt-xl-5{margin-top:3rem!important}.mt-xl-auto{margin-top:auto!important}.me-xl-0{margin-right:0!important}.me-xl-1{margin-right:.25rem!important}.me-xl-2{margin-right:.5rem!important}.me-xl-3{margin-right:1rem!important}.me-xl-4{margin-right:1.5rem!important}.me-xl-5{margin-right:3rem!important}.me-xl-auto{margin-right:auto!important}.mb-xl-0{margin-bottom:0!important}.mb-xl-1{margin-bottom:.25rem!important}.mb-xl-2{margin-bottom:.5rem!important}.mb-xl-3{margin-bottom:1rem!important}.mb-xl-4{margin-bottom:1.5rem!important}.mb-xl-5{margin-bottom:3rem!important}.mb-xl-auto{margin-bottom:auto!important}.ms-xl-0{margin-left:0!important}.ms-xl-1{margin-left:.25rem!important}.ms-xl-2{margin-left:.5rem!important}.ms-xl-3{margin-left:1rem!important}.ms-xl-4{margin-left:1.5rem!important}.ms-xl-5{margin-left:3rem!important}.ms-xl-auto{margin-left:auto!important}.p-xl-0{padding:0!important}.p-xl-1{padding:.25rem!important}.p-xl-2{padding:.5rem!important}.p-xl-3{padding:1rem!important}.p-xl-4{padding:1.5rem!important}.p-xl-5{padding:3rem!important}.px-xl-0{padding-right:0!important;padding-left:0!important}.px-xl-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-xl-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-xl-3{padding-right:1rem!important;padding-left:1rem!important}.px-xl-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-xl-5{padding-right:3rem!important;padding-left:3rem!important}.py-xl-0{padding-top:0!important;padding-bottom:0!important}.py-xl-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-xl-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-xl-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-xl-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-xl-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-xl-0{padding-top:0!important}.pt-xl-1{padding-top:.25rem!important}.pt-xl-2{padding-top:.5rem!important}.pt-xl-3{padding-top:1rem!important}.pt-xl-4{padding-top:1.5rem!important}.pt-xl-5{padding-top:3rem!important}.pe-xl-0{padding-right:0!important}.pe-xl-1{padding-right:.25rem!important}.pe-xl-2{padding-right:.5rem!important}.pe-xl-3{padding-right:1rem!important}.pe-xl-4{padding-right:1.5rem!important}.pe-xl-5{padding-right:3rem!important}.pb-xl-0{padding-bottom:0!important}.pb-xl-1{padding-bottom:.25rem!important}.pb-xl-2{padding-bottom:.5rem!important}.pb-xl-3{padding-bottom:1rem!important}.pb-xl-4{padding-bottom:1.5rem!important}.pb-xl-5{padding-bottom:3rem!important}.ps-xl-0{padding-left:0!important}.ps-xl-1{padding-left:.25rem!important}.ps-xl-2{padding-left:.5rem!important}.ps-xl-3{padding-left:1rem!important}.ps-xl-4{padding-left:1.5rem!important}.ps-xl-5{padding-left:3rem!important}.text-xl-start{text-align:left!important}.text-xl-end{text-align:right!important}.text-xl-center{text-align:center!important}}@media (min-width:1400px){.float-xxl-start{float:left!important}.float-xxl-end{float:right!important}.float-xxl-none{float:none!important}.d-xxl-inline{display:inline!important}.d-xxl-inline-block{display:inline-block!important}.d-xxl-block{display:block!important}.d-xxl-grid{display:grid!important}.d-xxl-table{display:table!important}.d-xxl-table-row{display:table-row!important}.d-xxl-table-cell{display:table-cell!important}.d-xxl-flex{display:flex!important}.d-xxl-inline-flex{display:inline-flex!important}.d-xxl-none{display:none!important}.flex-xxl-fill{flex:1 1 
auto!important}.flex-xxl-row{flex-direction:row!important}.flex-xxl-column{flex-direction:column!important}.flex-xxl-row-reverse{flex-direction:row-reverse!important}.flex-xxl-column-reverse{flex-direction:column-reverse!important}.flex-xxl-grow-0{flex-grow:0!important}.flex-xxl-grow-1{flex-grow:1!important}.flex-xxl-shrink-0{flex-shrink:0!important}.flex-xxl-shrink-1{flex-shrink:1!important}.flex-xxl-wrap{flex-wrap:wrap!important}.flex-xxl-nowrap{flex-wrap:nowrap!important}.flex-xxl-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-xxl-0{gap:0!important}.gap-xxl-1{gap:.25rem!important}.gap-xxl-2{gap:.5rem!important}.gap-xxl-3{gap:1rem!important}.gap-xxl-4{gap:1.5rem!important}.gap-xxl-5{gap:3rem!important}.justify-content-xxl-start{justify-content:flex-start!important}.justify-content-xxl-end{justify-content:flex-end!important}.justify-content-xxl-center{justify-content:center!important}.justify-content-xxl-between{justify-content:space-between!important}.justify-content-xxl-around{justify-content:space-around!important}.justify-content-xxl-evenly{justify-content:space-evenly!important}.align-items-xxl-start{align-items:flex-start!important}.align-items-xxl-end{align-items:flex-end!important}.align-items-xxl-center{align-items:center!important}.align-items-xxl-baseline{align-items:baseline!important}.align-items-xxl-stretch{align-items:stretch!important}.align-content-xxl-start{align-content:flex-start!important}.align-content-xxl-end{align-content:flex-end!important}.align-content-xxl-center{align-content:center!important}.align-content-xxl-between{align-content:space-between!important}.align-content-xxl-around{align-content:space-around!important}.align-content-xxl-stretch{align-content:stretch!important}.align-self-xxl-auto{align-self:auto!important}.align-self-xxl-start{align-self:flex-start!important}.align-self-xxl-end{align-self:flex-end!important}.align-self-xxl-center{align-self:center!important}.align-self-xxl-baseline{align-self:baseline!important}.align-self-xxl-stretch{align-self:stretch!important}.order-xxl-first{order:-1!important}.order-xxl-0{order:0!important}.order-xxl-1{order:1!important}.order-xxl-2{order:2!important}.order-xxl-3{order:3!important}.order-xxl-4{order:4!important}.order-xxl-5{order:5!important}.order-xxl-last{order:6!important}.m-xxl-0{margin:0!important}.m-xxl-1{margin:.25rem!important}.m-xxl-2{margin:.5rem!important}.m-xxl-3{margin:1rem!important}.m-xxl-4{margin:1.5rem!important}.m-xxl-5{margin:3rem!important}.m-xxl-auto{margin:auto!important}.mx-xxl-0{margin-right:0!important;margin-left:0!important}.mx-xxl-1{margin-right:.25rem!important;margin-left:.25rem!important}.mx-xxl-2{margin-right:.5rem!important;margin-left:.5rem!important}.mx-xxl-3{margin-right:1rem!important;margin-left:1rem!important}.mx-xxl-4{margin-right:1.5rem!important;margin-left:1.5rem!important}.mx-xxl-5{margin-right:3rem!important;margin-left:3rem!important}.mx-xxl-auto{margin-right:auto!important;margin-left:auto!important}.my-xxl-0{margin-top:0!important;margin-bottom:0!important}.my-xxl-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-xxl-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-xxl-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-xxl-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-xxl-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-xxl-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-xxl-0{margin-top:0!important}.mt-xxl-1{margin-top:.25rem!important}.mt-xxl-2{margin-top:.5rem!importa
nt}.mt-xxl-3{margin-top:1rem!important}.mt-xxl-4{margin-top:1.5rem!important}.mt-xxl-5{margin-top:3rem!important}.mt-xxl-auto{margin-top:auto!important}.me-xxl-0{margin-right:0!important}.me-xxl-1{margin-right:.25rem!important}.me-xxl-2{margin-right:.5rem!important}.me-xxl-3{margin-right:1rem!important}.me-xxl-4{margin-right:1.5rem!important}.me-xxl-5{margin-right:3rem!important}.me-xxl-auto{margin-right:auto!important}.mb-xxl-0{margin-bottom:0!important}.mb-xxl-1{margin-bottom:.25rem!important}.mb-xxl-2{margin-bottom:.5rem!important}.mb-xxl-3{margin-bottom:1rem!important}.mb-xxl-4{margin-bottom:1.5rem!important}.mb-xxl-5{margin-bottom:3rem!important}.mb-xxl-auto{margin-bottom:auto!important}.ms-xxl-0{margin-left:0!important}.ms-xxl-1{margin-left:.25rem!important}.ms-xxl-2{margin-left:.5rem!important}.ms-xxl-3{margin-left:1rem!important}.ms-xxl-4{margin-left:1.5rem!important}.ms-xxl-5{margin-left:3rem!important}.ms-xxl-auto{margin-left:auto!important}.p-xxl-0{padding:0!important}.p-xxl-1{padding:.25rem!important}.p-xxl-2{padding:.5rem!important}.p-xxl-3{padding:1rem!important}.p-xxl-4{padding:1.5rem!important}.p-xxl-5{padding:3rem!important}.px-xxl-0{padding-right:0!important;padding-left:0!important}.px-xxl-1{padding-right:.25rem!important;padding-left:.25rem!important}.px-xxl-2{padding-right:.5rem!important;padding-left:.5rem!important}.px-xxl-3{padding-right:1rem!important;padding-left:1rem!important}.px-xxl-4{padding-right:1.5rem!important;padding-left:1.5rem!important}.px-xxl-5{padding-right:3rem!important;padding-left:3rem!important}.py-xxl-0{padding-top:0!important;padding-bottom:0!important}.py-xxl-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-xxl-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-xxl-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-xxl-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-xxl-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-xxl-0{padding-top:0!important}.pt-xxl-1{padding-top:.25rem!important}.pt-xxl-2{padding-top:.5rem!important}.pt-xxl-3{padding-top:1rem!important}.pt-xxl-4{padding-top:1.5rem!important}.pt-xxl-5{padding-top:3rem!important}.pe-xxl-0{padding-right:0!important}.pe-xxl-1{padding-right:.25rem!important}.pe-xxl-2{padding-right:.5rem!important}.pe-xxl-3{padding-right:1rem!important}.pe-xxl-4{padding-right:1.5rem!important}.pe-xxl-5{padding-right:3rem!important}.pb-xxl-0{padding-bottom:0!important}.pb-xxl-1{padding-bottom:.25rem!important}.pb-xxl-2{padding-bottom:.5rem!important}.pb-xxl-3{padding-bottom:1rem!important}.pb-xxl-4{padding-bottom:1.5rem!important}.pb-xxl-5{padding-bottom:3rem!important}.ps-xxl-0{padding-left:0!important}.ps-xxl-1{padding-left:.25rem!important}.ps-xxl-2{padding-left:.5rem!important}.ps-xxl-3{padding-left:1rem!important}.ps-xxl-4{padding-left:1.5rem!important}.ps-xxl-5{padding-left:3rem!important}.text-xxl-start{text-align:left!important}.text-xxl-end{text-align:right!important}.text-xxl-center{text-align:center!important}}@media (min-width:1200px){.fs-1{font-size:2.5rem!important}.fs-2{font-size:2rem!important}.fs-3{font-size:1.75rem!important}.fs-4{font-size:1.5rem!important}}@media 
print{.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-grid{display:grid!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:flex!important}.d-print-inline-flex{display:inline-flex!important}.d-print-none{display:none!important}} -/*# sourceMappingURL=bootstrap.min.css.map */ \ No newline at end of file diff --git a/spaces/Woocy/541GPT/Dockerfile b/spaces/Woocy/541GPT/Dockerfile deleted file mode 100644 index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000 --- a/spaces/Woocy/541GPT/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -RUN pip install --user -r requirements.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV my_api_key empty -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/attentions.py b/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/WorldlineChanger/sayashi-vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - 
self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, 
t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/audio_dataset.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/audio_dataset.py deleted file mode 100644 index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. - fast (bool): If True, uses a really fast check for determining if a file is already absolute or not. - Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. 
- """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - return os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - meta (list of AudioMeta): List of audio meta to save. 
- """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Args: - meta (tp.List[AudioMeta]): List of audio files metadata. - segment_duration (float): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. - """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None - ): - assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.' 
- assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - logging.debug(f'sample_on_duration: {sample_on_duration}') - logging.debug(f'sample_on_weight: {sample_on_weight}') - logging.debug(f'pad: {pad}') - logging.debug(f'min_segment_ratio: {min_segment_ratio}') - - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`. - """ - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - def sample_file(self, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overriden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- """ - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." - - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total legth of the signal with padding, so we update here as we pad. 
- segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with short durations. - Removes from meta the files whose durations do not allow sampling examples from them. - """ - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata.') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the ones that are expensive ' - 'to compute (e.g. 
normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/XAI/Cleaning-ImageNet-Hard/app.py b/spaces/XAI/Cleaning-ImageNet-Hard/app.py deleted file mode 100644 index 2cad5df73a544945a58c8c860c8a6106504aa87a..0000000000000000000000000000000000000000 --- a/spaces/XAI/Cleaning-ImageNet-Hard/app.py +++ /dev/null @@ -1,458 +0,0 @@ -import csv -import json -import os -import pickle -import random -import string -import sys -import time -from glob import glob - -import datasets -import gdown -import gradio as gr -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import torch -import torchvision -from huggingface_hub import HfApi, login, snapshot_download -from PIL import Image -import re -from fnmatch import translate - - -session_token = os.environ.get("SessionToken") -login(token=session_token) - -csv.field_size_limit(sys.maxsize) - -np.random.seed(int(time.time())) - -with open("./imagenet_hard_nearest_indices.pkl", "rb") as f: - knn_results = pickle.load(f) - -with open("imagenet-labels.json") as f: - wnid_to_label = json.load(f) - -with open("id_to_label.json", "r") as f: - id_to_labels = json.load(f) - -imagenet_training_samples_path = "imagenet_samples" - -bad_items = open("./ex2.txt", "r").read().split("\n") -bad_items = [x.split(".")[0] for x in bad_items] -bad_items = [int(x) for x in bad_items if x != ""] - -NUMBER_OF_IMAGES = len(bad_items) - - -gdown.cached_download( - url="https://huggingface.co/datasets/taesiri/imagenet_hard_review_samples/resolve/main/data.zip", - path="./data.zip", - quiet=False, - md5="ece2720fed664e71799f316a881d4324", -) - -# EXTRACT if needed - -if not os.path.exists("./imagenet_samples") or not os.path.exists( - "./knn_cache_for_imagenet_hard" -): - torchvision.datasets.utils.extract_archive( - from_path="data.zip", - to_path="./", - remove_finished=False, - ) - -imagenet_hard = datasets.load_dataset("taesiri/imagenet-hard", split="validation") - - -def update_snapshot(username): - pattern = f"*{username}*.json" - - output_dir = snapshot_download( - repo_id="taesiri/imagenet_hard_review_data", - allow_patterns=pattern, - repo_type="dataset", - ) - files = glob(f"{output_dir}/*.json") - - df = pd.DataFrame() - columns = ["id", "user_id", "time", "decision"] - rows = [] - for file in files: - with open(file) as f: - data = json.load(f) - tdf = [data[x] for x in columns] - rows.append(tdf) - - df = pd.DataFrame(rows, columns=columns) - df = df[df["user_id"] == username] - - return df - - -def generate_dataset(username): - global NUMBER_OF_IMAGES - df = update_snapshot(username) - - all_images = set(bad_items) - answered = set(df.id) - remaining = list(all_images - answered) - # shuffle remaining - random.shuffle(remaining) - - NUMBER_OF_IMAGES = len(bad_items) - - print(f"NUMBER_OF_IMAGES: {NUMBER_OF_IMAGES}") - print(f"Remaining: {len(remaining)}") - - if NUMBER_OF_IMAGES == 0: - return [] - - data = [] - for i, image in enumerate(remaining): - data.append( - { - "id": remaining[i], - } - ) - return data - - -def string_to_image(text): - text = text.replace("_", " ").lower().replace(", ", 
"\n") - # Create a blank white square image - img = np.ones((220, 75, 3)) - - fig, ax = plt.subplots(figsize=(6, 2.25)) - ax.imshow(img, extent=[0, 1, 0, 1]) - ax.text(0.5, 0.75, text, fontsize=18, ha="center", va="center") - ax.set_xticks([]) - ax.set_yticks([]) - ax.set_xticklabels([]) - ax.set_yticklabels([]) - for spine in ax.spines.values(): - spine.set_visible(False) - - return fig - - -all_samples = glob("./imagenet_samples/*.JPEG") -qid_to_sample = { - int(x.split("/")[-1].split(".")[0].split("_")[0]): x for x in all_samples -} - - -def get_training_samples(qid): - labels_id = imagenet_hard[int(qid)]["label"] - samples = [qid_to_sample[x] for x in labels_id] - return samples - - -def load_sample(data, current_index): - image_id = data[current_index]["id"] - qimage = imagenet_hard[int(image_id)]["image"] - # labels = data[current_index]["correct_label"] - labels = imagenet_hard[int(image_id)]["english_label"] - # print(f"Image ID: {image_id}") - # print(f"Labels: {labels}") - - return qimage, labels - - -def preprocessing(data, current_index, history, username): - data = generate_dataset(username) - - remaining_images = len(data) - labeled_images = len(bad_items) - remaining_images - - if remaining_images == 0: - fake_plot = string_to_image("No more images to review") - empty_image = Image.new("RGB", (224, 224)) - return ( - empty_image, - fake_plot, - current_index, - history, - data, - None, - labeled_images, - ) - - current_index = 0 - qimage, labels = load_sample(data, current_index) - image_id = data[current_index]["id"] - training_samples_image = get_training_samples(image_id) - training_samples_image = [ - Image.open(x).convert("RGB") for x in training_samples_image - ] - - # labels is a list of labels, conver it to a string - labels = ", ".join(labels) - label_plot = string_to_image(labels) - - return ( - qimage, - label_plot, - current_index, - history, - data, - training_samples_image, - labeled_images, - ) - - -def update_app(decision, data, current_index, history, username): - global NUMBER_OF_IMAGES - if current_index == -1: - fake_plot = string_to_image("Please Enter your username and load samples") - empty_image = Image.new("RGB", (224, 224)) - return empty_image, fake_plot, current_index, history, data, None, 0 - - if current_index == NUMBER_OF_IMAGES - 1: - time_stamp = int(time.time()) - - image_id = data[current_index]["id"] - # convert to percentage - dicision_dict = { - "id": int(image_id), - "user_id": username, - "time": time_stamp, - "decision": decision, - } - - # upload the decision to the server - temp_filename = f"results_{username}_{time_stamp}.json" - # convert decision_dict to json and save it on the disk - with open(temp_filename, "w") as f: - json.dump(dicision_dict, f) - - api = HfApi() - api.upload_file( - path_or_fileobj=temp_filename, - path_in_repo=temp_filename, - repo_id="taesiri/imagenet_hard_review_data", - repo_type="dataset", - ) - - os.remove(temp_filename) - - fake_plot = string_to_image("Thank you for your time!") - empty_image = Image.new("RGB", (224, 224)) - - remaining_images = len(data) - labeled_images = (len(bad_items) - remaining_images) + current_index - - return ( - empty_image, - fake_plot, - current_index, - history, - data, - None, - labeled_images + 1, - ) - - if current_index >= 0 and current_index < NUMBER_OF_IMAGES - 1: - time_stamp = int(time.time()) - - image_id = data[current_index]["id"] - # convert to percentage - dicision_dict = { - "id": int(image_id), - "user_id": username, - "time": time_stamp, - "decision": 
decision, - } - - # upload the decision to the server - temp_filename = f"results_{username}_{time_stamp}.json" - # convert decision_dict to json and save it on the disk - with open(temp_filename, "w") as f: - json.dump(dicision_dict, f) - - api = HfApi() - api.upload_file( - path_or_fileobj=temp_filename, - path_in_repo=temp_filename, - repo_id="taesiri/imagenet_hard_review_data", - repo_type="dataset", - ) - - os.remove(temp_filename) - - # Load the Next Image - - current_index += 1 - qimage, labels = load_sample(data, current_index) - image_id = data[current_index]["id"] - training_samples_image = get_training_samples(image_id) - training_samples_image = [ - Image.open(x).convert("RGB") for x in training_samples_image - ] - - # labels is a list of labels, conver it to a string - labels = ", ".join(labels) - label_plot = string_to_image(labels) - - remaining_images = len(data) - labeled_images = (len(bad_items) - remaining_images) + current_index - - return ( - qimage, - label_plot, - current_index, - history, - data, - training_samples_image, - labeled_images, - ) - - -newcss = """ -#query_image{ -} - -#nn_gallery { - height: auto !important; -} - -#sample_gallery { - height: auto !important; -} - - -/* Set display to flex for the parent element */ -.svelte-parentrowclass { - display: flex; -} - -/* Set the flex-grow property for the children elements */ -.svelte-parentrowclass > #query_image { - min-width: min(400px, 40%); - flex : 1; - flex-grow: 0; !important; - border-style: solid; - height: auto !important; -} - -.svelte-parentrowclass > .svelte-rightcolumn { - flex: 2; - flex-grow: 0; !important; - min-width: min(600px, 60%); -} - - - -""" - -with gr.Blocks(css=newcss, theme=gr.themes.Soft()) as demo: - data_gr = gr.State({}) - current_index = gr.State(-1) - history = gr.State({}) - - gr.Markdown("# Help Us to Clean `ImageNet-Hard`!") - - gr.Markdown("## Instructions") - gr.Markdown( - "Please enter your username and press `Load Samples`. The loading process might take up to a minute. Once the loading is done, you can start reviewing the samples." - ) - gr.Markdown( - """For each image, please select one of the following options: `Accept`, `Not Sure!`, `Reject`. - - If you think any of the labels are correct, please select `Accept`. - - If you think none of the labels matching the image, please select `Reject`. - - If you are not sure about the label, please select `Not Sure!`. - - You can refer to `Training samples` if you are not sure about the target label. 
- """ - ) - - random_str = "".join( - random.choice(string.ascii_lowercase + string.digits) for _ in range(5) - ) - - with gr.Column(): - with gr.Row(): - username = gr.Textbox(label="Username", value=f"user-{random_str}") - labeled_images = gr.Textbox(label="Labeled Images", value="0") - total_images = gr.Textbox(label="Total Images", value=len(bad_items)) - - prepare_btn = gr.Button(value="Load Samples") - - with gr.Column(): - with gr.Row(): - accept_btn = gr.Button(value="Accept") - myabe_btn = gr.Button(value="Not Sure!") - reject_btn = gr.Button(value="Reject") - with gr.Row(elem_id="parent_row", elem_classes="svelte-parentrowclass"): - query_image = gr.Image(type="pil", label="Query", elem_id="query_image") - with gr.Column( - elem_id="samples_col", - elem_classes="svelte-rightcolumn", - ): - label_plot = gr.Plot( - label="Is this a correct label for this image?", type="fig" - ) - training_samples = gr.Gallery( - type="pil", label="Training samples", elem_id="sample_gallery" - ) - - accept_btn.click( - update_app, - inputs=[accept_btn, data_gr, current_index, history, username], - outputs=[ - query_image, - label_plot, - current_index, - history, - data_gr, - training_samples, - labeled_images, - ], - ) - myabe_btn.click( - update_app, - inputs=[myabe_btn, data_gr, current_index, history, username], - outputs=[ - query_image, - label_plot, - current_index, - history, - data_gr, - training_samples, - labeled_images, - ], - ) - - reject_btn.click( - update_app, - inputs=[reject_btn, data_gr, current_index, history, username], - outputs=[ - query_image, - label_plot, - current_index, - history, - data_gr, - training_samples, - labeled_images, - ], - ) - - prepare_btn.click( - preprocessing, - inputs=[data_gr, current_index, history, username], - outputs=[ - query_image, - label_plot, - current_index, - history, - data_gr, - training_samples, - labeled_images, - ], - ) - - -demo.launch(debug=False) diff --git a/spaces/XzJosh/Diana-Bert-VITS2/README.md b/spaces/XzJosh/Diana-Bert-VITS2/README.md deleted file mode 100644 index a7748f382b35bb8bfad10023bdaebac7e542eb34..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Diana-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI嘉然① ---- \ No newline at end of file diff --git a/spaces/Y-T-G/Blur-Anything/CHANGELOG.md b/spaces/Y-T-G/Blur-Anything/CHANGELOG.md deleted file mode 100644 index 543faa9d3e55e8bb4b289f490c4aa69703bf7ccc..0000000000000000000000000000000000000000 --- a/spaces/Y-T-G/Blur-Anything/CHANGELOG.md +++ /dev/null @@ -1,12 +0,0 @@ -# Changelog - -## v0.2.0 - 2023-08-11 - -### MobileSAM -- Added quantized ONNX MobileSAM model. Pass `--sam_model_type vit_t` to use it. 
- -## v0.1.0 - 2023-05-06 - -### Blur-Anything Initial Release -- Added blur implementation -- Using pims instead of storing frames in memory for better memory usage \ No newline at end of file diff --git a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_transformer_attn.py b/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_transformer_attn.py deleted file mode 100644 index a479a27ea6fd4202359da435234408ba074f7577..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_transformer_attn.py +++ /dev/null @@ -1,186 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Multi-Head Attention layer definition.""" - -import math - -import numpy -import torch -from torch import nn - - -class MultiHeadedAttention(nn.Module): - """Multi-Head Attention layer. - Args: - n_head (int): The number of heads. - n_feat (int): The number of features. - dropout_rate (float): Dropout rate. - """ - - def __init__(self, n_head, n_feat, dropout_rate): - """Construct an MultiHeadedAttention object.""" - super(MultiHeadedAttention, self).__init__() - assert n_feat % n_head == 0 - # We assume d_v always equals d_k - self.d_k = n_feat // n_head - self.h = n_head - self.linear_q = nn.Linear(n_feat, n_feat) - self.linear_k = nn.Linear(n_feat, n_feat) - self.linear_v = nn.Linear(n_feat, n_feat) - self.linear_out = nn.Linear(n_feat, n_feat) - self.attn = None - self.dropout = nn.Dropout(p=dropout_rate) - - def forward_qkv(self, query, key, value): - """Transform query, key and value. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - Returns: - torch.Tensor: Transformed query tensor (#batch, n_head, time1, d_k). - torch.Tensor: Transformed key tensor (#batch, n_head, time2, d_k). - torch.Tensor: Transformed value tensor (#batch, n_head, time2, d_k). - """ - n_batch = query.size(0) - q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k) - k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k) - v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k) - q = q.transpose(1, 2) # (batch, head, time1, d_k) - k = k.transpose(1, 2) # (batch, head, time2, d_k) - v = v.transpose(1, 2) # (batch, head, time2, d_k) - - return q, k, v - - def forward_attention(self, value, scores, mask): - """Compute attention context vector. - Args: - value (torch.Tensor): Transformed value (#batch, n_head, time2, d_k). - scores (torch.Tensor): Attention score (#batch, n_head, time1, time2). - mask (torch.Tensor): Mask (#batch, 1, time2) or (#batch, time1, time2). - Returns: - torch.Tensor: Transformed value (#batch, time1, d_model) - weighted by the attention score (#batch, time1, time2). 
- """ - n_batch = value.size(0) - if mask is not None: - mask = mask.unsqueeze(1).eq(0) # (batch, 1, *, time2) - min_value = float( - numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min - ) - scores = scores.masked_fill(mask, min_value) - self.attn = torch.softmax(scores, dim=-1).masked_fill( - mask, 0.0 - ) # (batch, head, time1, time2) - else: - self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2) - - p_attn = self.dropout(self.attn) - x = torch.matmul(p_attn, value) # (batch, head, time1, d_k) - x = ( - x.transpose(1, 2).contiguous().view(n_batch, -1, self.h * self.d_k) - ) # (batch, time1, d_model) - - return self.linear_out(x) # (batch, time1, d_model) - - def forward(self, query, key, value, mask): - """Compute scaled dot product attention. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - mask (torch.Tensor): Mask tensor (#batch, 1, time2) or - (#batch, time1, time2). - Returns: - torch.Tensor: Output tensor (#batch, time1, d_model). - """ - q, k, v = self.forward_qkv(query, key, value) - scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k) - return self.forward_attention(v, scores, mask) - - -class RelPositionMultiHeadedAttention(MultiHeadedAttention): - """Multi-Head Attention layer with relative position encoding. - Paper: https://arxiv.org/abs/1901.02860 - Args: - n_head (int): The number of heads. - n_feat (int): The number of features. - dropout_rate (float): Dropout rate. - """ - - def __init__(self, n_head, n_feat, dropout_rate): - """Construct an RelPositionMultiHeadedAttention object.""" - super().__init__(n_head, n_feat, dropout_rate) - # linear transformation for positional ecoding - self.linear_pos = nn.Linear(n_feat, n_feat, bias=False) - # these two learnable bias are used in matrix c and matrix d - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k)) - self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k)) - torch.nn.init.xavier_uniform_(self.pos_bias_u) - torch.nn.init.xavier_uniform_(self.pos_bias_v) - - def rel_shift(self, x, zero_triu=False): - """Compute relative positinal encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, size). - zero_triu (bool): If true, return the lower triangular part of the matrix. - Returns: - torch.Tensor: Output tensor. - """ - zero_pad = torch.zeros((*x.size()[:3], 1), device=x.device, dtype=x.dtype) - x_padded = torch.cat([zero_pad, x], dim=-1) - - x_padded = x_padded.view(*x.size()[:2], x.size(3) + 1, x.size(2)) - x = x_padded[:, :, 1:].view_as(x) - - if zero_triu: - ones = torch.ones((x.size(2), x.size(3))) - x = x * torch.tril(ones, x.size(3) - x.size(2))[None, None, :, :] - - return x - - def forward(self, query, key, value, pos_emb, mask): - """Compute 'Scaled Dot Product Attention' with rel. positional encoding. - Args: - query (torch.Tensor): Query tensor (#batch, time1, size). - key (torch.Tensor): Key tensor (#batch, time2, size). - value (torch.Tensor): Value tensor (#batch, time2, size). - pos_emb (torch.Tensor): Positional embedding tensor (#batch, time2, size). - mask (torch.Tensor): Mask tensor (#batch, 1, time2) or - (#batch, time1, time2). - Returns: - torch.Tensor: Output tensor (#batch, time1, d_model). 
- """ - q, k, v = self.forward_qkv(query, key, value) - q = q.transpose(1, 2) # (batch, time1, head, d_k) - - n_batch_pos = pos_emb.size(0) - p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k) - p = p.transpose(1, 2) # (batch, head, time1, d_k) - - # (batch, head, time1, d_k) - q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2) - # (batch, head, time1, d_k) - q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2) - - # compute attention score - # first compute matrix a and matrix c - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - # (batch, head, time1, time2) - matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1)) - - # compute matrix b and matrix d - # (batch, head, time1, time2) - matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1)) - matrix_bd = self.rel_shift(matrix_bd) - - scores = (matrix_ac + matrix_bd) / math.sqrt( - self.d_k - ) # (batch, head, time1, time2) - - return self.forward_attention(v, scores, mask) diff --git a/spaces/ZeroTwo3/WavJourney/scripts/kill_services.py b/spaces/ZeroTwo3/WavJourney/scripts/kill_services.py deleted file mode 100644 index 83d6e297f6989b94127c015266237bff0a8186d1..0000000000000000000000000000000000000000 --- a/spaces/ZeroTwo3/WavJourney/scripts/kill_services.py +++ /dev/null @@ -1,11 +0,0 @@ -import os - -# Extract values for each application -service_port = os.environ.get('WAVJOURNEY_SERVICE_PORT') - -# Execute the commands -os.system(f'kill $(lsof -t -i :{service_port})') - - - - diff --git a/spaces/ZeroTwo3/one_shot_talking_face_from_text/README.md b/spaces/ZeroTwo3/one_shot_talking_face_from_text/README.md deleted file mode 100644 index 992731d2ca6af7c09ebe904b6a41f502b7222fe4..0000000000000000000000000000000000000000 --- a/spaces/ZeroTwo3/one_shot_talking_face_from_text/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: One Shot Talking Face From Text -emoji: 🐠 -colorFrom: yellow -colorTo: pink -sdk: docker -pinned: false -duplicated_from: pragnakalp/one_shot_talking_face_from_text ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/assign_result.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/assign_result.py deleted file mode 100644 index 4639fbdba0a5b92778e1ab87d61182e54bfb9b6f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,204 +0,0 @@ -import torch - -from mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assinged to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned] - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. 
- - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/__init__.py deleted file mode 100644 index bbe70145b8bf7c304370f725f5afa8db98666679..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .res_layer import ResLayer - -__all__ = ['ResLayer'] diff --git a/spaces/agamthind/foodvision_mini/model.py b/spaces/agamthind/foodvision_mini/model.py deleted file mode 100644 index 52c2696c874740179528f0bdae8ce87b774a138f..0000000000000000000000000000000000000000 --- a/spaces/agamthind/foodvision_mini/model.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import torchvision - -from torch import nn - - -def create_effnetb2_model(num_classes:int=3, - seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. - """ - # Create EffNetB2 pretrained weights, transforms and model - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - # Freeze all layers in base model - for param in model.parameters(): - param.requires_grad = False - - # Change classifier head with random seed for reproducibility - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms diff --git a/spaces/akhaliq/GPEN/face_detect/facemodels/retinaface.py b/spaces/akhaliq/GPEN/face_detect/facemodels/retinaface.py deleted file mode 100644 index b7092a2bc2f35d06ce99d25473bce913ef3fd8e7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/face_detect/facemodels/retinaface.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.models.detection.backbone_utils as backbone_utils -import torchvision.models._utils as _utils -import torch.nn.functional as F -from collections import OrderedDict - -from facemodels.net import MobileNetV1 as MobileNetV1 -from facemodels.net import FPN as FPN -from facemodels.net import SSH as SSH - - - -class ClassHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(ClassHead,self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels,self.num_anchors*2,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 2) - -class BboxHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(BboxHead,self).__init__() - self.conv1x1 = 
nn.Conv2d(inchannels,num_anchors*4,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 4) - -class LandmarkHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(LandmarkHead,self).__init__() - self.conv1x1 = nn.Conv2d(inchannels,num_anchors*10,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 10) - -class RetinaFace(nn.Module): - def __init__(self, cfg = None, phase = 'train'): - """ - :param cfg: Network related settings. - :param phase: train or test. - """ - super(RetinaFace,self).__init__() - self.phase = phase - backbone = None - if cfg['name'] == 'mobilenet0.25': - backbone = MobileNetV1() - if cfg['pretrain']: - checkpoint = torch.load("./weights/mobilenetV1X0.25_pretrain.tar", map_location=torch.device('cpu')) - from collections import OrderedDict - new_state_dict = OrderedDict() - for k, v in checkpoint['state_dict'].items(): - name = k[7:] # remove module. - new_state_dict[name] = v - # load params - backbone.load_state_dict(new_state_dict) - elif cfg['name'] == 'Resnet50': - import torchvision.models as models - backbone = models.resnet50(pretrained=cfg['pretrain']) - - self.body = _utils.IntermediateLayerGetter(backbone, cfg['return_layers']) - in_channels_stage2 = cfg['in_channel'] - in_channels_list = [ - in_channels_stage2 * 2, - in_channels_stage2 * 4, - in_channels_stage2 * 8, - ] - out_channels = cfg['out_channel'] - self.fpn = FPN(in_channels_list,out_channels) - self.ssh1 = SSH(out_channels, out_channels) - self.ssh2 = SSH(out_channels, out_channels) - self.ssh3 = SSH(out_channels, out_channels) - - self.ClassHead = self._make_class_head(fpn_num=3, inchannels=cfg['out_channel']) - self.BboxHead = self._make_bbox_head(fpn_num=3, inchannels=cfg['out_channel']) - self.LandmarkHead = self._make_landmark_head(fpn_num=3, inchannels=cfg['out_channel']) - - def _make_class_head(self,fpn_num=3,inchannels=64,anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels,anchor_num)) - return classhead - - def _make_bbox_head(self,fpn_num=3,inchannels=64,anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels,anchor_num)) - return bboxhead - - def _make_landmark_head(self,fpn_num=3,inchannels=64,anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels,anchor_num)) - return landmarkhead - - def forward(self,inputs): - out = self.body(inputs) - - # FPN - fpn = self.fpn(out) - - # SSH - feature1 = self.ssh1(fpn[0]) - feature2 = self.ssh2(fpn[1]) - feature3 = self.ssh3(fpn[2]) - features = [feature1, feature2, feature3] - - bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1) - classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)],dim=1) - ldm_regressions = torch.cat([self.LandmarkHead[i](feature) for i, feature in enumerate(features)], dim=1) - - if self.phase == 'train': - output = (bbox_regressions, classifications, ldm_regressions) - else: - output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions) - return output \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/__init__.py 
b/spaces/akhaliq/Mask2Former/mask2former/data/datasets/__init__.py deleted file mode 100644 index 403a678e3ba6655135f36e788ad53587f05d6d1e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import ( - register_ade20k_full, - register_ade20k_panoptic, - register_coco_stuff_10k, - register_mapillary_vistas, - register_coco_panoptic_annos_semseg, - register_ade20k_instance, - register_mapillary_vistas_panoptic, -) diff --git a/spaces/akhaliq/webui-orangemixs/README.md b/spaces/akhaliq/webui-orangemixs/README.md deleted file mode 100644 index 013d12c9f3a56698056ae1bdbbfb0ec009805237..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/webui-orangemixs/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI -emoji: 🚧 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: camenduru/webui ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/debug.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/debug.py deleted file mode 100644 index d3f1f28de4c9529607576d19daac3104a3fd7b0a..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/debug.py +++ /dev/null @@ -1,202 +0,0 @@ -import locale -import logging -import os -import sys -from optparse import Values -from types import ModuleType -from typing import Any, Dict, List, Optional - -import pip._vendor -from pip._vendor.certifi import where -from pip._vendor.packaging.version import parse as parse_version - -from pip import __file__ as pip_location -from pip._internal.cli import cmdoptions -from pip._internal.cli.base_command import Command -from pip._internal.cli.cmdoptions import make_target_python -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.configuration import Configuration -from pip._internal.metadata import get_environment -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import get_pip_version - -logger = logging.getLogger(__name__) - - -def show_value(name: str, value: Any) -> None: - logger.info("%s: %s", name, value) - - -def show_sys_implementation() -> None: - logger.info("sys.implementation:") - implementation_name = sys.implementation.name - with indent_log(): - show_value("name", implementation_name) - - -def create_vendor_txt_map() -> Dict[str, str]: - vendor_txt_path = os.path.join( - os.path.dirname(pip_location), "_vendor", "vendor.txt" - ) - - with open(vendor_txt_path) as f: - # Purge non version specifying lines. - # Also, remove any space prefix or suffixes (including comments). - lines = [ - line.strip().split(" ", 1)[0] for line in f.readlines() if "==" in line - ] - - # Transform into "module" -> version dict. 
- return dict(line.split("==", 1) for line in lines) # type: ignore - - -def get_module_from_module_name(module_name: str) -> ModuleType: - # Module name can be uppercase in vendor.txt for some reason... - module_name = module_name.lower() - # PATCH: setuptools is actually only pkg_resources. - if module_name == "setuptools": - module_name = "pkg_resources" - - __import__(f"pip._vendor.{module_name}", globals(), locals(), level=0) - return getattr(pip._vendor, module_name) - - -def get_vendor_version_from_module(module_name: str) -> Optional[str]: - module = get_module_from_module_name(module_name) - version = getattr(module, "__version__", None) - - if not version: - # Try to find version in debundled module info. - env = get_environment([os.path.dirname(module.__file__)]) - dist = env.get_distribution(module_name) - if dist: - version = str(dist.version) - - return version - - -def show_actual_vendor_versions(vendor_txt_versions: Dict[str, str]) -> None: - """Log the actual version and print extra info if there is - a conflict or if the actual version could not be imported. - """ - for module_name, expected_version in vendor_txt_versions.items(): - extra_message = "" - actual_version = get_vendor_version_from_module(module_name) - if not actual_version: - extra_message = ( - " (Unable to locate actual module version, using" - " vendor.txt specified version)" - ) - actual_version = expected_version - elif parse_version(actual_version) != parse_version(expected_version): - extra_message = ( - " (CONFLICT: vendor.txt suggests version should" - " be {})".format(expected_version) - ) - logger.info("%s==%s%s", module_name, actual_version, extra_message) - - -def show_vendor_versions() -> None: - logger.info("vendored library versions:") - - vendor_txt_versions = create_vendor_txt_map() - with indent_log(): - show_actual_vendor_versions(vendor_txt_versions) - - -def show_tags(options: Values) -> None: - tag_limit = 10 - - target_python = make_target_python(options) - tags = target_python.get_tags() - - # Display the target options that were explicitly provided. - formatted_target = target_python.format_given() - suffix = "" - if formatted_target: - suffix = f" (target: {formatted_target})" - - msg = "Compatible tags: {}{}".format(len(tags), suffix) - logger.info(msg) - - if options.verbose < 1 and len(tags) > tag_limit: - tags_limited = True - tags = tags[:tag_limit] - else: - tags_limited = False - - with indent_log(): - for tag in tags: - logger.info(str(tag)) - - if tags_limited: - msg = ( - "...\n[First {tag_limit} tags shown. Pass --verbose to show all.]" - ).format(tag_limit=tag_limit) - logger.info(msg) - - -def ca_bundle_info(config: Configuration) -> str: - levels = set() - for key, _ in config.items(): - levels.add(key.split(".")[0]) - - if not levels: - return "Not specified" - - levels_that_override_global = ["install", "wheel", "download"] - global_overriding_level = [ - level for level in levels if level in levels_that_override_global - ] - if not global_overriding_level: - return "global" - - if "global" in levels: - levels.remove("global") - return ", ".join(levels) - - -class DebugCommand(Command): - """ - Display debug information. 
- """ - - usage = """ - %prog """ - ignore_require_venv = True - - def add_options(self) -> None: - cmdoptions.add_target_python_options(self.cmd_opts) - self.parser.insert_option_group(0, self.cmd_opts) - self.parser.config.load() - - def run(self, options: Values, args: List[str]) -> int: - logger.warning( - "This command is only meant for debugging. " - "Do not use this with automation for parsing and getting these " - "details, since the output and options of this command may " - "change without notice." - ) - show_value("pip version", get_pip_version()) - show_value("sys.version", sys.version) - show_value("sys.executable", sys.executable) - show_value("sys.getdefaultencoding", sys.getdefaultencoding()) - show_value("sys.getfilesystemencoding", sys.getfilesystemencoding()) - show_value( - "locale.getpreferredencoding", - locale.getpreferredencoding(), - ) - show_value("sys.platform", sys.platform) - show_sys_implementation() - - show_value("'cert' config value", ca_bundle_info(self.parser.config)) - show_value("REQUESTS_CA_BUNDLE", os.environ.get("REQUESTS_CA_BUNDLE")) - show_value("CURL_CA_BUNDLE", os.environ.get("CURL_CA_BUNDLE")) - show_value("pip._vendor.certifi.where()", where()) - show_value("pip._vendor.DEBUNDLED", pip._vendor.DEBUNDLED) - - show_vendor_versions() - - show_tags(options) - - return SUCCESS diff --git a/spaces/aliabid94/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/aliabid94/AutoGPT/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. - schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. - """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." 
- ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. - """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. - - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." 
- "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/README.md b/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/README.md deleted file mode 100644 index 0d93a78511d933f02a7767c1b98e6de361bd86ed..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/AI.Dashboard.Gradio.Streamlit.HTML5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI.Dashboard.Gradio.Streamlit.HTML5 -emoji: 🏃 -colorFrom: red -colorTo: gray -sdk: static -pinned: false -license: mit -duplicated_from: awacke1/AI.Dashboard.Gradio.Streamlit.HTML5 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/allknowingroger/Image-Models-Test4/README.md b/spaces/allknowingroger/Image-Models-Test4/README.md deleted file mode 100644 index ff5c0f60d0099974c9d4b8a16d34aef170f6ae56..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test4/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test3 ---- - - \ No newline at end of file diff --git a/spaces/amin2809/rvc-models2023/config.py b/spaces/amin2809/rvc-models2023/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/amin2809/rvc-models2023/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", 
type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. -# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/amsterdamNLP/contrastive-pairs/README.md b/spaces/amsterdamNLP/contrastive-pairs/README.md deleted file mode 100644 index 44b504dfc7c221c8098265015fd7623e0704a9eb..0000000000000000000000000000000000000000 --- a/spaces/amsterdamNLP/contrastive-pairs/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Contrastive Pairs -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.46.1 -app_file: app.py -pinned: false ---- - -A demo showing what GPT-2 thinks is the most likely of the pair for a sample of Crow-S sentence pairs diff --git a/spaces/anhnv125/FRN/config.py b/spaces/anhnv125/FRN/config.py deleted file mode 100644 index e6e60bb068158313e479d68b2afefbb38369ad92..0000000000000000000000000000000000000000 --- a/spaces/anhnv125/FRN/config.py +++ /dev/null @@ -1,59 +0,0 @@ -class CONFIG: - gpus = "0,1" # List of gpu devices - - class TRAIN: - batch_size = 90 # number of audio files per batch - lr = 1e-4 # learning rate - epochs = 150 # max training epochs - workers = 12 # number of dataloader workers - val_split = 0.1 # validation set proportion - clipping_val = 1.0 # gradient clipping value - patience = 3 # learning rate scheduler's patience - factor = 0.5 # learning rate reduction factor - - # Model config - class MODEL: - enc_layers = 4 # number of MLP blocks in the encoder - enc_in_dim = 384 # dimension of the input projection layer in the encoder - enc_dim = 768 # dimension of the MLP blocks - pred_dim = 512 # dimension of the LSTM in the predictor - pred_layers = 1 # number of LSTM layers in the predictor - - # Dataset config - class DATA: - dataset = 'vctk' # dataset to use - ''' - Dictionary that specifies paths to root directories and train/test text files of each datasets. - 'root' is the path to the dataset and each line of the train.txt/test.txt files should contains the path to an - audio file from 'root'. 
- ''' - data_dir = {'vctk': {'root': 'data/vctk/wav48', - 'train': "data/vctk/train.txt", - 'test': "data/vctk/test.txt"}, - } - - assert dataset in data_dir.keys(), 'Unknown dataset.' - sr = 48000 # audio sampling rate - audio_chunk_len = 122880 # size of chunk taken in each audio files - window_size = 960 # window size of the STFT operation, equivalent to packet size - stride = 480 # stride of the STFT operation - - class TRAIN: - packet_sizes = [256, 512, 768, 960, 1024, - 1536] # packet sizes for training. All sizes should be divisible by 'audio_chunk_len' - transition_probs = ((0.9, 0.1), (0.5, 0.1), (0.5, 0.5)) # list of trainsition probs for Markow Chain - - class EVAL: - packet_size = 960 # 20ms - transition_probs = [(0.9, 0.1)] # (0.9, 0.1) ~ 10%; (0.8, 0.2) ~ 20%; (0.6, 0.4) ~ 40% - masking = 'gen' # whether using simulation or real traces from Microsoft to generate masks - assert masking in ['gen', 'real'] - trace_path = 'test_samples/blind/lossy_singals' # must be clarified if masking = 'real' - - class LOG: - log_dir = 'lightning_logs' # checkpoint and log directory - sample_path = 'audio_samples' # path to save generated audio samples in evaluation. - - class TEST: - in_dir = 'test_samples/blind/lossy_signals' # path to test audio inputs - out_dir = 'test_samples/blind/lossy_signals_out' # path to generated outputs diff --git a/spaces/anirbans403/wikisummarizer/app.py b/spaces/anirbans403/wikisummarizer/app.py deleted file mode 100644 index a0dec09f87caf3efd3b7ab0336232f8cf31bb0f9..0000000000000000000000000000000000000000 --- a/spaces/anirbans403/wikisummarizer/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st -import bs4 as bs -import urllib.request -import re - -def main(): - st.title("Wikipedia Summarizer") - url_topull= st.text_input("Enter the Wikipedia URL to pull - ") - if url_topull!='': - scraped_data = urllib.request.urlopen(url_topull) - article = scraped_data.read() - - parsed_article=bs.BeautifulSoup(article,'lxml') - - paragraphs = parsed_article.find_all('p') - - article_text = "" - - for p in paragraphs: - article_text += p.text - article_text = re.sub(r'\[[0-9]*\]', ' ', article_text) - article_text = re.sub(r'\s+', ' ', article_text) - - import nltk - nltk.download('punkt') - nltk.download('stopwords') - import heapq - number=st.text_input('How many sentences long do you want your summary to be?') - if number!='': - sent_num = int(number) - formatted_article_text = re.sub('[^a-zA-Z]', ' ', article_text ) - formatted_article_text = re.sub(r'\s+', ' ', formatted_article_text) - sentence_list = nltk.sent_tokenize(article_text) - - stopwords = nltk.corpus.stopwords.words('english') - word_frequencies = {} - for word in nltk.word_tokenize(formatted_article_text): - if word not in stopwords: - if word not in word_frequencies.keys(): - word_frequencies[word] = 1 - else: - word_frequencies[word] += 1 - - maximum_frequncy = max(word_frequencies.values()) - - for word in word_frequencies.keys(): - word_frequencies[word] = (word_frequencies[word]/maximum_frequncy) - sentence_scores = {} - for sent in sentence_list: - for word in nltk.word_tokenize(sent.lower()): - if word in word_frequencies.keys(): - if len(sent.split(' ')) < 30: - if sent not in sentence_scores.keys(): - sentence_scores[sent] = word_frequencies[word] - else: - sentence_scores[sent] += word_frequencies[word] - - summary_sentences = heapq.nlargest(sent_num, sentence_scores, key=sentence_scores.get) - summary = ' '.join(summary_sentences) - st.markdown("# Summary: ") - st.write(summary) - 
-if __name__ == '__main__': - main() diff --git a/spaces/aodianyun/stable-diffusion-webui/javascript/localization.js b/spaces/aodianyun/stable-diffusion-webui/javascript/localization.js deleted file mode 100644 index 7b4affabd7bd9e5d9e8193e51e1e40267f0d4cde..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/javascript/localization.js +++ /dev/null @@ -1,165 +0,0 @@ - -// localization = {} -- the dict with translations is created by the backend - -ignore_ids_for_localization={ - setting_sd_hypernetwork: 'OPTION', - setting_sd_model_checkpoint: 'OPTION', - setting_realesrgan_enabled_models: 'OPTION', - modelmerger_primary_model_name: 'OPTION', - modelmerger_secondary_model_name: 'OPTION', - modelmerger_tertiary_model_name: 'OPTION', - train_embedding: 'OPTION', - train_hypernetwork: 'OPTION', - txt2img_styles: 'OPTION', - img2img_styles: 'OPTION', - setting_random_artist_categories: 'SPAN', - setting_face_restoration_model: 'SPAN', - setting_realesrgan_enabled_models: 'SPAN', - extras_upscaler_1: 'SPAN', - extras_upscaler_2: 'SPAN', -} - -re_num = /^[\.\d]+$/ -re_emoji = /[\p{Extended_Pictographic}\u{1F3FB}-\u{1F3FF}\u{1F9B0}-\u{1F9B3}]/u - -original_lines = {} -translated_lines = {} - -function textNodesUnder(el){ - var n, a=[], walk=document.createTreeWalker(el,NodeFilter.SHOW_TEXT,null,false); - while(n=walk.nextNode()) a.push(n); - return a; -} - -function canBeTranslated(node, text){ - if(! text) return false; - if(! node.parentElement) return false; - - parentType = node.parentElement.nodeName - if(parentType=='SCRIPT' || parentType=='STYLE' || parentType=='TEXTAREA') return false; - - if (parentType=='OPTION' || parentType=='SPAN'){ - pnode = node - for(var level=0; level<4; level++){ - pnode = pnode.parentElement - if(! pnode) break; - - if(ignore_ids_for_localization[pnode.id] == parentType) return false; - } - } - - if(re_num.test(text)) return false; - if(re_emoji.test(text)) return false; - return true -} - -function getTranslation(text){ - if(! text) return undefined - - if(translated_lines[text] === undefined){ - original_lines[text] = 1 - } - - tl = localization[text] - if(tl !== undefined){ - translated_lines[tl] = 1 - } - - return tl -} - -function processTextNode(node){ - text = node.textContent.trim() - - if(! 
canBeTranslated(node, text)) return - - tl = getTranslation(text) - if(tl !== undefined){ - node.textContent = tl - } -} - -function processNode(node){ - if(node.nodeType == 3){ - processTextNode(node) - return - } - - if(node.title){ - tl = getTranslation(node.title) - if(tl !== undefined){ - node.title = tl - } - } - - if(node.placeholder){ - tl = getTranslation(node.placeholder) - if(tl !== undefined){ - node.placeholder = tl - } - } - - textNodesUnder(node).forEach(function(node){ - processTextNode(node) - }) -} - -function dumpTranslations(){ - dumped = {} - if (localization.rtl) { - dumped.rtl = true - } - - Object.keys(original_lines).forEach(function(text){ - if(dumped[text] !== undefined) return - - dumped[text] = localization[text] || text - }) - - return dumped -} - -onUiUpdate(function(m){ - m.forEach(function(mutation){ - mutation.addedNodes.forEach(function(node){ - processNode(node) - }) - }); -}) - - -document.addEventListener("DOMContentLoaded", function() { - processNode(gradioApp()) - - if (localization.rtl) { // if the language is from right to left, - (new MutationObserver((mutations, observer) => { // wait for the style to load - mutations.forEach(mutation => { - mutation.addedNodes.forEach(node => { - if (node.tagName === 'STYLE') { - observer.disconnect(); - - for (const x of node.sheet.rules) { // find all rtl media rules - if (Array.from(x.media || []).includes('rtl')) { - x.media.appendMedium('all'); // enable them - } - } - } - }) - }); - })).observe(gradioApp(), { childList: true }); - } -}) - -function download_localization() { - text = JSON.stringify(dumpTranslations(), null, 4) - - var element = document.createElement('a'); - element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text)); - element.setAttribute('download', "localization.json"); - element.style.display = 'none'; - document.body.appendChild(element); - - element.click(); - - document.body.removeChild(element); -} diff --git a/spaces/ardha27/rvc-hololive/README.md b/spaces/ardha27/rvc-hololive/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc-hololive/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/__init__.py deleted file mode 100644 index d30aedbdf8f6bd0a31720307d51c6965c97f2289..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -import os - - -def iter_examples(): - """Iterate over the examples in this directory. 
- - Each item is a dict with the following keys: - - "name" : the unique name of the example - - "filename" : the full file path to the example - """ - example_dir = os.path.abspath(os.path.dirname(__file__)) - for filename in os.listdir(example_dir): - name, ext = os.path.splitext(filename) - if name.startswith('_') or ext != '.py': - continue - yield {'name': name, - 'filename': os.path.join(example_dir, filename)} diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/display.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/display.py deleted file mode 100644 index 91c5f33e093b32cf81accd6fdeeb8a18292c28c0..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/display.py +++ /dev/null @@ -1,11 +0,0 @@ -from ..utils.display import Displayable, default_renderer_base, json_renderer_base -from ..utils.display import RendererRegistry, HTMLRenderer - - -__all__ = ( - "Displayable", - "default_renderer_base", - "json_renderer_base", - "RendererRegistry", - "HTMLRenderer", -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/TokenTagToken.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/TokenTagToken.py deleted file mode 100644 index d00327ae9af3cca086f57212147b20fa3462d479..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/TokenTagToken.py +++ /dev/null @@ -1,47 +0,0 @@ -# -# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved. -# Use of this file is governed by the BSD 3-clause license that -# can be found in the LICENSE.txt file in the project root. -# - -# -# A {@link Token} object representing a token of a particular type; e.g., -# {@code }. These tokens are created for {@link TagChunk} chunks where the -# tag corresponds to a lexer rule or token type. -# -from antlr4.Token import CommonToken - - -class TokenTagToken(CommonToken): - - # Constructs a new instance of {@link TokenTagToken} with the specified - # token name, type, and label. - # - # @param tokenName The token name. - # @param type The token type. - # @param label The label associated with the token tag, or {@code null} if - # the token tag is unlabeled. - # - def __init__(self, tokenName:str, type:int, label:str=None): - super().__init__(type=type) - self.tokenName = tokenName - self.label = label - self._text = self.getText() - - # - # {@inheritDoc} - # - #

    The implementation for {@link TokenTagToken} returns the token tag - # formatted with {@code <} and {@code >} delimiters.

    - # - def getText(self): - if self.label is None: - return "<" + self.tokenName + ">" - else: - return "<" + self.label + ":" + self.tokenName + ">" - - #

    The implementation for {@link TokenTagToken} returns a string of the form - # {@code tokenName:type}.

    - # - def __str__(self): - return self.tokenName + ":" + str(self.type) diff --git a/spaces/asciicorp/Legal-ai/poli2.py b/spaces/asciicorp/Legal-ai/poli2.py deleted file mode 100644 index cde57a1880c88c8712890104a4307dc1b7f0d81e..0000000000000000000000000000000000000000 --- a/spaces/asciicorp/Legal-ai/poli2.py +++ /dev/null @@ -1,27 +0,0 @@ -from langchain import PromptTemplate, LLMChain -from langchain.llms import OpenAI -import openai -import os -os.environ["OPENAI_API_KEY"] = "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" - -template = """Does the following agreement follow the company policy fully? -give a detailed answer with a through analysis. -if there are any conflicting points at all, quote them. - -Policy: -{policy_text} - -Agreement: -{agreement_text} - -Answer in Markdown:""" - -prompt = PromptTemplate(template=template, input_variables=["policy_text", "agreement_text"]) - -llm = OpenAI(temperature=0) - -llm_chain = LLMChain(prompt=prompt, llm=llm) - -def check_agreement(policy_text, agreement_text): - response = llm_chain.run(policy_text=policy_text, agreement_text=agreement_text) - return response diff --git a/spaces/asimokby/cv-parser-huggingface/app.py b/spaces/asimokby/cv-parser-huggingface/app.py deleted file mode 100644 index 3ea7484cbb49893ce4ee231b2ecc635ea2a79319..0000000000000000000000000000000000000000 --- a/spaces/asimokby/cv-parser-huggingface/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from pydoc import describe -import gradio as gr -from Main import Main - - -main = Main() - -def parse_cv(cv): - return main.parse_cv(cv.name) - - -description = """A demo for a CV parser built with HuggingFace's transformers.""" -article = "Find the code on GitHub 🚀: https://github.com/asimokby/cv-parser-huggingface" -file_input = gr.inputs.File(file_count="single", type="file", label="Upload a CV: .PDF Or .TXT", optional=False) -iface = gr.Interface(fn=parse_cv, inputs=file_input, outputs="json", allow_flagging="never", - allow_screenshot=False, title="CV Parser", theme="dark", description=description, article=article) - -iface.launch() \ No newline at end of file diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/David Peletz.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/David Peletz.html deleted file mode 100644 index bb32a7a1e9fdc20a3b8fa262736765711d5de0e9..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/David Peletz.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - David Peletz - - - - -
    -

    David Peletz

    - -
    -
    How did you hear about SM?
    • LinkedIn

    Brief background
    • DS at Future (fitness tech)
    • Before: DS at Avisa
    • MSc at UC Berkeley
    • background in CS and DS
    • fitness and NLP

    Mentorship experience
    • informally throughout my career (interns and younger teammates)
    • in the master's program - gave feedback on resumes and related application materials
    • TA and Python tutoring

    What do beginners need and how can you help?
    • Information overload
    • knowing where to start
    • knowledge (probability, stats, business domain, coding) 
    • and strategy (reaching out, applying)
    • structure and accountability can help
    • weekly/bi-weekly calls - communicating clearly on both ends!
    • prepping for tech interviews - there is so much to cover
    -
    -
    Questions about SM:
    • What are the terms of the contract?
    • Placement rate for mentees?
    • What do successes look like? What about failures?
    • What does the application look like?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/austin/adr-detection/app.py b/spaces/austin/adr-detection/app.py deleted file mode 100644 index 61ce8206ff29c71dd5e27fe9e685ab4c5f2c9ca7..0000000000000000000000000000000000000000 --- a/spaces/austin/adr-detection/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import matplotlib.cm as cm -import html -import torch -import numpy as np -from transformers import pipeline -import gradio as gr -def value2rgba(x, cmap=cm.RdYlGn, alpha_mult=1.0): - "Convert a value `x` from 0 to 1 (inclusive) to an RGBA tuple according to `cmap` times transparency `alpha_mult`." - c = cmap(x) - rgb = (np.array(c[:-1]) * 255).astype(int) - a = c[-1] * alpha_mult - return tuple(rgb.tolist() + [a]) -def piece_prob_html(pieces, prob, colors, label, sep=' ', **kwargs): - html_code,spans = [''], [] - for p, a, cols, l in zip(pieces, prob, colors, label): - p = html.escape(p) - c = str(value2rgba(a, cmap=cols, alpha_mult=0.5, **kwargs)) - spans.append(f'{p}') - html_code.append(sep.join(spans)) - html_code.append('') - return ''.join(html_code) -def nothing_ent(i, word): - return { - 'entity': 'O', - 'score': 0, - 'index': i, - 'word': word, - 'start': 0, - 'end': 0 -} -def _gradio_highlighting(text): - result = ner_model(text) - tokens = ner_model.tokenizer.tokenize(text) - label_indeces = [i['index'] - 1 for i in result] - entities = list() - for i, word in enumerate(tokens): - if i in label_indeces: - entities.append(result[label_indeces.index(i)]) - else: - entities.append(nothing_ent(i, word)) - entities = ner_model.group_entities(entities) - spans = [e['word'] for e in entities] - probs = [e['score'] for e in entities] - labels = [e['entity_group'] for e in entities] - colors = [cm.RdPu if label == 'ADR' else cm.YlGn for i, label in enumerate(labels)] - return piece_prob_html(spans, probs, colors, labels, sep=' ') - -default_text = """# Pancreatitis - - Lipase: 535 -> 154 -> 145 - - Managed with NBM, IV fluids - - CT AP and abdo USS: normal - - Likely secondary to Azathioprine - ceased, never to be used again. - - Resolved with conservative measures -""" - -title = "Adverse Drug Reaction Highlighting" -description = "Named Entity Recognition model to detect ADRs in discharge summaries" -article = """This app was made to accompany our recent [paper](https://www.medrxiv.org/content/10.1101/2021.12.11.21267504v2).
    -ADRs will be highlighted in purple, offending medications in green.
    -Hover over a word to see the strength of each prediction on a 0-1 scale.
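The snippet below is a rough sketch (not this app's exact code) of how the same idea can be reproduced with the Hugging Face `pipeline` API: `aggregation_strategy="simple"` merges word pieces into entity spans, each with a confidence score, and the label names come from the model's own config rather than from this sketch. The model name and example sentence are taken from this repo; everything else is illustrative.
```python
from transformers import pipeline

# Rough sketch of the idea behind the demo: the model tags each token, and the
# grouped spans carry a confidence score that the app maps to colour opacity.
ner = pipeline(
    "token-classification",
    model="austin/adr-ner",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

example = "Likely secondary to Azathioprine - ceased, never to be used again."
for ent in ner(example):
    # entity_group / score / word are what a highlighter needs; the exact
    # label names (ADR vs. medication) are defined by the model, not here.
    print(ent["entity_group"], round(float(ent["score"]), 2), ent["word"])
```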
    -Our training code can be found at [github](https://github.com/AustinMOS/adr-nlp). -""" - -ner_model = pipeline(task = 'token-classification', model = "austin/adr-ner") -iface = gr.Interface(_gradio_highlighting, - [ - gr.inputs.Textbox( - lines=7, - label="Text", - default=default_text), - ], - gr.outputs.HTML(label="ADR Prediction"), - title = title, - description = description, - article = article, - theme = "huggingface" -) -iface.launch() \ No newline at end of file diff --git a/spaces/avorozhko/funbot/app.py b/spaces/avorozhko/funbot/app.py deleted file mode 100644 index 495fed57a231b336be16f3434f2dbfcf89756ddf..0000000000000000000000000000000000000000 --- a/spaces/avorozhko/funbot/app.py +++ /dev/null @@ -1,128 +0,0 @@ -# app.py -import random -import torch -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer -from util_funcs import getLengthParam, calcAnswerLengthByProbability, cropContext - -def chat_function(Message, History): # model, tokenizer - - input_user = Message - - history = History or [] - - chat_history_ids = torch.zeros((1, 0), dtype=torch.int) if history == [] else torch.tensor(history[-1][2], dtype=torch.long) - - # encode the new user input, add parameters and return a tensor in Pytorch - lengthId = getLengthParam(input_user, tokenizer) - new_user_input_ids = tokenizer.encode(f"|0|{lengthId}|" \ - + input_user + tokenizer.eos_token, return_tensors="pt") - # append the new user input tokens to the chat history - chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) - - # Длину ожидаемой фразы мы рассчитаем на основании последнего инпута - # Например, я не люблю когда на мой длинный ответ отвечают короткой фразой - # Но пойдем через вероятности: - # при длинном инпуте 60% что будет длинный ответ (3), 30% что средний (2), 10% что короткий (1) - # при среднем инпуте 50% что ответ будет средний (2), и по 25% на оба остальных случая - # при коротком инпуте 50% что ответ будет короткий (1), 30% что средний (2) и 20% что длинный (3) - # см. 
функцию calcAnswerLengthByProbability() - - next_len = calcAnswerLengthByProbability(lengthId) - - # encode the new user input, add parameters and return a tensor in Pytorch - new_user_input_ids = tokenizer.encode(f"|1|{next_len}|", return_tensors="pt") - - # append the new user input tokens to the chat history - chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) - - chat_history_ids = cropContext(chat_history_ids, 10) - - print(tokenizer.decode(chat_history_ids[-1]))# uncomment for debug - - # save previous len - input_len = chat_history_ids.shape[-1] - # generated a response; PS you can read about the parameters at hf.co/blog/how-to-generate - - temperature = 0.6 - - # Обрезаем контекст до нужной длины с конца - - # Создадим копию изначальных данных на случай если придется перегенерировать ответ - chat_history_ids_initial = chat_history_ids - - while True: - chat_history_ids = model.generate( - chat_history_ids, - num_return_sequences=1, - min_length = 2, - max_length=512, - no_repeat_ngram_size=3, - do_sample=True, - top_k=50, - top_p=0.9, - temperature = temperature, - mask_token_id=tokenizer.mask_token_id, - eos_token_id=tokenizer.eos_token_id, - unk_token_id=tokenizer.unk_token_id, - pad_token_id=tokenizer.pad_token_id, - device='cpu' - ) - - answer = tokenizer.decode(chat_history_ids[:, input_len:][0], skip_special_tokens=True) - - if (len(answer) > 0 and answer[-1] != ',' and answer[-1] != ':'): - break - else: - if (temperature <= 0.1): - temperature -= 0.1 - - # Случай когда надо перегенерировать ответ наступил, берем изначальный тензор - chat_history_ids = chat_history_ids_initial - - history.append((input_user, answer, chat_history_ids.tolist())) - html = "
    " - for user_msg, resp_msg, _ in history: - if user_msg != '-': - html += f"
    {user_msg}
    " - if resp_msg != '-': - html += f"
    {resp_msg}
    " - html += "
    " - return html, history - -# Download checkpoint: - -checkpoint = "avorozhko/ruDialoGpt3-medium-finetuned-context" -tokenizer = AutoTokenizer.from_pretrained(checkpoint) -model = AutoModelForCausalLM.from_pretrained(checkpoint) -model = model.eval() - -# Gradio -title = "Чат-бот для поднятия настроения" -description = """ -Данный бот постарается поднять вам настроение, так как он знает 26700 анекдотов. -Но чувство юмора у него весьма специфичное. -Бот не знает матерных слов и откровенных пошлостей, но кто такой Вовочка и Поручик Ржевский знает ) - """ -article = "

    Бот на основе дообученной GPT-3

    " - -iface = gr.Interface(fn=chat_function, - inputs=[gr.inputs.Textbox(lines=3, placeholder="Что вы хотите сказать боту..."), "state"], - outputs=["html", "state"], - title=title, description=description, article=article, - theme='dark-grass', - css= """ - .chatbox {display:flex;flex-direction:column} - .user_msg, .resp_msg {padding:4px;margin-bottom:4px;border-radius:4px;width:80%} - .user_msg {background-color:#1e4282;color:white;align-self:start} - .resp_msg {background-color:#552a2a;align-self:self-end} - .panels.unaligned {flex-direction: column !important;align-items: initial!important;} - .panels.unaligned :last-child {order: -1 !important;} - """, - allow_screenshot=False, - allow_flagging='never' - ) - -if __name__ == "__main__": - iface.launch(debug=True, share=False) - \ No newline at end of file diff --git a/spaces/awacke1/GLB.Loader.HTML5/README.md b/spaces/awacke1/GLB.Loader.HTML5/README.md deleted file mode 100644 index 66b2b5ff3817c78b06b5bd61825111043b9dcf78..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GLB.Loader.HTML5/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: GLB.Loader.HTML5 -emoji: 📊 -colorFrom: indigo -colorTo: gray -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Knowledge-graphs/app.py b/spaces/awacke1/Knowledge-graphs/app.py deleted file mode 100644 index 1e21bbd7b299083a0920e1a64efa2ec452e28adb..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Knowledge-graphs/app.py +++ /dev/null @@ -1,256 +0,0 @@ -from logging import disable -from pkg_resources import EggMetadata -import streamlit as st -import streamlit.components.v1 as components -import networkx as nx -import matplotlib.pyplot as plt -from pyvis.network import Network -from streamlit.state.session_state import SessionState -from streamlit.type_util import Key -import rebel -import wikipedia -from utils import clip_text -from datetime import datetime as dt -import os - -MAX_TOPICS = 3 - -wiki_state_variables = { - 'has_run_wiki':False, - 'wiki_suggestions': [], - 'wiki_text' : [], - 'nodes':[], - "topics":[], - "html_wiki":"" -} - -free_text_state_variables = { - 'has_run_free':False, - "html_free":"" - -} - -BUTTON_COLUMS = 4 - -def wiki_init_state_variables(): - for k in free_text_state_variables.keys(): - if k in st.session_state: - del st.session_state[k] - - for k, v in wiki_state_variables.items(): - if k not in st.session_state: - st.session_state[k] = v - -def wiki_generate_graph(): - st.session_state["GRAPH_FILENAME"] = str(dt.now().timestamp()*1000) + ".html" - - if 'wiki_text' not in st.session_state: - return - if len(st.session_state['wiki_text']) == 0: - st.error("please enter a topic and select a wiki page first") - return - with st.spinner(text="Generating graph..."): - texts = st.session_state['wiki_text'] - st.session_state['nodes'] = [] - nodes = rebel.generate_knowledge_graph(texts, st.session_state["GRAPH_FILENAME"]) - HtmlFile = open(st.session_state["GRAPH_FILENAME"], 'r', encoding='utf-8') - source_code = HtmlFile.read() - st.session_state["html_wiki"] = source_code - os.remove(st.session_state["GRAPH_FILENAME"]) - for n in nodes: - n = n.lower() - if n not in st.session_state['topics']: - possible_topics = wikipedia.search(n, results = 2) - st.session_state['nodes'].extend(possible_topics) - st.session_state['nodes'] = list(set(st.session_state['nodes'])) - st.session_state['has_run_wiki'] = True - 
st.success('Done!') - -def wiki_show_suggestion(): - st.session_state['wiki_suggestions'] = [] - with st.spinner(text="fetching wiki topics..."): - if st.session_state['input_method'] == "wikipedia": - text = st.session_state.text - if (text is not None) and (text != ""): - subjects = text.split(",")[:MAX_TOPICS] - for subj in subjects: - st.session_state['wiki_suggestions'] += wikipedia.search(subj, results = 3) - -def wiki_show_text(page_title): - with st.spinner(text="fetching wiki page..."): - try: - page = wikipedia.page(title=page_title, auto_suggest=False) - st.session_state['wiki_text'].append(clip_text(page.summary)) - st.session_state['topics'].append(page_title.lower()) - st.session_state['wiki_suggestions'].remove(page_title) - - except wikipedia.DisambiguationError as e: - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['wiki_suggestions'].remove(page_title) - temp = st.session_state['wiki_suggestions'] + e.options[:3] - st.session_state['wiki_suggestions'] = list(set(temp)) - except wikipedia.WikipediaException: - st.session_state['wiki_suggestions'].remove(page_title) - -def wiki_add_text(term): - if len(st.session_state['wiki_text']) > MAX_TOPICS: - return - try: - page = wikipedia.page(title=term, auto_suggest=False) - extra_text = clip_text(page.summary) - - st.session_state['wiki_text'].append(extra_text) - st.session_state['topics'].append(term.lower()) - st.session_state['nodes'].remove(term) - - except wikipedia.DisambiguationError as e: - print(e) - with st.spinner(text="Woops, ambigious term, recalculating options..."): - st.session_state['nodes'].remove(term) - temp = st.session_state['nodes'] + e.options[:3] - st.session_state['nodes'] = list(set(temp)) - except wikipedia.WikipediaException as e: - print(e) - st.session_state['nodes'].remove(term) - -def wiki_reset_session(): - for k in wiki_state_variables: - del st.session_state[k] - -def free_reset_session(): - for k in free_text_state_variables: - del st.session_state[k] - -def free_text_generate(): - st.session_state["GRAPH_FILENAME"] = str(dt.now().timestamp()*1000) + ".html" - text = st.session_state['free_text'][0:100] - rebel.generate_knowledge_graph([text], st.session_state["GRAPH_FILENAME"]) - HtmlFile = open(st.session_state["GRAPH_FILENAME"], 'r', encoding='utf-8') - source_code = HtmlFile.read() - st.session_state["html_free"] = source_code - os.remove(st.session_state["GRAPH_FILENAME"]) - st.session_state['has_run_free'] = True - -def free_text_layout(): - st.text_area("Free text", key="free_text", height=5, value="Tardigrades, known colloquially as water bears or moss piglets, are a phylum of eight-legged segmented micro-animals.") - st.button("Generate", on_click=free_text_generate, key="free_text_generate") - -def free_test_init_state_variables(): - for k in wiki_state_variables.keys(): - if k in st.session_state: - del st.session_state[k] - - for k, v in free_text_state_variables.items(): - if k not in st.session_state: - st.session_state[k] = v - -st.title('RE:Belle') -st.markdown( -""" -### Building Beautiful Knowledge Graphs With REBEL -""") -st.selectbox( - 'input method', - ('wikipedia', 'free text'), key="input_method") - - -def show_wiki_hub_page(): - st.sidebar.button("Reset", on_click=wiki_reset_session, key="reset_key") - - st.sidebar.markdown( -""" -## How To Create a Graph: -- Enter wikipedia search terms, separated by comma's -- Choose one or more of the suggested topics (max 3) -- Click generate! 
-""" -) - cols = st.columns([8, 1]) - with cols[0]: - st.text_input("wikipedia search term", on_change=wiki_show_suggestion, key="text", value="graphs, are, awesome") - with cols[1]: - st.text('') - st.text('') - st.button("Search", on_click=wiki_show_suggestion, key="show_suggestion_key") - - if len(st.session_state['wiki_suggestions']) != 0: - num_buttons = len(st.session_state['wiki_suggestions']) - num_cols = num_buttons if 0 < num_buttons < BUTTON_COLUMS else BUTTON_COLUMS - columns = st.columns([1] * num_cols ) - for q in range(1 + num_buttons//num_cols): - for i, (c, s) in enumerate(zip(columns, st.session_state['wiki_suggestions'][q*num_cols: (q+1)*num_cols])): - with c: - st.button(s, on_click=wiki_show_text, args=(s,), key=str(i)+s+"wiki_suggestion") - - if len(st.session_state['wiki_text']) != 0: - for i, t in enumerate(st.session_state['wiki_text']): - new_expander = st.expander(label=t[:30] + "...", expanded=(i==0)) - with new_expander: - st.markdown(t) - - if len(st.session_state['wiki_text']) > 0: - st.button("Generate", on_click=wiki_generate_graph, key="gen_graph") - st.sidebar.markdown( - """ - ## How to expand the graph - - Click a button below the graph to expand that node - (Only nodes that have wiki pages will be expanded) - - Hit the Generate button again to expand your graph! - """ - ) - - if st.session_state['has_run_wiki']: - - components.html(st.session_state["html_wiki"], width=720, height=600) - num_buttons = len(st.session_state["nodes"]) - num_cols = num_buttons if 0 < num_buttons < BUTTON_COLUMS else BUTTON_COLUMS - columns = st.columns([1] * num_cols + [1]) - - for q in range(1 + num_buttons//num_cols): - for i, (c, s) in enumerate(zip(columns, st.session_state["nodes"][q*num_cols: (q+1)*num_cols])): - with c: - st.button(s, on_click=wiki_add_text, args=(s,), key=str(i)+s) - -def show_free_text_hub_page(): - st.sidebar.button("Reset", on_click=free_reset_session, key="free_reset_key") - st.sidebar.markdown( -""" -## How To Create a Graph: -- Enter a text you'd like to see as a graph. -- Click generate! -""" -) - - free_text_layout() - - if st.session_state['has_run_free']: - components.html(st.session_state["html_free"], width=720, height=600) - -if st.session_state['input_method'] == "wikipedia": - wiki_init_state_variables() - show_wiki_hub_page() -else: - free_test_init_state_variables() - show_free_text_hub_page() - - - -st.sidebar.markdown( -""" -## What This Is And Why We Built it - -This space shows how a transformer network can be used to convert *human* text into a computer-queryable format: a **knowledge graph**. Knowledge graphs are graphs where each node (or *vertex* if you're fancy) represent a concept/person/thing and each edge the link between those concepts. If you'd like to know more, you can read [this blogpost](https://www.ml6.eu/knowhow/knowledge-graphs-an-introduction-and-business-applications). - -Knowledge graphs aren't just cool to look at, they are an extremely versatile way of storing data, and are used in machine learning to perform tasks like fraud detection. You can read more about the applications of knowledge graphs in ML in [this blogpost](https://blog.ml6.eu/how-are-knowledge-graphs-and-machine-learning-related-ff6f5c1760b5). - -There is one problem though: building knowledge graphs from scratch is a time-consuming and tedious task, so it would be a lot easier if we could leverage machine learning to **create** them from existing texts. 
This demo shows how a model named **REBEL** has been trained to do just that: it reads summaries from Wikipedia (or any other text you input), and generates a graph containing the information it distills from the text. -""" -) - -st.sidebar.markdown( -""" -*Credits for the REBEL model go out to Pere-Lluís Huguet Cabot and Roberto Navigli. -The code can be found [here](https://github.com/Babelscape/rebel), -and the original paper [here](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)* -""" -) \ No newline at end of file diff --git a/spaces/awacke1/StreamlitAIPP1/README.md b/spaces/awacke1/StreamlitAIPP1/README.md deleted file mode 100644 index 1ca9490c3b4e2083a6bfa093faefb36921c10488..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitAIPP1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StreamlitAIPP1 -emoji: 🐢 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/logger/__init__.py b/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/logger/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/badayvedat/LLaVA/llava/model/llava_arch.py b/spaces/badayvedat/LLaVA/llava/model/llava_arch.py deleted file mode 100644 index fd538c93764347a496ba6cdb0859cd8ffcb02044..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/model/llava_arch.py +++ /dev/null @@ -1,248 +0,0 @@ -# Copyright 2023 Haotian Liu -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from abc import ABC, abstractmethod - -import torch -import torch.nn as nn - -from .multimodal_encoder.builder import build_vision_tower -from .multimodal_projector.builder import build_vision_projector - -from llava.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN - - -class LlavaMetaModel: - - def __init__(self, config): - super(LlavaMetaModel, self).__init__(config) - - if hasattr(config, "mm_vision_tower"): - self.vision_tower = build_vision_tower(config, delay_load=True) - self.mm_projector = build_vision_projector(config) - - def get_vision_tower(self): - vision_tower = getattr(self, 'vision_tower', None) - if type(vision_tower) is list: - vision_tower = vision_tower[0] - return vision_tower - - def initialize_vision_modules(self, model_args, fsdp=None): - vision_tower = model_args.vision_tower - mm_vision_select_layer = model_args.mm_vision_select_layer - mm_vision_select_feature = model_args.mm_vision_select_feature - pretrain_mm_mlp_adapter = model_args.pretrain_mm_mlp_adapter - - self.config.mm_vision_tower = vision_tower - - vision_tower = build_vision_tower(model_args) - - if fsdp is not None and len(fsdp) > 0: - self.vision_tower = [vision_tower] - else: - self.vision_tower = vision_tower - - self.config.use_mm_proj = True - self.config.mm_projector_type = getattr(model_args, 'mm_projector_type', 'linear') - self.config.mm_hidden_size = vision_tower.hidden_size - self.config.mm_vision_select_layer = mm_vision_select_layer - self.config.mm_vision_select_feature = mm_vision_select_feature - - self.mm_projector = build_vision_projector(self.config) - - if pretrain_mm_mlp_adapter is not None: - mm_projector_weights = torch.load(pretrain_mm_mlp_adapter, map_location='cpu') - def get_w(weights, keyword): - return {k.split(keyword + '.')[1]: v for k, v in weights.items() if keyword in k} - - self.mm_projector.load_state_dict(get_w(mm_projector_weights, 'mm_projector')) - - -class LlavaMetaForCausalLM(ABC): - - @abstractmethod - def get_model(self): - pass - - def get_vision_tower(self): - return self.get_model().get_vision_tower() - - def encode_images(self, images): - image_features = self.get_model().get_vision_tower()(images) - image_features = self.get_model().mm_projector(image_features) - return image_features - - def prepare_inputs_labels_for_multimodal( - self, input_ids, attention_mask, past_key_values, labels, images - ): - vision_tower = self.get_vision_tower() - if vision_tower is None or images is None or input_ids.shape[1] == 1: - if past_key_values is not None and vision_tower is not None and images is not None and input_ids.shape[1] == 1: - attention_mask = torch.ones((attention_mask.shape[0], past_key_values[-1][-1].shape[-2] + 1), dtype=attention_mask.dtype, device=attention_mask.device) - return input_ids, attention_mask, past_key_values, None, labels - - if type(images) is list or images.ndim == 5: - concat_images = torch.cat([image for image in images], dim=0) - image_features = self.encode_images(concat_images) - split_sizes = [image.shape[0] for image in images] - image_features = torch.split(image_features, split_sizes, dim=0) - image_features = [x.flatten(0, 1) for x in image_features] - else: - image_features = self.encode_images(images) - - new_input_embeds = [] - new_labels = [] if labels is not None else None - cur_image_idx = 0 - for batch_idx, cur_input_ids in enumerate(input_ids): - if (cur_input_ids == IMAGE_TOKEN_INDEX).sum() == 0: - # multimodal LLM, but the current sample 
is not multimodal - # FIXME: this is a hacky fix, for deepspeed zero3 to work - half_len = cur_input_ids.shape[0] // 2 - cur_image_features = image_features[cur_image_idx] - cur_input_embeds_1 = self.get_model().embed_tokens(cur_input_ids[:half_len]) - cur_input_embeds_2 = self.get_model().embed_tokens(cur_input_ids[half_len:]) - cur_input_embeds = torch.cat([cur_input_embeds_1, cur_image_features[0:0], cur_input_embeds_2], dim=0) - new_input_embeds.append(cur_input_embeds) - if labels is not None: - new_labels.append(labels[batch_idx]) - cur_image_idx += 1 - continue - image_token_indices = torch.where(cur_input_ids == IMAGE_TOKEN_INDEX)[0] - cur_new_input_embeds = [] - if labels is not None: - cur_labels = labels[batch_idx] - cur_new_labels = [] - assert cur_labels.shape == cur_input_ids.shape - while image_token_indices.numel() > 0: - cur_image_features = image_features[cur_image_idx] - image_token_start = image_token_indices[0] - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[:image_token_start-1]).detach()) - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[image_token_start-1:image_token_start])) - cur_new_input_embeds.append(cur_image_features) - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[image_token_start+1:image_token_start+2])) - if labels is not None: - cur_new_labels.append(cur_labels[:image_token_start]) - cur_new_labels.append(torch.full((cur_image_features.shape[0],), IGNORE_INDEX, device=labels.device, dtype=labels.dtype)) - cur_new_labels.append(cur_labels[image_token_start:image_token_start+1]) - cur_labels = cur_labels[image_token_start+2:] - else: - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[:image_token_start])) - cur_new_input_embeds.append(cur_image_features) - if labels is not None: - cur_new_labels.append(cur_labels[:image_token_start]) - cur_new_labels.append(torch.full((cur_image_features.shape[0],), IGNORE_INDEX, device=labels.device, dtype=labels.dtype)) - cur_labels = cur_labels[image_token_start+1:] - cur_image_idx += 1 - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_input_ids = cur_input_ids[image_token_start+2:] - else: - cur_input_ids = cur_input_ids[image_token_start+1:] - image_token_indices = torch.where(cur_input_ids == IMAGE_TOKEN_INDEX)[0] - if cur_input_ids.numel() > 0: - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids).detach()) - else: - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids)) - if labels is not None: - cur_new_labels.append(cur_labels) - cur_new_input_embeds = [x.to(device=self.device) for x in cur_new_input_embeds] - cur_new_input_embeds = torch.cat(cur_new_input_embeds, dim=0) - new_input_embeds.append(cur_new_input_embeds) - if labels is not None: - cur_new_labels = torch.cat(cur_new_labels, dim=0) - new_labels.append(cur_new_labels) - - if any(x.shape != new_input_embeds[0].shape for x in new_input_embeds): - max_len = max(x.shape[0] for x in new_input_embeds) - - new_input_embeds_align = [] - for cur_new_embed in new_input_embeds: - cur_new_embed = torch.cat((cur_new_embed, torch.zeros((max_len - cur_new_embed.shape[0], cur_new_embed.shape[1]), dtype=cur_new_embed.dtype, 
device=cur_new_embed.device)), dim=0) - new_input_embeds_align.append(cur_new_embed) - new_input_embeds = torch.stack(new_input_embeds_align, dim=0) - - if labels is not None: - new_labels_align = [] - _new_labels = new_labels - for cur_new_label in new_labels: - cur_new_label = torch.cat((cur_new_label, torch.full((max_len - cur_new_label.shape[0],), IGNORE_INDEX, dtype=cur_new_label.dtype, device=cur_new_label.device)), dim=0) - new_labels_align.append(cur_new_label) - new_labels = torch.stack(new_labels_align, dim=0) - - if attention_mask is not None: - new_attention_mask = [] - for cur_attention_mask, cur_new_labels, cur_new_labels_align in zip(attention_mask, _new_labels, new_labels): - new_attn_mask_pad_left = torch.full((cur_new_labels.shape[0] - labels.shape[1],), True, dtype=attention_mask.dtype, device=attention_mask.device) - new_attn_mask_pad_right = torch.full((cur_new_labels_align.shape[0] - cur_new_labels.shape[0],), False, dtype=attention_mask.dtype, device=attention_mask.device) - cur_new_attention_mask = torch.cat((new_attn_mask_pad_left, cur_attention_mask, new_attn_mask_pad_right), dim=0) - new_attention_mask.append(cur_new_attention_mask) - attention_mask = torch.stack(new_attention_mask, dim=0) - assert attention_mask.shape == new_labels.shape - else: - new_input_embeds = torch.stack(new_input_embeds, dim=0) - if labels is not None: - new_labels = torch.stack(new_labels, dim=0) - - if attention_mask is not None: - new_attn_mask_pad_left = torch.full((attention_mask.shape[0], new_input_embeds.shape[1] - input_ids.shape[1]), True, dtype=attention_mask.dtype, device=attention_mask.device) - attention_mask = torch.cat((new_attn_mask_pad_left, attention_mask), dim=1) - assert attention_mask.shape == new_input_embeds.shape[:2] - - return None, attention_mask, past_key_values, new_input_embeds, new_labels - - def initialize_vision_tokenizer(self, model_args, tokenizer): - if model_args.mm_use_im_patch_token: - tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True) - self.resize_token_embeddings(len(tokenizer)) - - if model_args.mm_use_im_start_end: - num_new_tokens = tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True) - self.resize_token_embeddings(len(tokenizer)) - - if num_new_tokens > 0: - input_embeddings = self.get_input_embeddings().weight.data - output_embeddings = self.get_output_embeddings().weight.data - - input_embeddings_avg = input_embeddings[:-num_new_tokens].mean( - dim=0, keepdim=True) - output_embeddings_avg = output_embeddings[:-num_new_tokens].mean( - dim=0, keepdim=True) - - input_embeddings[-num_new_tokens:] = input_embeddings_avg - output_embeddings[-num_new_tokens:] = output_embeddings_avg - - if model_args.tune_mm_mlp_adapter: - for p in self.get_input_embeddings().parameters(): - p.requires_grad = True - for p in self.get_output_embeddings().parameters(): - p.requires_grad = False - - if model_args.pretrain_mm_mlp_adapter: - mm_projector_weights = torch.load(model_args.pretrain_mm_mlp_adapter, map_location='cpu') - embed_tokens_weight = mm_projector_weights['model.embed_tokens.weight'] - assert num_new_tokens == 2 - if input_embeddings.shape == embed_tokens_weight.shape: - input_embeddings[-num_new_tokens:] = embed_tokens_weight[-num_new_tokens:] - elif embed_tokens_weight.shape[0] == num_new_tokens: - input_embeddings[-num_new_tokens:] = embed_tokens_weight - else: - raise ValueError(f"Unexpected embed_tokens_weight shape. Pretrained: {embed_tokens_weight.shape}. 
Current: {input_embeddings.shape}. Numer of new tokens: {num_new_tokens}.") - elif model_args.mm_use_im_patch_token: - if model_args.tune_mm_mlp_adapter: - for p in self.get_input_embeddings().parameters(): - p.requires_grad = False - for p in self.get_output_embeddings().parameters(): - p.requires_grad = False diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/ObjectLoader.js b/spaces/banana-projects/web3d/node_modules/three/src/loaders/ObjectLoader.js deleted file mode 100644 index 53594b10244e12a5b9267be35dbcc7836d10ac0d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/ObjectLoader.js +++ /dev/null @@ -1,1023 +0,0 @@ -import { - UVMapping, - CubeReflectionMapping, - CubeRefractionMapping, - EquirectangularReflectionMapping, - EquirectangularRefractionMapping, - SphericalReflectionMapping, - CubeUVReflectionMapping, - CubeUVRefractionMapping, - - RepeatWrapping, - ClampToEdgeWrapping, - MirroredRepeatWrapping, - - NearestFilter, - NearestMipMapNearestFilter, - NearestMipMapLinearFilter, - LinearFilter, - LinearMipMapNearestFilter, - LinearMipMapLinearFilter -} from '../constants.js'; -import { Color } from '../math/Color.js'; -import { Object3D } from '../core/Object3D.js'; -import { Group } from '../objects/Group.js'; -import { Sprite } from '../objects/Sprite.js'; -import { Points } from '../objects/Points.js'; -import { Line } from '../objects/Line.js'; -import { LineLoop } from '../objects/LineLoop.js'; -import { LineSegments } from '../objects/LineSegments.js'; -import { LOD } from '../objects/LOD.js'; -import { Mesh } from '../objects/Mesh.js'; -import { SkinnedMesh } from '../objects/SkinnedMesh.js'; -import { Shape } from '../extras/core/Shape.js'; -import { Fog } from '../scenes/Fog.js'; -import { FogExp2 } from '../scenes/FogExp2.js'; -import { HemisphereLight } from '../lights/HemisphereLight.js'; -import { SpotLight } from '../lights/SpotLight.js'; -import { PointLight } from '../lights/PointLight.js'; -import { DirectionalLight } from '../lights/DirectionalLight.js'; -import { AmbientLight } from '../lights/AmbientLight.js'; -import { RectAreaLight } from '../lights/RectAreaLight.js'; -import { OrthographicCamera } from '../cameras/OrthographicCamera.js'; -import { PerspectiveCamera } from '../cameras/PerspectiveCamera.js'; -import { Scene } from '../scenes/Scene.js'; -import { CubeTexture } from '../textures/CubeTexture.js'; -import { Texture } from '../textures/Texture.js'; -import { ImageLoader } from './ImageLoader.js'; -import { LoadingManager, DefaultLoadingManager } from './LoadingManager.js'; -import { AnimationClip } from '../animation/AnimationClip.js'; -import { MaterialLoader } from './MaterialLoader.js'; -import { LoaderUtils } from './LoaderUtils.js'; -import { BufferGeometryLoader } from './BufferGeometryLoader.js'; -import { FileLoader } from './FileLoader.js'; -import * as Geometries from '../geometries/Geometries.js'; -import * as Curves from '../extras/curves/Curves.js'; - -/** - * @author mrdoob / http://mrdoob.com/ - */ - -function ObjectLoader( manager ) { - - this.manager = ( manager !== undefined ) ? manager : DefaultLoadingManager; - this.resourcePath = ''; - -} - -Object.assign( ObjectLoader.prototype, { - - crossOrigin: 'anonymous', - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var path = ( this.path === undefined ) ? 
LoaderUtils.extractUrlBase( url ) : this.path; - this.resourcePath = this.resourcePath || path; - - var loader = new FileLoader( scope.manager ); - loader.setPath( this.path ); - loader.load( url, function ( text ) { - - var json = null; - - try { - - json = JSON.parse( text ); - - } catch ( error ) { - - if ( onError !== undefined ) onError( error ); - - console.error( 'THREE:ObjectLoader: Can\'t parse ' + url + '.', error.message ); - - return; - - } - - var metadata = json.metadata; - - if ( metadata === undefined || metadata.type === undefined || metadata.type.toLowerCase() === 'geometry' ) { - - console.error( 'THREE.ObjectLoader: Can\'t load ' + url ); - return; - - } - - scope.parse( json, onLoad ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - setResourcePath: function ( value ) { - - this.resourcePath = value; - return this; - - }, - - setCrossOrigin: function ( value ) { - - this.crossOrigin = value; - return this; - - }, - - parse: function ( json, onLoad ) { - - var shapes = this.parseShape( json.shapes ); - var geometries = this.parseGeometries( json.geometries, shapes ); - - var images = this.parseImages( json.images, function () { - - if ( onLoad !== undefined ) onLoad( object ); - - } ); - - var textures = this.parseTextures( json.textures, images ); - var materials = this.parseMaterials( json.materials, textures ); - - var object = this.parseObject( json.object, geometries, materials ); - - if ( json.animations ) { - - object.animations = this.parseAnimations( json.animations ); - - } - - if ( json.images === undefined || json.images.length === 0 ) { - - if ( onLoad !== undefined ) onLoad( object ); - - } - - return object; - - }, - - parseShape: function ( json ) { - - var shapes = {}; - - if ( json !== undefined ) { - - for ( var i = 0, l = json.length; i < l; i ++ ) { - - var shape = new Shape().fromJSON( json[ i ] ); - - shapes[ shape.uuid ] = shape; - - } - - } - - return shapes; - - }, - - parseGeometries: function ( json, shapes ) { - - var geometries = {}; - - if ( json !== undefined ) { - - var bufferGeometryLoader = new BufferGeometryLoader(); - - for ( var i = 0, l = json.length; i < l; i ++ ) { - - var geometry; - var data = json[ i ]; - - switch ( data.type ) { - - case 'PlaneGeometry': - case 'PlaneBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.width, - data.height, - data.widthSegments, - data.heightSegments - ); - - break; - - case 'BoxGeometry': - case 'BoxBufferGeometry': - case 'CubeGeometry': // backwards compatible - - geometry = new Geometries[ data.type ]( - data.width, - data.height, - data.depth, - data.widthSegments, - data.heightSegments, - data.depthSegments - ); - - break; - - case 'CircleGeometry': - case 'CircleBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radius, - data.segments, - data.thetaStart, - data.thetaLength - ); - - break; - - case 'CylinderGeometry': - case 'CylinderBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radiusTop, - data.radiusBottom, - data.height, - data.radialSegments, - data.heightSegments, - data.openEnded, - data.thetaStart, - data.thetaLength - ); - - break; - - case 'ConeGeometry': - case 'ConeBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radius, - data.height, - data.radialSegments, - data.heightSegments, - data.openEnded, - data.thetaStart, - data.thetaLength - ); - - break; - - case 'SphereGeometry': - case 'SphereBufferGeometry': - - geometry = new 
Geometries[ data.type ]( - data.radius, - data.widthSegments, - data.heightSegments, - data.phiStart, - data.phiLength, - data.thetaStart, - data.thetaLength - ); - - break; - - case 'DodecahedronGeometry': - case 'DodecahedronBufferGeometry': - case 'IcosahedronGeometry': - case 'IcosahedronBufferGeometry': - case 'OctahedronGeometry': - case 'OctahedronBufferGeometry': - case 'TetrahedronGeometry': - case 'TetrahedronBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radius, - data.detail - ); - - break; - - case 'RingGeometry': - case 'RingBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.innerRadius, - data.outerRadius, - data.thetaSegments, - data.phiSegments, - data.thetaStart, - data.thetaLength - ); - - break; - - case 'TorusGeometry': - case 'TorusBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radius, - data.tube, - data.radialSegments, - data.tubularSegments, - data.arc - ); - - break; - - case 'TorusKnotGeometry': - case 'TorusKnotBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.radius, - data.tube, - data.tubularSegments, - data.radialSegments, - data.p, - data.q - ); - - break; - - case 'TubeGeometry': - case 'TubeBufferGeometry': - - // This only works for built-in curves (e.g. CatmullRomCurve3). - // User defined curves or instances of CurvePath will not be deserialized. - geometry = new Geometries[ data.type ]( - new Curves[ data.path.type ]().fromJSON( data.path ), - data.tubularSegments, - data.radius, - data.radialSegments, - data.closed - ); - - break; - - case 'LatheGeometry': - case 'LatheBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.points, - data.segments, - data.phiStart, - data.phiLength - ); - - break; - - case 'PolyhedronGeometry': - case 'PolyhedronBufferGeometry': - - geometry = new Geometries[ data.type ]( - data.vertices, - data.indices, - data.radius, - data.details - ); - - break; - - case 'ShapeGeometry': - case 'ShapeBufferGeometry': - - var geometryShapes = []; - - for ( var j = 0, jl = data.shapes.length; j < jl; j ++ ) { - - var shape = shapes[ data.shapes[ j ] ]; - - geometryShapes.push( shape ); - - } - - geometry = new Geometries[ data.type ]( - geometryShapes, - data.curveSegments - ); - - break; - - - case 'ExtrudeGeometry': - case 'ExtrudeBufferGeometry': - - var geometryShapes = []; - - for ( var j = 0, jl = data.shapes.length; j < jl; j ++ ) { - - var shape = shapes[ data.shapes[ j ] ]; - - geometryShapes.push( shape ); - - } - - var extrudePath = data.options.extrudePath; - - if ( extrudePath !== undefined ) { - - data.options.extrudePath = new Curves[ extrudePath.type ]().fromJSON( extrudePath ); - - } - - geometry = new Geometries[ data.type ]( - geometryShapes, - data.options - ); - - break; - - case 'BufferGeometry': - - geometry = bufferGeometryLoader.parse( data ); - - break; - - case 'Geometry': - - if ( 'THREE' in window && 'LegacyJSONLoader' in THREE ) { - - var geometryLoader = new THREE.LegacyJSONLoader(); - geometry = geometryLoader.parse( data, this.resourcePath ).geometry; - - - } else { - - console.error( 'THREE.ObjectLoader: You have to import LegacyJSONLoader in order load geometry data of type "Geometry".' 
); - - } - - break; - - default: - - console.warn( 'THREE.ObjectLoader: Unsupported geometry type "' + data.type + '"' ); - - continue; - - } - - geometry.uuid = data.uuid; - - if ( data.name !== undefined ) geometry.name = data.name; - if ( geometry.isBufferGeometry === true && data.userData !== undefined ) geometry.userData = data.userData; - - geometries[ data.uuid ] = geometry; - - } - - } - - return geometries; - - }, - - parseMaterials: function ( json, textures ) { - - var cache = {}; // MultiMaterial - var materials = {}; - - if ( json !== undefined ) { - - var loader = new MaterialLoader(); - loader.setTextures( textures ); - - for ( var i = 0, l = json.length; i < l; i ++ ) { - - var data = json[ i ]; - - if ( data.type === 'MultiMaterial' ) { - - // Deprecated - - var array = []; - - for ( var j = 0; j < data.materials.length; j ++ ) { - - var material = data.materials[ j ]; - - if ( cache[ material.uuid ] === undefined ) { - - cache[ material.uuid ] = loader.parse( material ); - - } - - array.push( cache[ material.uuid ] ); - - } - - materials[ data.uuid ] = array; - - } else { - - if ( cache[ data.uuid ] === undefined ) { - - cache[ data.uuid ] = loader.parse( data ); - - } - - materials[ data.uuid ] = cache[ data.uuid ]; - - } - - } - - } - - return materials; - - }, - - parseAnimations: function ( json ) { - - var animations = []; - - for ( var i = 0; i < json.length; i ++ ) { - - var data = json[ i ]; - - var clip = AnimationClip.parse( data ); - - if ( data.uuid !== undefined ) clip.uuid = data.uuid; - - animations.push( clip ); - - } - - return animations; - - }, - - parseImages: function ( json, onLoad ) { - - var scope = this; - var images = {}; - - function loadImage( url ) { - - scope.manager.itemStart( url ); - - return loader.load( url, function () { - - scope.manager.itemEnd( url ); - - }, undefined, function () { - - scope.manager.itemError( url ); - scope.manager.itemEnd( url ); - - } ); - - } - - if ( json !== undefined && json.length > 0 ) { - - var manager = new LoadingManager( onLoad ); - - var loader = new ImageLoader( manager ); - loader.setCrossOrigin( this.crossOrigin ); - - for ( var i = 0, il = json.length; i < il; i ++ ) { - - var image = json[ i ]; - var url = image.url; - - if ( Array.isArray( url ) ) { - - // load array of images e.g CubeTexture - - images[ image.uuid ] = []; - - for ( var j = 0, jl = url.length; j < jl; j ++ ) { - - var currentUrl = url[ j ]; - - var path = /^(\/\/)|([a-z]+:(\/\/)?)/i.test( currentUrl ) ? currentUrl : scope.resourcePath + currentUrl; - - images[ image.uuid ].push( loadImage( path ) ); - - } - - } else { - - // load single image - - var path = /^(\/\/)|([a-z]+:(\/\/)?)/i.test( image.url ) ? 
image.url : scope.resourcePath + image.url; - - images[ image.uuid ] = loadImage( path ); - - } - - } - - } - - return images; - - }, - - parseTextures: function ( json, images ) { - - function parseConstant( value, type ) { - - if ( typeof value === 'number' ) return value; - - console.warn( 'THREE.ObjectLoader.parseTexture: Constant should be in numeric form.', value ); - - return type[ value ]; - - } - - var textures = {}; - - if ( json !== undefined ) { - - for ( var i = 0, l = json.length; i < l; i ++ ) { - - var data = json[ i ]; - - if ( data.image === undefined ) { - - console.warn( 'THREE.ObjectLoader: No "image" specified for', data.uuid ); - - } - - if ( images[ data.image ] === undefined ) { - - console.warn( 'THREE.ObjectLoader: Undefined image', data.image ); - - } - - var texture; - - if ( Array.isArray( images[ data.image ] ) ) { - - texture = new CubeTexture( images[ data.image ] ); - - } else { - - texture = new Texture( images[ data.image ] ); - - } - - texture.needsUpdate = true; - - texture.uuid = data.uuid; - - if ( data.name !== undefined ) texture.name = data.name; - - if ( data.mapping !== undefined ) texture.mapping = parseConstant( data.mapping, TEXTURE_MAPPING ); - - if ( data.offset !== undefined ) texture.offset.fromArray( data.offset ); - if ( data.repeat !== undefined ) texture.repeat.fromArray( data.repeat ); - if ( data.center !== undefined ) texture.center.fromArray( data.center ); - if ( data.rotation !== undefined ) texture.rotation = data.rotation; - - if ( data.wrap !== undefined ) { - - texture.wrapS = parseConstant( data.wrap[ 0 ], TEXTURE_WRAPPING ); - texture.wrapT = parseConstant( data.wrap[ 1 ], TEXTURE_WRAPPING ); - - } - - if ( data.format !== undefined ) texture.format = data.format; - if ( data.type !== undefined ) texture.type = data.type; - if ( data.encoding !== undefined ) texture.encoding = data.encoding; - - if ( data.minFilter !== undefined ) texture.minFilter = parseConstant( data.minFilter, TEXTURE_FILTER ); - if ( data.magFilter !== undefined ) texture.magFilter = parseConstant( data.magFilter, TEXTURE_FILTER ); - if ( data.anisotropy !== undefined ) texture.anisotropy = data.anisotropy; - - if ( data.flipY !== undefined ) texture.flipY = data.flipY; - - if ( data.premultiplyAlpha !== undefined ) texture.premultiplyAlpha = data.premultiplyAlpha; - if ( data.unpackAlignment !== undefined ) texture.unpackAlignment = data.unpackAlignment; - - textures[ data.uuid ] = texture; - - } - - } - - return textures; - - }, - - parseObject: function ( data, geometries, materials ) { - - var object; - - function getGeometry( name ) { - - if ( geometries[ name ] === undefined ) { - - console.warn( 'THREE.ObjectLoader: Undefined geometry', name ); - - } - - return geometries[ name ]; - - } - - function getMaterial( name ) { - - if ( name === undefined ) return undefined; - - if ( Array.isArray( name ) ) { - - var array = []; - - for ( var i = 0, l = name.length; i < l; i ++ ) { - - var uuid = name[ i ]; - - if ( materials[ uuid ] === undefined ) { - - console.warn( 'THREE.ObjectLoader: Undefined material', uuid ); - - } - - array.push( materials[ uuid ] ); - - } - - return array; - - } - - if ( materials[ name ] === undefined ) { - - console.warn( 'THREE.ObjectLoader: Undefined material', name ); - - } - - return materials[ name ]; - - } - - switch ( data.type ) { - - case 'Scene': - - object = new Scene(); - - if ( data.background !== undefined ) { - - if ( Number.isInteger( data.background ) ) { - - object.background = new Color( data.background 
); - - } - - } - - if ( data.fog !== undefined ) { - - if ( data.fog.type === 'Fog' ) { - - object.fog = new Fog( data.fog.color, data.fog.near, data.fog.far ); - - } else if ( data.fog.type === 'FogExp2' ) { - - object.fog = new FogExp2( data.fog.color, data.fog.density ); - - } - - } - - break; - - case 'PerspectiveCamera': - - object = new PerspectiveCamera( data.fov, data.aspect, data.near, data.far ); - - if ( data.focus !== undefined ) object.focus = data.focus; - if ( data.zoom !== undefined ) object.zoom = data.zoom; - if ( data.filmGauge !== undefined ) object.filmGauge = data.filmGauge; - if ( data.filmOffset !== undefined ) object.filmOffset = data.filmOffset; - if ( data.view !== undefined ) object.view = Object.assign( {}, data.view ); - - break; - - case 'OrthographicCamera': - - object = new OrthographicCamera( data.left, data.right, data.top, data.bottom, data.near, data.far ); - - if ( data.zoom !== undefined ) object.zoom = data.zoom; - if ( data.view !== undefined ) object.view = Object.assign( {}, data.view ); - - break; - - case 'AmbientLight': - - object = new AmbientLight( data.color, data.intensity ); - - break; - - case 'DirectionalLight': - - object = new DirectionalLight( data.color, data.intensity ); - - break; - - case 'PointLight': - - object = new PointLight( data.color, data.intensity, data.distance, data.decay ); - - break; - - case 'RectAreaLight': - - object = new RectAreaLight( data.color, data.intensity, data.width, data.height ); - - break; - - case 'SpotLight': - - object = new SpotLight( data.color, data.intensity, data.distance, data.angle, data.penumbra, data.decay ); - - break; - - case 'HemisphereLight': - - object = new HemisphereLight( data.color, data.groundColor, data.intensity ); - - break; - - case 'SkinnedMesh': - - console.warn( 'THREE.ObjectLoader.parseObject() does not support SkinnedMesh yet.' 
); - - case 'Mesh': - - var geometry = getGeometry( data.geometry ); - var material = getMaterial( data.material ); - - if ( geometry.bones && geometry.bones.length > 0 ) { - - object = new SkinnedMesh( geometry, material ); - - } else { - - object = new Mesh( geometry, material ); - - } - - if ( data.drawMode !== undefined ) object.setDrawMode( data.drawMode ); - - break; - - case 'LOD': - - object = new LOD(); - - break; - - case 'Line': - - object = new Line( getGeometry( data.geometry ), getMaterial( data.material ), data.mode ); - - break; - - case 'LineLoop': - - object = new LineLoop( getGeometry( data.geometry ), getMaterial( data.material ) ); - - break; - - case 'LineSegments': - - object = new LineSegments( getGeometry( data.geometry ), getMaterial( data.material ) ); - - break; - - case 'PointCloud': - case 'Points': - - object = new Points( getGeometry( data.geometry ), getMaterial( data.material ) ); - - break; - - case 'Sprite': - - object = new Sprite( getMaterial( data.material ) ); - - break; - - case 'Group': - - object = new Group(); - - break; - - default: - - object = new Object3D(); - - } - - object.uuid = data.uuid; - - if ( data.name !== undefined ) object.name = data.name; - - if ( data.matrix !== undefined ) { - - object.matrix.fromArray( data.matrix ); - - if ( data.matrixAutoUpdate !== undefined ) object.matrixAutoUpdate = data.matrixAutoUpdate; - if ( object.matrixAutoUpdate ) object.matrix.decompose( object.position, object.quaternion, object.scale ); - - } else { - - if ( data.position !== undefined ) object.position.fromArray( data.position ); - if ( data.rotation !== undefined ) object.rotation.fromArray( data.rotation ); - if ( data.quaternion !== undefined ) object.quaternion.fromArray( data.quaternion ); - if ( data.scale !== undefined ) object.scale.fromArray( data.scale ); - - } - - if ( data.castShadow !== undefined ) object.castShadow = data.castShadow; - if ( data.receiveShadow !== undefined ) object.receiveShadow = data.receiveShadow; - - if ( data.shadow ) { - - if ( data.shadow.bias !== undefined ) object.shadow.bias = data.shadow.bias; - if ( data.shadow.radius !== undefined ) object.shadow.radius = data.shadow.radius; - if ( data.shadow.mapSize !== undefined ) object.shadow.mapSize.fromArray( data.shadow.mapSize ); - if ( data.shadow.camera !== undefined ) object.shadow.camera = this.parseObject( data.shadow.camera ); - - } - - if ( data.visible !== undefined ) object.visible = data.visible; - if ( data.frustumCulled !== undefined ) object.frustumCulled = data.frustumCulled; - if ( data.renderOrder !== undefined ) object.renderOrder = data.renderOrder; - if ( data.userData !== undefined ) object.userData = data.userData; - if ( data.layers !== undefined ) object.layers.mask = data.layers; - - if ( data.children !== undefined ) { - - var children = data.children; - - for ( var i = 0; i < children.length; i ++ ) { - - object.add( this.parseObject( children[ i ], geometries, materials ) ); - - } - - } - - if ( data.type === 'LOD' ) { - - var levels = data.levels; - - for ( var l = 0; l < levels.length; l ++ ) { - - var level = levels[ l ]; - var child = object.getObjectByProperty( 'uuid', level.object ); - - if ( child !== undefined ) { - - object.addLevel( child, level.distance ); - - } - - } - - } - - return object; - - } - -} ); - -var TEXTURE_MAPPING = { - UVMapping: UVMapping, - CubeReflectionMapping: CubeReflectionMapping, - CubeRefractionMapping: CubeRefractionMapping, - EquirectangularReflectionMapping: EquirectangularReflectionMapping, - 
EquirectangularRefractionMapping: EquirectangularRefractionMapping, - SphericalReflectionMapping: SphericalReflectionMapping, - CubeUVReflectionMapping: CubeUVReflectionMapping, - CubeUVRefractionMapping: CubeUVRefractionMapping -}; - -var TEXTURE_WRAPPING = { - RepeatWrapping: RepeatWrapping, - ClampToEdgeWrapping: ClampToEdgeWrapping, - MirroredRepeatWrapping: MirroredRepeatWrapping -}; - -var TEXTURE_FILTER = { - NearestFilter: NearestFilter, - NearestMipMapNearestFilter: NearestMipMapNearestFilter, - NearestMipMapLinearFilter: NearestMipMapLinearFilter, - LinearFilter: LinearFilter, - LinearMipMapNearestFilter: LinearMipMapNearestFilter, - LinearMipMapLinearFilter: LinearMipMapLinearFilter -}; - - -export { ObjectLoader }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.d.ts deleted file mode 100644 index 837a5c561808da7e89bf922bb037061496cf055d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.d.ts +++ /dev/null @@ -1,43 +0,0 @@ -import { Plane } from './Plane'; -import { Matrix4 } from './Matrix4'; -import { Object3D } from './../core/Object3D'; -import { Sprite } from './../objects/Sprite'; -import { Sphere } from './Sphere'; -import { Box3 } from './Box3'; -import { Vector3 } from './Vector3'; - -/** - * Frustums are used to determine what is inside the camera's field of view. They help speed up the rendering process. - */ -export class Frustum { - constructor( - p0?: Plane, - p1?: Plane, - p2?: Plane, - p3?: Plane, - p4?: Plane, - p5?: Plane - ); - - /** - * Array of 6 vectors. - */ - planes: Plane[]; - - set( - p0?: number, - p1?: number, - p2?: number, - p3?: number, - p4?: number, - p5?: number - ): Frustum; - clone(): this; - copy(frustum: Frustum): this; - setFromMatrix(m: Matrix4): Frustum; - intersectsObject(object: Object3D): boolean; - intersectsObject(sprite: Sprite): boolean; - intersectsSphere(sphere: Sphere): boolean; - intersectsBox(box: Box3): boolean; - containsPoint(point: Vector3): boolean; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLAnimation.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLAnimation.js deleted file mode 100644 index 3943d991c8253c139fbc2463c85593b1bd057e0f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLAnimation.js +++ /dev/null @@ -1,56 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -function WebGLAnimation() { - - var context = null; - var isAnimating = false; - var animationLoop = null; - - function onAnimationFrame( time, frame ) { - - if ( isAnimating === false ) return; - - animationLoop( time, frame ); - - context.requestAnimationFrame( onAnimationFrame ); - - } - - return { - - start: function () { - - if ( isAnimating === true ) return; - if ( animationLoop === null ) return; - - context.requestAnimationFrame( onAnimationFrame ); - - isAnimating = true; - - }, - - stop: function () { - - isAnimating = false; - - }, - - setAnimationLoop: function ( callback ) { - - animationLoop = callback; - - }, - - setContext: function ( value ) { - - context = value; - - } - - }; - -} - -export { WebGLAnimation }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327012402.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327012402.py deleted file mode 100644 index 
bac12b405186135515e78ef6807e23410338b289..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327012402.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) - - diff --git a/spaces/bigPear/digitalWDF/src/utils/common.py b/spaces/bigPear/digitalWDF/src/utils/common.py deleted file mode 100644 index 782a2b247b48eb1e8984a35ed9471f3b57843480..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/src/utils/common.py +++ /dev/null @@ -1,507 +0,0 @@ -import os -import sys -import torch -import hashlib -from typing import Literal, Optional, Tuple - -import transformers -from transformers import ( - AutoConfig, - AutoModel, - AutoTokenizer, - HfArgumentParser, - Seq2SeqTrainingArguments -) -from transformers.utils import check_min_version -from transformers.utils.versions import require_version -from transformers.modeling_utils import PreTrainedModel -from transformers.tokenization_utils import PreTrainedTokenizer - -import datasets -from datasets import Dataset, concatenate_datasets, load_dataset - -from peft import ( - PeftModel, - TaskType, - LoraConfig, - get_peft_model -) - -from trl import AutoModelForCausalLMWithValueHead - -from .config import ( - ModelArguments, - DataTrainingArguments, - FinetuningArguments -) - -from .other import ( - get_logger, - load_trainable_params, - load_valuehead_params, - print_trainable_params, - prepare_model_for_training, - IGNORE_INDEX, - FINETUNING_ARGS_NAME -) - - -logger = get_logger(__name__) - - -check_min_version("4.27.4") -require_version("datasets>=2.10.0", "To fix: pip install datasets>=2.10.0") -require_version("peft>=0.3.0", "To fix: pip install peft>=0.3.0") -require_version("trl>=0.4.1", "To fix: pip install trl>=0.4.1") - - -def init_adapter( - model: PreTrainedModel, - model_args: ModelArguments, - finetuning_args: FinetuningArguments, - is_trainable: bool -) -> PreTrainedModel: - r""" - Initializes the adapters. - - Note that the trainable parameters must be cast to float32. 
- """ - - if finetuning_args.finetuning_type == "none" and is_trainable: - raise ValueError("You cannot use finetuning_type=none while training.") - - if finetuning_args.finetuning_type == "full": - logger.info("Fine-tuning method: Full") - model = model.float() - - if model_args.checkpoint_dir is not None: - load_trainable_params(model, model_args.checkpoint_dir[0]) - - if finetuning_args.finetuning_type == "freeze": - logger.info("Fine-tuning method: Freeze") - for name, param in model.named_parameters(): - if not any(trainable_layer in name for trainable_layer in finetuning_args.trainable_layers): - param.requires_grad_(False) - else: - param.data = param.data.to(torch.float32) - - if model_args.checkpoint_dir is not None: - load_trainable_params(model, model_args.checkpoint_dir[0]) - - if finetuning_args.finetuning_type == "p_tuning": - logger.info("Fine-tuning method: P-Tuning v2") # nothing to do - - if model_args.checkpoint_dir is not None: - load_trainable_params(model, model_args.checkpoint_dir[0]) - - if finetuning_args.finetuning_type == "lora": - logger.info("Fine-tuning method: LoRA") - lastest_checkpoint = None - - if model_args.checkpoint_dir is not None: - if is_trainable and finetuning_args.resume_lora_training: # continually training on the lora weights - checkpoints_to_merge, lastest_checkpoint = model_args.checkpoint_dir[:-1], model_args.checkpoint_dir[-1] - else: - checkpoints_to_merge = model_args.checkpoint_dir - - for checkpoint in checkpoints_to_merge: - model = PeftModel.from_pretrained(model, checkpoint) - model = model.merge_and_unload() - - logger.info("Merged {} model checkpoint(s).".format(len(checkpoints_to_merge))) - - if lastest_checkpoint is not None: # resume lora training - model = PeftModel.from_pretrained(model, lastest_checkpoint, is_trainable=True) - - if lastest_checkpoint is None: # create new lora weights - lora_config = LoraConfig( - task_type=TaskType.CAUSAL_LM, - inference_mode=False, - r=finetuning_args.lora_rank, - lora_alpha=finetuning_args.lora_alpha, - lora_dropout=finetuning_args.lora_dropout, - target_modules=finetuning_args.lora_target - ) - model = get_peft_model(model, lora_config) - - return model - - -def load_pretrained( - model_args: ModelArguments, - training_args: Optional[Seq2SeqTrainingArguments] = None, - finetuning_args: Optional[FinetuningArguments] = None, - is_trainable: Optional[bool] = False, - stage: Optional[Literal["sft", "rwd", "ppo"]] = "sft" -) -> Tuple[PreTrainedModel, PreTrainedTokenizer]: - r""" - Load pretrained model and tokenizer. 
- """ - - if (not is_trainable) and (model_args.checkpoint_dir is None): - logger.warning("Checkpoint is not found at evaluation, load the original model.") - finetuning_args = FinetuningArguments(finetuning_type="none") - - if model_args.checkpoint_dir is not None: # load fine-tuned model from checkpoint - for checkpoint_dir in model_args.checkpoint_dir: - if not os.path.isfile(os.path.join(checkpoint_dir, FINETUNING_ARGS_NAME)): - raise ValueError("The fine-tuning arguments are not found in the provided dictionary.") - logger.info("Load fine-tuned model from checkpoint(s): {}".format(",".join(model_args.checkpoint_dir))) - finetuning_args = torch.load(os.path.join(model_args.checkpoint_dir[0], FINETUNING_ARGS_NAME)) - if finetuning_args.finetuning_type != "lora" and len(model_args.checkpoint_dir) > 1: - logger.warning("Only LoRA tuning accepts multiple checkpoints.") - - assert stage == "sft" or finetuning_args.finetuning_type == "lora", "RM and PPO training can only be performed with LoRA method." - - quantization = None - if model_args.quantization_bit is not None: - if is_trainable: - if finetuning_args.finetuning_type == "full": - raise ValueError("Full parameter fine-tuning does not support quantization.") - elif finetuning_args.finetuning_type == "p_tuning": - quantization = "cpm" # use cpm's quantization - else: - quantization = "bnb" # use bnb's quantization - else: - quantization = "cpm" - - config_kwargs = { - "trust_remote_code": True, - "cache_dir": model_args.cache_dir, - "revision": model_args.model_revision, - "use_auth_token": True if model_args.use_auth_token else None, - } - - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, - use_fast=model_args.use_fast_tokenizer, - padding_side="left", - **config_kwargs - ) - - config = AutoConfig.from_pretrained( - model_args.config_name if model_args.config_name else model_args.model_name_or_path, - **config_kwargs - ) - - # P-Tuning v2 configurations. - # We use the built-in p-tuning method of ChatGLM, we cannot use PEFT since the attention masks of ChatGLM are unusual. >_< - if finetuning_args.finetuning_type == "p_tuning": - config.pre_seq_len = finetuning_args.pre_seq_len # enable this will fix other parameters automatically - config.prefix_projection = finetuning_args.prefix_projection - - # Quantization configurations for Full, Freeze and LoRA in training (using bitsandbytes library). - if quantization == "bnb": - assert model_args.quantization_bit == 8, "Freeze and LoRA fine-tuning only accept 8-bit quantization." - - require_version("bitsandbytes>=0.37.0", "bitsandbytes library is required to use this feature.") - from bitsandbytes.cuda_setup.main import get_compute_capability, get_cuda_lib_handle, is_cublasLt_compatible - cuda = get_cuda_lib_handle() - cc = get_compute_capability(cuda) - assert is_cublasLt_compatible(cc), "The current GPU(s) is incompatible with quantization." - - config_kwargs["load_in_8bit"] = True - config_kwargs["device_map"] = "auto" # it should not be specified outside of load_in_8bit - - # Load and prepare pretrained models (without valuehead). 
- model = AutoModel.from_pretrained(model_args.model_name_or_path, config=config, **config_kwargs) - model = prepare_model_for_training(model) if is_trainable else model - model = init_adapter(model, model_args, finetuning_args, is_trainable) - - if not is_trainable: - model.requires_grad_(False) # fix all params - model = model.half() # cast all params to float16 - - # Quantization with the built-in method for P-Tuning v2 training or evaluation. - # Model parameters should be cast to float16 in quantized P-Tuning setting. - if quantization == "cpm": - assert model_args.quantization_bit in [4, 8], "P-Tuning v2 and inference mode only accept 4-bit or 8-bit quantization." - assert not (is_trainable and training_args.fp16), "FP16 training conflicts with cpm quantization." - - model.quantize(model_args.quantization_bit) # in-place method - - for name, param in model.named_parameters(): - if "prefix_encoder" not in name: - param.data = param.data.to(torch.float16) # convert all params in half precision except prefix_encoder - - if quantization is not None: - logger.info("Quantized model to {} bit.".format(model_args.quantization_bit)) - - if stage == "rwd" or stage == "ppo": # add value head - assert is_trainable, "Reward and PPO stages cannot be performed at evaluation." - - model = AutoModelForCausalLMWithValueHead.from_pretrained(model) - - if stage == "ppo": # load reward model - assert model_args.reward_model is not None, "Reward model is necessary for PPO training." - model.pretrained_model.load_adapter(model_args.reward_model, "reward", is_trainable=False) - load_valuehead_params(model, model_args.reward_model) - - # Set the parameter _is_int8_training_enabled for the AutoModelForCausalLMWithValueHead model - # To meet the compliance requirements of the transformers library - if quantization == "bnb": - model._is_int8_training_enabled = True - - print_trainable_params(model) - - return model, tokenizer - - -def prepare_args() -> Tuple[ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments, FinetuningArguments]: - - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments, FinetuningArguments)) - - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # Provide arguments with a json file. - model_args, data_args, training_args, finetuning_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args, finetuning_args = parser.parse_args_into_dataclasses() - - # Setup logging - if training_args.should_log: - # The default of training_args.log_level is passive, so we set log level at info here to have that default. 
- transformers.utils.logging.set_verbosity_info() - - log_level = training_args.get_process_log_level() - datasets.utils.logging.set_verbosity(log_level) - transformers.utils.logging.set_verbosity(log_level) - transformers.utils.logging.enable_default_handler() - transformers.utils.logging.enable_explicit_format() - - # Check arguments (do not check finetuning_args since it may be loaded from checkpoints) - if int(training_args.do_train) + int(training_args.do_eval) + int(training_args.do_predict) != 1: - raise ValueError("We must perform a single operation among do_train, do_eval and do_predict.") - - if model_args.quantization_bit is not None and training_args.do_train == False: - logger.warning("We do not recommend to evaluaute model in 4/8-bit mode.") - - if training_args.do_train and (not training_args.fp16): - logger.warning("We recommend enable fp16 mixed precision training for ChatGLM-6B.") - - training_args.optim = "adamw_torch" if training_args.optim == "adamw_hf" else training_args.optim # suppress warning - - # Log on each process the small summary: - logger.warning( - f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\n" - + f" distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" - ) - logger.info(f"Training/evaluation parameters {training_args}") - - # Set seed before initializing model. - transformers.set_seed(training_args.seed) - - return model_args, data_args, training_args, finetuning_args - - -def prepare_data( - model_args: ModelArguments, - data_args: DataTrainingArguments -) -> Dataset: - - def checksum(file_path, hash): - with open(file_path, "rb") as datafile: - binary_data = datafile.read() - sha1 = hashlib.sha1(binary_data).hexdigest() - if sha1 != hash: - logger.warning("Checksum failed for {}. 
It may vary depending on the platform.".format(file_path)) - - max_samples = data_args.max_samples - all_datasets = [] # support multiple datasets - - for dataset_info in data_args.dataset_list: - - logger.info("Loading dataset {}...".format(dataset_info)) - - if dataset_info.load_from == "hf_hub": - raw_datasets = load_dataset(dataset_info.dataset_name, cache_dir=model_args.cache_dir) - elif dataset_info.load_from == "script": - raw_datasets = load_dataset( - os.path.join(data_args.dataset_dir, dataset_info.dataset_name), - cache_dir=model_args.cache_dir - ) - elif dataset_info.load_from == "file": - data_file = os.path.join(data_args.dataset_dir, dataset_info.file_name) # support json, jsonl and csv - extension = dataset_info.file_name.split(".")[-1] - - if dataset_info.file_sha1 is not None: - checksum(data_file, dataset_info.file_sha1) - else: - logger.warning("Checksum failed: missing SHA-1 hash value in dataset_info.") - - raw_datasets = load_dataset( - extension, - data_files=data_file, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None - ) - else: - raise NotImplementedError - - dataset = raw_datasets[data_args.split] - - if max_samples is not None: - max_samples_temp = min(len(dataset), max_samples) - dataset = dataset.select(range(max_samples_temp)) - - dummy_data = [None] * len(dataset) - for column, column_name in [ - ("prompt_column", "prompt"), - ("query_column", "query"), - ("response_column", "response"), - ("history_column", "history") - ]: # every dataset will have 4 columns same as each other - if getattr(dataset_info, column) != column_name: - if getattr(dataset_info, column): - dataset = dataset.rename_column(getattr(dataset_info, column), column_name) - else: # None or empty string - dataset = dataset.add_column(column_name, dummy_data) - all_datasets.append(dataset) - - if len(data_args.dataset_list) == 1: - all_datasets = all_datasets[0] - else: - all_datasets = concatenate_datasets(all_datasets) - - return all_datasets - - -def preprocess_data( - dataset: Dataset, - tokenizer: PreTrainedTokenizer, - data_args: DataTrainingArguments, - training_args: Seq2SeqTrainingArguments, - stage: Optional[Literal["sft", "rwd", "ppo"]] = "sft" -) -> Dataset: - - column_names = list(dataset.column_names) - prefix = data_args.source_prefix if data_args.source_prefix is not None else "" - - def format_example(examples): # support question with a single answer or multiple answers - for i in range(len(examples["prompt"])): - if examples["prompt"][i] and examples["response"][i]: - query, answer = examples["prompt"][i], examples["response"][i] - if examples["query"][i]: - query += examples["query"][i] - if examples["history"][i]: - prompt = "" - history = examples["history"][i] - for j, (old_query, response) in enumerate(history): - prompt += "[Round {}]\n问:{}\n答:{}\n".format(j, old_query, response) - prompt += "[Round {}]\n问:{}\n答:".format(len(history), query) - else: - prompt = query - prompt = prefix + prompt - yield prompt, answer - - def preprocess_function_train(examples): - # build inputs with format `X [gMASK] [BOS] Y [EOS]` and labels with format `[IGNORE] ... 
[IGNORE] [BOS] Y [EOS]` - model_inputs = {"input_ids": [], "labels": []} - for prompt, answer in format_example(examples): - source_ids = tokenizer.encode(text=prompt, add_special_tokens=False) - target_ids = tokenizer.encode(text=answer, add_special_tokens=False) - - if len(source_ids) > data_args.max_source_length - 2: # gmask and bos tokens - source_ids = source_ids[:data_args.max_source_length - 2] - if len(target_ids) > data_args.max_target_length - 1: # eos token - target_ids = target_ids[:data_args.max_target_length - 1] - - input_ids = tokenizer.build_inputs_with_special_tokens(source_ids, target_ids) - - context_length = input_ids.index(tokenizer.bos_token_id) - labels = [IGNORE_INDEX] * context_length + input_ids[context_length:] - - model_inputs["input_ids"].append(input_ids) - model_inputs["labels"].append(labels) - return model_inputs - - def preprocess_function_eval(examples): - # build inputs with format `[PAD] ... [PAD] X [gMASK] [BOS]` and labels with format `Y [gMASK] [BOS]` - # left-padding is needed for prediction, use the built-in function of the tokenizer - inputs, targets = [], [] - for prompt, answer in format_example(examples): - inputs.append(prompt) - targets.append(answer) - model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, truncation=True, padding=True) - labels = tokenizer(text_target=targets, max_length=data_args.max_target_length, truncation=True) # no padding - if data_args.ignore_pad_token_for_loss: - labels["input_ids"] = [ - [(l_id if l_id != tokenizer.pad_token_id else IGNORE_INDEX) for l_id in label] for label in labels["input_ids"] - ] - model_inputs["labels"] = labels["input_ids"] - return model_inputs - - def preprocess_function_train_pair(examples): - # build input pairs with format `X [gMASK] [BOS] Y1 [EOS]` and `X [gMASK] [BOS] Y2 [EOS]` - model_inputs = {"accept_ids": [], "reject_ids": []} - for prompt, answer in format_example(examples): - source_ids = tokenizer.encode(text=prompt, add_special_tokens=False) - accept_ids = tokenizer.encode(text=answer[0], add_special_tokens=False) - reject_ids = tokenizer.encode(text=answer[1], add_special_tokens=False) - - if len(source_ids) > data_args.max_source_length - 2: # gmask and bos tokens - source_ids = source_ids[:data_args.max_source_length - 2] - if len(accept_ids) > data_args.max_target_length - 1: # eos token - accept_ids = accept_ids[:data_args.max_target_length - 1] - if len(reject_ids) > data_args.max_target_length - 1: # eos token - reject_ids = reject_ids[:data_args.max_target_length - 1] - - accept_ids = tokenizer.build_inputs_with_special_tokens(source_ids[:], accept_ids) # avoid copying error - reject_ids = tokenizer.build_inputs_with_special_tokens(source_ids[:], reject_ids) - - model_inputs["accept_ids"].append(accept_ids) - model_inputs["reject_ids"].append(reject_ids) - return model_inputs - - def preprocess_function_train_ppo(examples): - # build inputs with format `X [gMASK] [BOS]` - model_inputs = {"input_ids": []} - for prompt, _ in format_example(examples): - source_ids = tokenizer.encode(text=prompt, add_special_tokens=False) - - if len(source_ids) > data_args.max_source_length - 2: # gmask and bos tokens - source_ids = source_ids[:data_args.max_source_length - 2] - - input_ids = tokenizer.build_inputs_with_special_tokens(source_ids) - model_inputs["input_ids"].append(input_ids) - return model_inputs - - def print_sft_dataset_example(example): - print("input_ids:\n{}".format(example["input_ids"])) - 
print("inputs:\n{}".format(tokenizer.decode(example["input_ids"]))) - print("label_ids:\n{}".format(example["labels"])) - print("labels:\n{}".format(tokenizer.decode(example["labels"]))) - - def print_pairwise_dataset_example(example): - print("accept_ids:\n{}".format(example["accept_ids"])) - print("accepts:\n{}".format(tokenizer.decode(example["accept_ids"]))) - print("reject_ids:\n{}".format(example["reject_ids"])) - print("rejects:\n{}".format(tokenizer.decode(example["reject_ids"]))) - - def print_ppo_dataset_example(example): - print("input_ids:\n{}".format(example["input_ids"])) - print("inputs:\n{}".format(tokenizer.decode(example["input_ids"]))) - - if stage == "sft": - preprocess_function = preprocess_function_train if training_args.do_train else preprocess_function_eval - elif stage == "rwd": - preprocess_function = preprocess_function_train_pair - elif stage == "ppo": - preprocess_function = preprocess_function_train_ppo - - with training_args.main_process_first(desc="dataset map pre-processing"): - dataset = dataset.map( - preprocess_function, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - desc="Running tokenizer on dataset" - ) - - if stage == "sft": - print_sft_dataset_example(dataset[0]) - elif stage == "rwd": - print_pairwise_dataset_example(dataset[0]) - elif stage == "ppo": - print_ppo_dataset_example(dataset[0]) - - return dataset diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py deleted file mode 100644 index a4640b34bbd1ca68a32114471d5585734c4af2fc..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py +++ /dev/null @@ -1,552 +0,0 @@ -import math -from os.path import basename, dirname, join, isfile -import torch -from torch import nn -from torch.nn import functional as nnf -from torch.nn.modules.activation import ReLU - - -def precompute_clip_vectors(): - - from trails.initialization import init_dataset - lvis = init_dataset('LVIS_OneShot3', split='train', mask='text_label', image_size=224, aug=1, normalize=True, - reduce_factor=None, add_bar=False, negative_prob=0.5) - - all_names = list(lvis.category_names.values()) - - import clip - from models.clip_prompts import imagenet_templates - clip_model = clip.load("ViT-B/32", device='cuda', jit=False)[0] - prompt_vectors = {} - for name in all_names[:100]: - with torch.no_grad(): - conditionals = [t.format(name).replace('_', ' ') for t in imagenet_templates] - text_tokens = clip.tokenize(conditionals).cuda() - cond = clip_model.encode_text(text_tokens).cpu() - - for cond, vec in zip(conditionals, cond): - prompt_vectors[cond] = vec.cpu() - - import pickle - - pickle.dump(prompt_vectors, open('precomputed_prompt_vectors.pickle', 'wb')) - - -def get_prompt_list(prompt): - if prompt == 'plain': - return ['{}'] - elif prompt == 'fixed': - return ['a photo of a {}.'] - elif prompt == 'shuffle': - return ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif prompt == 'shuffle+': - return ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.', - 'a cropped photo of a {}.', 'a good photo of a {}.', 'a photo of one {}.', - 'a bad photo of a {}.', 'a photo of the {}.'] - elif prompt == 
'shuffle_clip': - from models.clip_prompts import imagenet_templates - return imagenet_templates - else: - raise ValueError('Invalid value for prompt') - - -def forward_multihead_attention(x, b, with_aff=False, attn_mask=None): - """ - Simplified version of multihead attention (taken from torch source code but without tons of if clauses). - The mlp and layer norm come from CLIP. - x: input. - b: multihead attention module. - """ - - x_ = b.ln_1(x) - q, k, v = nnf.linear(x_, b.attn.in_proj_weight, b.attn.in_proj_bias).chunk(3, dim=-1) - tgt_len, bsz, embed_dim = q.size() - - head_dim = embed_dim // b.attn.num_heads - scaling = float(head_dim) ** -0.5 - - q = q.contiguous().view(tgt_len, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - k = k.contiguous().view(-1, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - v = v.contiguous().view(-1, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - - q = q * scaling - - attn_output_weights = torch.bmm(q, k.transpose(1, 2)) # n_heads * batch_size, tokens^2, tokens^2 - if attn_mask is not None: - - - attn_mask_type, attn_mask = attn_mask - n_heads = attn_output_weights.size(0) // attn_mask.size(0) - attn_mask = attn_mask.repeat(n_heads, 1) - - if attn_mask_type == 'cls_token': - # the mask only affects similarities compared to the readout-token. - attn_output_weights[:, 0, 1:] = attn_output_weights[:, 0, 1:] * attn_mask[None,...] - # attn_output_weights[:, 0, 0] = 0*attn_output_weights[:, 0, 0] - - if attn_mask_type == 'all': - # print(attn_output_weights.shape, attn_mask[:, None].shape) - attn_output_weights[:, 1:, 1:] = attn_output_weights[:, 1:, 1:] * attn_mask[:, None] - - - attn_output_weights = torch.softmax(attn_output_weights, dim=-1) - - attn_output = torch.bmm(attn_output_weights, v) - attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn_output = b.attn.out_proj(attn_output) - - x = x + attn_output - x = x + b.mlp(b.ln_2(x)) - - if with_aff: - return x, attn_output_weights - else: - return x - - -class CLIPDenseBase(nn.Module): - - def __init__(self, version, reduce_cond, reduce_dim, prompt, n_tokens): - super().__init__() - - import clip - - # prec = torch.FloatTensor - self.clip_model, _ = clip.load(version, device='cpu', jit=False) - self.model = self.clip_model.visual - - # if not None, scale conv weights such that we obtain n_tokens. 
- self.n_tokens = n_tokens - - for p in self.clip_model.parameters(): - p.requires_grad_(False) - - # conditional - if reduce_cond is not None: - self.reduce_cond = nn.Linear(512, reduce_cond) - for p in self.reduce_cond.parameters(): - p.requires_grad_(False) - else: - self.reduce_cond = None - - self.film_mul = nn.Linear(512 if reduce_cond is None else reduce_cond, reduce_dim) - self.film_add = nn.Linear(512 if reduce_cond is None else reduce_cond, reduce_dim) - - self.reduce = nn.Linear(768, reduce_dim) - - self.prompt_list = get_prompt_list(prompt) - - # precomputed prompts - import pickle - if isfile('precomputed_prompt_vectors.pickle'): - precomp = pickle.load(open('precomputed_prompt_vectors.pickle', 'rb')) - self.precomputed_prompts = {k: torch.from_numpy(v) for k, v in precomp.items()} - else: - self.precomputed_prompts = dict() - - def rescaled_pos_emb(self, new_size): - assert len(new_size) == 2 - - a = self.model.positional_embedding[1:].T.view(1, 768, *self.token_shape) - b = nnf.interpolate(a, new_size, mode='bicubic', align_corners=False).squeeze(0).view(768, new_size[0]*new_size[1]).T - return torch.cat([self.model.positional_embedding[:1], b]) - - def visual_forward(self, x_inp, extract_layers=(), skip=False, mask=None): - - - with torch.no_grad(): - - inp_size = x_inp.shape[2:] - - if self.n_tokens is not None: - stride2 = x_inp.shape[2] // self.n_tokens - conv_weight2 = nnf.interpolate(self.model.conv1.weight, (stride2, stride2), mode='bilinear', align_corners=True) - x = nnf.conv2d(x_inp, conv_weight2, bias=self.model.conv1.bias, stride=stride2, dilation=self.model.conv1.dilation) - else: - x = self.model.conv1(x_inp) # shape = [*, width, grid, grid] - - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - - x = torch.cat([self.model.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - - standard_n_tokens = 50 if self.model.conv1.kernel_size[0] == 32 else 197 - - if x.shape[1] != standard_n_tokens: - new_shape = int(math.sqrt(x.shape[1]-1)) - x = x + self.rescaled_pos_emb((new_shape, new_shape)).to(x.dtype)[None,:,:] - else: - x = x + self.model.positional_embedding.to(x.dtype) - - x = self.model.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - - activations, affinities = [], [] - for i, res_block in enumerate(self.model.transformer.resblocks): - - if mask is not None: - mask_layer, mask_type, mask_tensor = mask - if mask_layer == i or mask_layer == 'all': - # import ipdb; ipdb.set_trace() - size = int(math.sqrt(x.shape[0] - 1)) - - attn_mask = (mask_type, nnf.interpolate(mask_tensor.unsqueeze(1).float(), (size, size)).view(mask_tensor.shape[0], size * size)) - - else: - attn_mask = None - else: - attn_mask = None - - x, aff_per_head = forward_multihead_attention(x, res_block, with_aff=True, attn_mask=attn_mask) - - if i in extract_layers: - affinities += [aff_per_head] - - #if self.n_tokens is not None: - # activations += [nnf.interpolate(x, inp_size, mode='bilinear', align_corners=True)] - #else: - activations += [x] - - if len(extract_layers) > 0 and i == max(extract_layers) and skip: - print('early skip') - break - - x = x.permute(1, 0, 2) # LND -> NLD - x = self.model.ln_post(x[:, 0, :]) - - if self.model.proj is not None: - x = x @ self.model.proj - - return x, activations, affinities - - def sample_prompts(self, words, prompt_list=None): - - prompt_list = prompt_list if prompt_list is 
not None else self.prompt_list - - prompt_indices = torch.multinomial(torch.ones(len(prompt_list)), len(words), replacement=True) - prompts = [prompt_list[i] for i in prompt_indices] - return [promt.format(w) for promt, w in zip(prompts, words)] - - def get_cond_vec(self, conditional, batch_size): - # compute conditional from a single string - if conditional is not None and type(conditional) == str: - cond = self.compute_conditional(conditional) - cond = cond.repeat(batch_size, 1) - - # compute conditional from string list/tuple - elif conditional is not None and type(conditional) in {list, tuple} and type(conditional[0]) == str: - assert len(conditional) == batch_size - cond = self.compute_conditional(conditional) - - # use conditional directly - elif conditional is not None and type(conditional) == torch.Tensor and conditional.ndim == 2: - cond = conditional - - # compute conditional from image - elif conditional is not None and type(conditional) == torch.Tensor: - with torch.no_grad(): - cond, _, _ = self.visual_forward(conditional) - else: - raise ValueError('invalid conditional') - return cond - - def compute_conditional(self, conditional): - import clip - - dev = next(self.parameters()).device - - if type(conditional) in {list, tuple}: - text_tokens = clip.tokenize(conditional).to(dev) - cond = self.clip_model.encode_text(text_tokens) - else: - if conditional in self.precomputed_prompts: - cond = self.precomputed_prompts[conditional].float().to(dev) - else: - text_tokens = clip.tokenize([conditional]).to(dev) - cond = self.clip_model.encode_text(text_tokens)[0] - - if self.shift_vector is not None: - return cond + self.shift_vector - else: - return cond - - -def clip_load_untrained(version): - assert version == 'ViT-B/16' - from clip.model import CLIP - from clip.clip import _MODELS, _download - model = torch.jit.load(_download(_MODELS['ViT-B/16'])).eval() - state_dict = model.state_dict() - - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - return CLIP(embed_dim, image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers) - - -class CLIPDensePredT(CLIPDenseBase): - - def __init__(self, version='ViT-B/32', extract_layers=(3, 6, 9), cond_layer=0, reduce_dim=128, n_heads=4, prompt='fixed', - extra_blocks=0, reduce_cond=None, fix_shift=False, - learn_trans_conv_only=False, limit_to_clip_only=False, upsample=False, - add_calibration=False, rev_activations=False, trans_conv=None, n_tokens=None): - - super().__init__(version, reduce_cond, reduce_dim, prompt, n_tokens) - # device = 'cpu' - - self.extract_layers = extract_layers - self.cond_layer = cond_layer - self.limit_to_clip_only = limit_to_clip_only - self.process_cond = None - self.rev_activations = rev_activations - - depth = 
len(extract_layers) - - if add_calibration: - self.calibration_conds = 1 - - self.upsample_proj = nn.Conv2d(reduce_dim, 1, kernel_size=1) if upsample else None - - self.add_activation1 = True - - self.version = version - - self.token_shape = {'ViT-B/32': (7, 7), 'ViT-B/16': (14, 14)}[version] - - if fix_shift: - # self.shift_vector = nn.Parameter(torch.load(join(dirname(basename(__file__)), 'clip_text_shift_vector.pth')), requires_grad=False) - self.shift_vector = nn.Parameter(torch.load(join(dirname(basename(__file__)), 'shift_text_to_vis.pth')), requires_grad=False) - # self.shift_vector = nn.Parameter(-1*torch.load(join(dirname(basename(__file__)), 'shift2.pth')), requires_grad=False) - else: - self.shift_vector = None - - if trans_conv is None: - trans_conv_ks = {'ViT-B/32': (32, 32), 'ViT-B/16': (16, 16)}[version] - else: - # explicitly define transposed conv kernel size - trans_conv_ks = (trans_conv, trans_conv) - - self.trans_conv = nn.ConvTranspose2d(reduce_dim, 1, trans_conv_ks, stride=trans_conv_ks) - - assert len(self.extract_layers) == depth - - self.reduces = nn.ModuleList([nn.Linear(768, reduce_dim) for _ in range(depth)]) - self.blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=reduce_dim, nhead=n_heads) for _ in range(len(self.extract_layers))]) - self.extra_blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=reduce_dim, nhead=n_heads) for _ in range(extra_blocks)]) - - # refinement and trans conv - - if learn_trans_conv_only: - for p in self.parameters(): - p.requires_grad_(False) - - for p in self.trans_conv.parameters(): - p.requires_grad_(True) - - self.prompt_list = get_prompt_list(prompt) - - - def forward(self, inp_image, conditional=None, return_features=False, mask=None): - - assert type(return_features) == bool - - inp_image = inp_image.to(self.model.positional_embedding.device) - - if mask is not None: - raise ValueError('mask not supported') - - # x_inp = normalize(inp_image) - x_inp = inp_image - - bs, dev = inp_image.shape[0], x_inp.device - - cond = self.get_cond_vec(conditional, bs) - - visual_q, activations, _ = self.visual_forward(x_inp, extract_layers=[0] + list(self.extract_layers)) - - activation1 = activations[0] - activations = activations[1:] - - _activations = activations[::-1] if not self.rev_activations else activations - - a = None - for i, (activation, block, reduce) in enumerate(zip(_activations, self.blocks, self.reduces)): - - if a is not None: - a = reduce(activation) + a - else: - a = reduce(activation) - - if i == self.cond_layer: - if self.reduce_cond is not None: - cond = self.reduce_cond(cond) - - a = self.film_mul(cond) * a + self.film_add(cond) - - a = block(a) - - for block in self.extra_blocks: - a = a + block(a) - - a = a[1:].permute(1, 2, 0) # rm cls token and -> BS, Feats, Tokens - - size = int(math.sqrt(a.shape[2])) - - a = a.view(bs, a.shape[1], size, size) - - a = self.trans_conv(a) - - if self.n_tokens is not None: - a = nnf.interpolate(a, x_inp.shape[2:], mode='bilinear', align_corners=True) - - if self.upsample_proj is not None: - a = self.upsample_proj(a) - a = nnf.interpolate(a, x_inp.shape[2:], mode='bilinear') - - if return_features: - return a, visual_q, cond, [activation1] + activations - else: - return a, - - - -class CLIPDensePredTMasked(CLIPDensePredT): - - def __init__(self, version='ViT-B/32', extract_layers=(3, 6, 9), cond_layer=0, reduce_dim=128, n_heads=4, - prompt='fixed', extra_blocks=0, reduce_cond=None, fix_shift=False, learn_trans_conv_only=False, - refine=None, limit_to_clip_only=False, 
upsample=False, add_calibration=False, n_tokens=None): - - super().__init__(version=version, extract_layers=extract_layers, cond_layer=cond_layer, reduce_dim=reduce_dim, - n_heads=n_heads, prompt=prompt, extra_blocks=extra_blocks, reduce_cond=reduce_cond, - fix_shift=fix_shift, learn_trans_conv_only=learn_trans_conv_only, - limit_to_clip_only=limit_to_clip_only, upsample=upsample, add_calibration=add_calibration, - n_tokens=n_tokens) - - def visual_forward_masked(self, img_s, seg_s): - return super().visual_forward(img_s, mask=('all', 'cls_token', seg_s)) - - def forward(self, img_q, cond_or_img_s, seg_s=None, return_features=False): - - if seg_s is None: - cond = cond_or_img_s - else: - img_s = cond_or_img_s - - with torch.no_grad(): - cond, _, _ = self.visual_forward_masked(img_s, seg_s) - - return super().forward(img_q, cond, return_features=return_features) - - - -class CLIPDenseBaseline(CLIPDenseBase): - - def __init__(self, version='ViT-B/32', cond_layer=0, - extract_layer=9, reduce_dim=128, reduce2_dim=None, prompt='fixed', - reduce_cond=None, limit_to_clip_only=False, n_tokens=None): - - super().__init__(version, reduce_cond, reduce_dim, prompt, n_tokens) - device = 'cpu' - - # self.cond_layer = cond_layer - self.extract_layer = extract_layer - self.limit_to_clip_only = limit_to_clip_only - self.shift_vector = None - - self.token_shape = {'ViT-B/32': (7, 7), 'ViT-B/16': (14, 14)}[version] - - assert reduce2_dim is not None - - self.reduce2 = nn.Sequential( - nn.Linear(reduce_dim, reduce2_dim), - nn.ReLU(), - nn.Linear(reduce2_dim, reduce_dim) - ) - - trans_conv_ks = {'ViT-B/32': (32, 32), 'ViT-B/16': (16, 16)}[version] - self.trans_conv = nn.ConvTranspose2d(reduce_dim, 1, trans_conv_ks, stride=trans_conv_ks) - - - def forward(self, inp_image, conditional=None, return_features=False): - - inp_image = inp_image.to(self.model.positional_embedding.device) - - # x_inp = normalize(inp_image) - x_inp = inp_image - - bs, dev = inp_image.shape[0], x_inp.device - - cond = self.get_cond_vec(conditional, bs) - - visual_q, activations, affinities = self.visual_forward(x_inp, extract_layers=[self.extract_layer]) - - a = activations[0] - a = self.reduce(a) - a = self.film_mul(cond) * a + self.film_add(cond) - - if self.reduce2 is not None: - a = self.reduce2(a) - - # the original model would execute a transformer block here - - a = a[1:].permute(1, 2, 0) # rm cls token and -> BS, Feats, Tokens - - size = int(math.sqrt(a.shape[2])) - - a = a.view(bs, a.shape[1], size, size) - a = self.trans_conv(a) - - if return_features: - return a, visual_q, cond, activations - else: - return a, - - -class CLIPSegMultiLabel(nn.Module): - - def __init__(self, model) -> None: - super().__init__() - - from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC - - self.pascal_classes = VOC - - from models.clipseg import CLIPDensePredT - from general_utils import load_model - # self.clipseg = load_model('rd64-vit16-neg0.2-phrasecut', strict=False) - self.clipseg = load_model(model, strict=False) - - self.clipseg.eval() - - def forward(self, x): - - bs = x.shape[0] - out = torch.ones(21, bs, 352, 352).to(x.device) * -10 - - for class_id, class_name in enumerate(self.pascal_classes): - - fac = 3 if class_name == 'background' else 1 - - with torch.no_grad(): - pred = torch.sigmoid(self.clipseg(x, class_name)[0][:,0]) * fac - - out[class_id] += pred - - - out = out.permute(1, 0, 2, 3) - - return out - - # construct output tensor - \ No newline at end of file diff --git 
a/spaces/bigjoker/stable-diffusion-webui/modules/call_queue.py b/spaces/bigjoker/stable-diffusion-webui/modules/call_queue.py deleted file mode 100644 index 6eabc06441b531f89e55f805096af97ede4547ee..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/call_queue.py +++ /dev/null @@ -1,109 +0,0 @@ -import html -import sys -import threading -import traceback -import time - -from modules import shared, progress - -queue_lock = threading.Lock() - - -def wrap_queued_call(func): - def f(*args, **kwargs): - with queue_lock: - res = func(*args, **kwargs) - - return res - - return f - - -def wrap_gradio_gpu_call(func, extra_outputs=None): - def f(*args, **kwargs): - - # if the first argument is a string that says "task(...)", it is treated as a job id - if len(args) > 0 and type(args[0]) == str and args[0][0:5] == "task(" and args[0][-1] == ")": - id_task = args[0] - progress.add_task_to_queue(id_task) - else: - id_task = None - - with queue_lock: - shared.state.begin() - progress.start_task(id_task) - - try: - res = func(*args, **kwargs) - finally: - progress.finish_task(id_task) - - shared.state.end() - - return res - - return wrap_gradio_call(f, extra_outputs=extra_outputs, add_stats=True) - - -def wrap_gradio_call(func, extra_outputs=None, add_stats=False): - def f(*args, extra_outputs_array=extra_outputs, **kwargs): - run_memmon = shared.opts.memmon_poll_rate > 0 and not shared.mem_mon.disabled and add_stats - if run_memmon: - shared.mem_mon.monitor() - t = time.perf_counter() - - try: - res = list(func(*args, **kwargs)) - except Exception as e: - # When printing out our debug argument list, do not print out more than a MB of text - max_debug_str_len = 131072 # (1024*1024)/8 - - print("Error completing request", file=sys.stderr) - argStr = f"Arguments: {str(args)} {str(kwargs)}" - print(argStr[:max_debug_str_len], file=sys.stderr) - if len(argStr) > max_debug_str_len: - print(f"(Argument list truncated at {max_debug_str_len}/{len(argStr)} characters)", file=sys.stderr) - - print(traceback.format_exc(), file=sys.stderr) - - shared.state.job = "" - shared.state.job_count = 0 - - if extra_outputs_array is None: - extra_outputs_array = [None, ''] - - res = extra_outputs_array + [f"
    {html.escape(type(e).__name__+': '+str(e))}
    "] - - shared.state.skipped = False - shared.state.interrupted = False - shared.state.job_count = 0 - - if not add_stats: - return tuple(res) - - elapsed = time.perf_counter() - t - elapsed_m = int(elapsed // 60) - elapsed_s = elapsed % 60 - elapsed_text = f"{elapsed_s:.2f}s" - if elapsed_m > 0: - elapsed_text = f"{elapsed_m}m "+elapsed_text - - if run_memmon: - mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()} - active_peak = mem_stats['active_peak'] - reserved_peak = mem_stats['reserved_peak'] - sys_peak = mem_stats['system_peak'] - sys_total = mem_stats['total'] - sys_pct = round(sys_peak/max(sys_total, 1) * 100, 2) - - vram_html = f"

    Torch active/reserved: {active_peak}/{reserved_peak} MiB, Sys VRAM: {sys_peak}/{sys_total} MiB ({sys_pct}%)

    " - else: - vram_html = '' - - # last item is always HTML - res[-1] += f"

    Time taken: {elapsed_text}

    {vram_html}
    " - - return tuple(res) - - return f - diff --git a/spaces/bioriAsaeru/text-to-voice/Assassins Creed Brotherhood Data3cabrar LINK.md b/spaces/bioriAsaeru/text-to-voice/Assassins Creed Brotherhood Data3cabrar LINK.md deleted file mode 100644 index f2844d30c4241f58fc20e81447e7a40362cb72d1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Assassins Creed Brotherhood Data3cabrar LINK.md +++ /dev/null @@ -1,18 +0,0 @@ -

    Assassins Creed Brotherhood Data3cabrar


    DOWNLOAD 🗸 https://urloso.com/2uyRhW



    -
    -: ( Creed Brotherhood Data3cabrar Edit: ( in Curriculum and Instruction - -Publisher: ( Creed Brotherhood Data3cabrar Publisher: ( in Curriculum and Instruction - -Copyright notice: ( Creed Brotherhood Data3cabrar Author: ( in Curriculum and Instruction - -Avoidance notice: ( Creed Brotherhood Data3cabrar Prevention notice: ( in Curriculum and Instruction - -#CREEDBROTHERHOOD2016 #images - -Anime film "パラノスカチュア・ペルシャシィ" - -Nepal, an isolated country in the Himalayas in the Central Asian region, is experiencing a period of political turmoil following the massive earthquake of 2015. The people suffer constant poverty, and the Himalayan region, once protected by the pristine and highly valuable Mount Annapurna, is now on the verge of destruction, eroding at a rate of 60 meters per year. In the impoverished periphery of the city of Kathmandu, an orphaned boy named Arno discovers a peculiar set of weapons and ancient fighting techniques that give him incredible powers to stand against the totalitarian government and save his people. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Felekten Bir Gece 3 The Hangover 3 Efsanevi lnn Son Maceras Trke Dublaj HD Kalitede zle.md b/spaces/bioriAsaeru/text-to-voice/Felekten Bir Gece 3 The Hangover 3 Efsanevi lnn Son Maceras Trke Dublaj HD Kalitede zle.md deleted file mode 100644 index 0ec51fb7fd390f513bce59b3466e490cfe8f5811..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Felekten Bir Gece 3 The Hangover 3 Efsanevi lnn Son Maceras Trke Dublaj HD Kalitede zle.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Felekten Bir Gece 3 – The Hangover 3 ~ Turkce Dublaj Tek Parca HD Izle


    Download File 🗸🗸🗸 https://urloso.com/2uyQpf



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Goddess Movie In Italian ((NEW)) Free Download.md b/spaces/bioriAsaeru/text-to-voice/Goddess Movie In Italian ((NEW)) Free Download.md deleted file mode 100644 index acaaf17535cbc4bed706da9f7101062235e54071..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Goddess Movie In Italian ((NEW)) Free Download.md +++ /dev/null @@ -1,11 +0,0 @@ - -

    At the end of the 19th century, during the British rule of India, the goddess Kali commits a number of murders; among the victims is a man who was carrying a cargo of serum to a village stricken by disease. Worried about the many deaths that will result from the shipment being stopped, Dr. Palmer asks Major Talbot for help. The Major's suspicions about the murderer turn onto the doctor, who must take refuge in the jungle after faking his own death by a tiger. Help comes in the shapely form of Amrita, a native dancer, who gives him shelter and even frees him from a gang of robbers. Dr. Palmer still has a number of real and fantastic adventures before he sees an end to his tribulations...

    -

    'YIFY subtitles' has a wide range of movies across languages. The site comes with a nice interface that makes selecting your desired movies easy. While downloading, the interface takes you to a PDF page, which is a bit tricky.

    -

    Goddess Movie In Italian Free Download


    Download ✒ ✒ ✒ https://urloso.com/2uyQw9



    -

    This website has a plethora of subtitles for your favorite movies. It has a very simple, if dated, interface. You can download subtitles for various movies, such as the Harry Potter series. It also shows ads at the top of the page.

    -

    This site contains movie subtitles only. The ads are pretty annoying and distract from the page content. To watch DivX/XviD movies with subtitles in Windows Media Player, you need to install a filter called DirectVobSub. As the files are zipped, you need to extract them with a tool such as WinZip after downloading them.

    -

    Here is another website that serves the same purpose. With Addic7ed, you can get subtitles for TV shows and movies alike. The site has an option to sign up, though you can download subtitles without registering. You have to scroll down to see the list.

    -

    Speaking of subtitle downloading, we loved this site. If you cannot find a movie's subtitles, it helps you Google them right there. The site also offers high-definition videos that you can download and enjoy.

    -

    There is no doubt that the internet is full of subtitle download sites, and you can get almost anything onto your system. By the way, you can also create captions and subtitles for Facebook videos. Just pick the site you like and feel free to try it. Leave a comment below to let us know whether it worked well for you.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Job seekers asked for Facebook passwords How to protect your privacy and rights.md b/spaces/bioriAsaeru/text-to-voice/Job seekers asked for Facebook passwords How to protect your privacy and rights.md deleted file mode 100644 index e84ff95edbc62709811452aea99180cfab27d79b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Job seekers asked for Facebook passwords How to protect your privacy and rights.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    (AP) -- When Justin Bassett interviewed for a new job, he expected the usual questions about experience and references. So he was astonished when the interviewer asked for something else: his Facebook username and password. googletag.cmd.push(function() googletag.display('div-gpt-ad-1449240174198-2'); ); Bassett, a New York City statistician, had just finished answering a few character questions when the interviewer turned to her computer to search for his Facebook page. But she couldn't see his private profile. She turned back and asked him to hand over his login information.Bassett refused and withdrew his application, saying he didn't want to work for a company that would seek such personal information. But as the job market steadily improves, other job candidates are confronting the same question from prospective employers, and some of them cannot afford to say no.In their efforts to vet applicants, some companies and government agencies are going beyond merely glancing at a person's social networking profiles and instead asking to log in as the user to have a look around."It's akin to requiring someone's house keys," said Orin Kerr, a George Washington University law professor and former federal prosecutor who calls it "an egregious privacy violation."Questions have been raised about the legality of the practice, which is also the focus of proposed legislation in Illinois and Maryland that would forbid public agencies from asking for access to social networks.Since the rise of social networking, it has become common for managers to review publically available Facebook profiles, Twitter accounts and other sites to learn more about job candidates. But many users, especially on Facebook, have their profiles set to private, making them available only to selected people or certain networks.Companies that don't ask for passwords have taken other steps - such as asking applicants to friend human resource managers or to log in to a company computer during an interview. Once employed, some workers have been required to sign non-disparagement agreements that ban them from talking negatively about an employer on social media. (adsbygoogle = window.adsbygoogle || []).push(); Asking for a candidate's password is more prevalent among public agencies, especially those seeking to fill law enforcement positions such as police officers or 911 dispatchers.Back in 2010, Robert Collins was returning to his job as a security guard at the Maryland Department of Public Safety and Correctional Services after taking a leave following his mother's death. During a reinstatement interview, he was asked for his login and password, purportedly so the agency could check for any gang affiliations. He was stunned by the request but complied."I needed my job to feed my family. I had to," he recalled,After the ACLU complained about the practice, the agency amended its policy, asking instead for job applicants to log in during interviews."To me, that's still invasive. 
I can appreciate the desire to learn more about the applicant, but it's still a violation of people's personal privacy," said Collins, whose case inspired Maryland's legislation.Until last year, the city of Bozeman, Mont., had a long-standing policy of asking job applicants for passwords to their email addresses, social-networking websites and other online accounts.And since 2006, the McLean County, Ill., sheriff's office has been one of several Illinois sheriff's departments that ask applicants to sign into social media sites to be screened.Chief Deputy Rusty Thomas defended the practice, saying applicants have a right to refuse. But no one has ever done so. Thomas said that "speaks well of the people we have apply."When asked what sort of material would jeopardize job prospects, Thomas said "it depends on the situation" but could include "inappropriate pictures or relationships with people who are underage, illegal behavior."In Spotsylvania County, Va., the sheriff's department asks applicants to friend background investigators for jobs at the 911 dispatch center and for law enforcement positions."In the past, we've talked to friends and neighbors, but a lot of times we found that applicants interact more through social media sites than they do with real friends," said Capt. Mike Harvey. "Their virtual friends will know more about them than a person living 30 yards away from them."Harvey said investigators look for any "derogatory" behavior that could damage the agency's reputation.E. Chandlee Bryan, a career coach and co-author of the book "The Twitter Job Search Guide," said job seekers should always be aware of what's on their social media sites and assume someone is going to look at it.Bryan said she is troubled by companies asking for logins, but she feels it's not a violation if an employer asks to see a Facebook profile through a friend request. And she's not troubled by non-disparagement agreements."I think that when you work for a company, they are essentially supporting you in exchange for your work. I think if you're dissatisfied, you should go to them and not on a social media site," she said.More companies are also using third-party applications to scour Facebook profiles, Bryan said. One app called BeKnown can sometimes access personal profiles, short of wall messages, if a job seeker allows it.Sears is one of the companies using apps. An applicant has the option of logging into the Sears job site through Facebook by allowing a third-party application to draw information from the profile, such as friend lists.Sears Holdings Inc. spokeswoman Kim Freely said using a Facebook profile to apply allows Sears to be updated on the applicant's work history.The company assumes "that people keep their social profiles updated to the minute, which allows us to consider them for other jobs in the future or for ones that they may not realize are available currently," she said.Giving out Facebook login information violates the social network's terms of service. 
But those terms have no real legal weight, and experts say the legality of asking for such information remains murky.The Department of Justice regards it as a federal crime to enter a social networking site in violation of the terms of service, but during recent congressional testimony, the agency said such violations would not be prosecuted.But Lori Andrews, law professor at IIT Chicago-Kent College of Law specializing in Internet privacy, is concerned about the pressure placed on applicants, even if they voluntarily provide access to social sites."Volunteering is coercion if you need a job," Andrews said.Neither Facebook nor Twitter responded to repeated requests for comment.In New York, Bassett considered himself lucky that he was able to turn down the consulting gig at a lobbying firm."I think asking for account login credentials is regressive," he said. "If you need to put food on the table for your three kids, you can't afford to stand up for your belief." ©2012 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    -

    job seekers asked for facebook passwords


    Download >>>>> https://urloso.com/2uyS1N



    -

    According to recent reports, some employers have begun asking prospective employees for their Facebook passwords during the interview process. In one case reported by the Associated Press, a statistician was asked for his Facebook user name and password so the employer could review the private portions of his profile. At least two other cases were identified in which job applicants were required to hand over Facebook user names and passwords to be considered for a position, and one city, until recently, required applicants to provide access to their email accounts.

    -

    Democratic Rep. La Shawn Ford's measure would allow job-seekers to file lawsuits if asked for access to sites like Facebook. Bosses could still ask for usernames that would allow them to view public information on the sites.

    -

    No. As of February 22, 2022, Facebook Jobs is no longer available on the Facebook Lite app or the Facebook mobile website (m.facebook.com/jobs) for either employers or job seekers. Read more about this change here.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Kung Fu Panda Movie Download In Tamil.md b/spaces/bioriAsaeru/text-to-voice/Kung Fu Panda Movie Download In Tamil.md deleted file mode 100644 index 2d6e2c3d4cdf117b1ebb6b2095c55121bc677bb0..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Kung Fu Panda Movie Download In Tamil.md +++ /dev/null @@ -1,6 +0,0 @@ -

    kung fu panda movie download in tamil


    Download >>>>> https://urloso.com/2uyRtx



    - -Kung Fu Panda 2008 Tamil Dubbed Movie Download Kung Fu Panda 2008 Tamil Dubbed HD Movie Download Tamil Dubbed Kung Fu Panda 2008 Movie in ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/boomsss/gamedayspx/model_90m.py b/spaces/boomsss/gamedayspx/model_90m.py deleted file mode 100644 index 0fb57fd3ee3ecfccd007a1d205d21d4c3b3844c3..0000000000000000000000000000000000000000 --- a/spaces/boomsss/gamedayspx/model_90m.py +++ /dev/null @@ -1,481 +0,0 @@ -import streamlit as st -import pandas as pd -import pandas_datareader as pdr -import numpy as np -import yfinance as yf -import json -import requests -from bs4 import BeautifulSoup -from typing import List -import xgboost as xgb -from tqdm import tqdm -from sklearn import linear_model -import joblib -import os -from sklearn.metrics import roc_auc_score, precision_score, recall_score -import datetime -from pandas.tseries.offsets import BDay -from datasets import load_dataset -import lightgbm as lgb - -def walk_forward_validation(df, target_column, num_training_rows, num_periods): - - # Create an XGBRegressor model - # model = xgb.XGBRegressor(n_estimators=100, objective='reg:squarederror', random_state = 42) - model = linear_model.LinearRegression() - - overall_results = [] - # Iterate over the rows in the DataFrame, one step at a time - for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),desc='LR Model'): - # Split the data into training and test sets - X_train = df.drop(target_column, axis=1).iloc[:i] - y_train = df[target_column].iloc[:i] - X_test = df.drop(target_column, axis=1).iloc[i:i+num_periods] - y_test = df[target_column].iloc[i:i+num_periods] - - # Fit the model to the training data - model.fit(X_train, y_train) - - # Make a prediction on the test data - predictions = model.predict(X_test) - - # Create a DataFrame to store the true and predicted values - result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index) - - overall_results.append(result_df) - - df_results = pd.concat(overall_results) - # model.save_model('model_lr.bin') - # Return the true and predicted values, and fitted model - return df_results, model - -model_cols = [ - 'BigNewsDay', - 'Quarter', - 'Perf5Day', - 'Perf5Day_n1', - 'DaysGreen', - 'DaysRed', - 'CurrentHigh30toClose', - 'CurrentLow30toClose', - 'CurrentClose30toClose', - 'CurrentRange30', - 'GapFill30', - 'CurrentGap', - 'RangePct', - 'RangePct_n1', - 'RangePct_n2', - 'OHLC4_VIX', - 'OHLC4_VIX_n1', - 'OHLC4_VIX_n2', - 'OpenL1', - 'OpenL2', - 'OpenH1', - 'OpenH2', - 'L1TouchPct', - 'L2TouchPct', - 'H1TouchPct', - 'H2TouchPct', - 'L1BreakPct', - 'L2BreakPct', - 'H1BreakPct', - 'H2BreakPct', - 'GreenProbas', - # 'GapFillGreenProba' -] - -def walk_forward_validation_seq(df, target_column_clf, target_column_regr, num_training_rows, num_periods): - - # Create run the regression model to get its target - res, model1 = walk_forward_validation(df.drop(columns=[target_column_clf]).dropna(), target_column_regr, num_training_rows, num_periods) - # joblib.dump(model1, 'model1.bin') - - # Merge the result df back on the df for feeding into the classifier - for_merge = res[['Predicted']] - for_merge.columns = ['RegrModelOut'] - for_merge['RegrModelOut'] = for_merge['RegrModelOut'] > 0 - df = df.merge(for_merge, left_index=True, right_index=True) - df = df.drop(columns=[target_column_regr]) - df = df[model_cols + ['RegrModelOut', target_column_clf]] - - df[target_column_clf] = df[target_column_clf].astype(bool) - df['RegrModelOut'] = df['RegrModelOut'].astype(bool) - - # Create an XGBRegressor model - # model2 = xgb.XGBClassifier(n_estimators=10, random_state = 42) - model2 = lgb.LGBMClassifier(n_estimators=10, random_state=42, 
verbosity=-1) - # model = linear_model.LogisticRegression(max_iter=1500) - - overall_results = [] - # Iterate over the rows in the DataFrame, one step at a time - for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),'CLF Model'): - # Split the data into training and test sets - X_train = df.drop(target_column_clf, axis=1).iloc[:i] - y_train = df[target_column_clf].iloc[:i] - X_test = df.drop(target_column_clf, axis=1).iloc[i:i+num_periods] - y_test = df[target_column_clf].iloc[i:i+num_periods] - - # Fit the model to the training data - model2.fit(X_train, y_train) - - # Make a prediction on the test data - predictions = model2.predict_proba(X_test)[:,-1] - - # Create a DataFrame to store the true and predicted values - result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index) - - overall_results.append(result_df) - - df_results = pd.concat(overall_results) - # model1.save_model('model_ensemble.bin') - # joblib.dump(model2, 'model2.bin') - # Return the true and predicted values, and fitted model - return df_results, model1, model2 - -def seq_predict_proba(df, trained_reg_model, trained_clf_model): - regr_pred = trained_reg_model.predict(df) - regr_pred = regr_pred > 0 - new_df = df.copy() - new_df['RegrModelOut'] = regr_pred - clf_pred_proba = trained_clf_model.predict_proba(new_df[model_cols + ['RegrModelOut']])[:,-1] - return clf_pred_proba - -def get_data(): - # f = open('settings.json') - # j = json.load(f) - # API_KEY_FRED = j["API_KEY_FRED"] - - API_KEY_FRED = os.getenv('API_KEY_FRED') - - def parse_release_dates(release_id: str) -> List[str]: - release_dates_url = f'https://api.stlouisfed.org/fred/release/dates?release_id={release_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}' - r = requests.get(release_dates_url) - text = r.text - soup = BeautifulSoup(text, 'xml') - dates = [] - for release_date_tag in soup.find_all('release_date', {'release_id': release_id}): - dates.append(release_date_tag.text) - return dates - - def parse_release_dates_obs(series_id: str) -> List[str]: - obs_url = f'https://api.stlouisfed.org/fred/series/observations?series_id={series_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}' - r = requests.get(obs_url) - text = r.text - soup = BeautifulSoup(text, 'xml') - observations = [] - for observation_tag in soup.find_all('observation'): - date = observation_tag.get('date') - value = observation_tag.get('value') - observations.append((date, value)) - return observations - - econ_dfs = {} - - econ_tickers = [ - 'WALCL', - 'NFCI', - 'WRESBAL' - ] - - for et in tqdm(econ_tickers, desc='getting econ tickers'): - # p = parse_release_dates_obs(et) - # df = pd.DataFrame(columns = ['ds',et], data = p) - df = pdr.get_data_fred(et) - df.index = df.index.rename('ds') - # df.index = pd.to_datetime(df.index.rename('ds')).dt.tz_localize(None) - # df['ds'] = pd.to_datetime(df['ds']).dt.tz_localize(None) - econ_dfs[et] = df - - # walcl = pd.DataFrame(columns = ['ds','WALCL'], data = p) - # walcl['ds'] = pd.to_datetime(walcl['ds']).dt.tz_localize(None) - - # nfci = pd.DataFrame(columns = ['ds','NFCI'], data = p2) - # nfci['ds'] = pd.to_datetime(nfci['ds']).dt.tz_localize(None) - - release_ids = [ - "10", # "Consumer Price Index" - "46", # "Producer Price Index" - "50", # "Employment Situation" - "53", # "Gross Domestic Product" - "103", # "Discount Rate Meeting Minutes" - "180", # "Unemployment Insurance Weekly Claims Report" - "194", # "ADP 
National Employment Report" - "323" # "Trimmed Mean PCE Inflation Rate" - ] - - release_names = [ - "CPI", - "PPI", - "NFP", - "GDP", - "FOMC", - "UNEMP", - "ADP", - "PCE" - ] - - releases = {} - - for rid, n in tqdm(zip(release_ids, release_names), total = len(release_ids), desc='Getting release dates'): - releases[rid] = {} - releases[rid]['dates'] = parse_release_dates(rid) - releases[rid]['name'] = n - - # Create a DF that has all dates with the name of the col as 1 - # Once merged on the main dataframe, days with econ events will be 1 or None. Fill NA with 0 - # This column serves as the true/false indicator of whether there was economic data released that day. - for rid in tqdm(release_ids, desc='Making indicators'): - releases[rid]['df'] = pd.DataFrame( - index=releases[rid]['dates'], - data={ - releases[rid]['name']: 1 - }) - releases[rid]['df'].index = pd.DatetimeIndex(releases[rid]['df'].index) - # releases[rid]['df']['ds'] = pd.to_datetime(releases[rid]['df']['ds']).dt.tz_localize(None) - # releases[rid]['df'] = releases[rid]['df'].set_index('ds') - - vix = yf.Ticker('^VIX') - spx = yf.Ticker('^GSPC') - - - # Pull in data - data = load_dataset("boomsss/spx_intra", split='train') - - rows = [d['text'] for d in data] - rows = [x.split(',') for x in rows] - - fr = pd.DataFrame(columns=[ - 'Datetime','Open','High','Low','Close' - ], data = rows) - - fr['Datetime'] = pd.to_datetime(fr['Datetime']) - fr['Datetime'] = fr['Datetime'].dt.tz_localize('America/New_York') - fr = fr.set_index('Datetime') - fr['Open'] = pd.to_numeric(fr['Open']) - fr['High'] = pd.to_numeric(fr['High']) - fr['Low'] = pd.to_numeric(fr['Low']) - fr['Close'] = pd.to_numeric(fr['Close']) - - # Get incremental date - last_date = fr.index.date[-1] - last_date = last_date + datetime.timedelta(days=1) - # Get incremental data - spx1 = yf.Ticker('^GSPC') - yfp = spx1.history(start=last_date, interval='30m') - - if len(yfp) > 0: - # Concat current and incremental - df_30m = pd.concat([fr, yfp]) - else: - df_30m = fr.copy() - - # Get the first 30 minute bar - df_30m = df_30m.reset_index() - df_30m['Datetime'] = df_30m['Datetime'].dt.date - df_30m = df_30m.groupby('Datetime').head(3) - df_30m = df_30m.set_index('Datetime',drop=True) - # Rename the columns - df_30m = df_30m[['Open','High','Low','Close']] - - opens_1h = df_30m.groupby('Datetime')['Open'].head(1) - highs_1h = df_30m.groupby('Datetime')['High'].max() - lows_1h = df_30m.groupby('Datetime')['Low'].min() - closes_1h = df_30m.groupby('Datetime')['Close'].tail(1) - - df_1h = pd.DataFrame(index=df_30m.index.unique()) - df_1h['Open'] = opens_1h - df_1h['High'] = highs_1h - df_1h['Low'] = lows_1h - df_1h['Close'] = closes_1h - - df_1h.columns = ['Open30','High30','Low30','Close30'] - - prices_vix = vix.history(start='2018-07-01', interval='1d') - prices_spx = spx.history(start='2018-07-01', interval='1d') - prices_spx['index'] = [str(x).split()[0] for x in prices_spx.index] - prices_spx['index'] = pd.to_datetime(prices_spx['index']).dt.date - prices_spx.index = prices_spx['index'] - prices_spx = prices_spx.drop(columns='index') - prices_spx.index = pd.DatetimeIndex(prices_spx.index) - - - prices_vix['index'] = [str(x).split()[0] for x in prices_vix.index] - prices_vix['index'] = pd.to_datetime(prices_vix['index']).dt.date - prices_vix.index = prices_vix['index'] - prices_vix = prices_vix.drop(columns='index') - prices_vix.index = pd.DatetimeIndex(prices_vix.index) - - - data = prices_spx.merge(df_1h, left_index=True, right_index=True) - data = 
data.merge(prices_vix[['Open','High','Low','Close']], left_index=True, right_index=True, suffixes=['','_VIX']) - - # Features - data['PrevClose'] = data['Close'].shift(1) - data['Perf5Day'] = data['Close'] > data['Close'].shift(5) - data['Perf5Day_n1'] = data['Perf5Day'].shift(1) - data['Perf5Day_n1'] = data['Perf5Day_n1'].astype(bool) - data['GreenDay'] = (data['Close'] > data['PrevClose']) * 1 - data['RedDay'] = (data['Close'] <= data['PrevClose']) * 1 - - data['VIX5Day'] = data['Close_VIX'] > data['Close_VIX'].shift(5) - data['VIX5Day_n1'] = data['VIX5Day'].astype(bool) - - data['Range'] = data[['Open','High']].max(axis=1) - data[['Low','Open']].min(axis=1) # Current day range in points - data['RangePct'] = data['Range'] / data['Close'] - data['VIXLevel'] = pd.qcut(data['Close_VIX'], 4) - data['OHLC4_VIX'] = data[['Open_VIX','High_VIX','Low_VIX','Close_VIX']].mean(axis=1) - data['OHLC4'] = data[['Open','High','Low','Close']].mean(axis=1) - data['OHLC4_Trend'] = data['OHLC4'] > data['OHLC4'].shift(1) - data['OHLC4_Trend_n1'] = data['OHLC4_Trend'].shift(1) - data['OHLC4_Trend_n1'] = data['OHLC4_Trend_n1'].astype(float) - data['OHLC4_Trend_n2'] = data['OHLC4_Trend'].shift(1) - data['OHLC4_Trend_n2'] = data['OHLC4_Trend_n2'].astype(float) - data['RangePct_n1'] = data['RangePct'].shift(1) - data['RangePct_n2'] = data['RangePct'].shift(2) - data['OHLC4_VIX_n1'] = data['OHLC4_VIX'].shift(1) - data['OHLC4_VIX_n2'] = data['OHLC4_VIX'].shift(2) - data['CurrentGap'] = (data['Open'] - data['PrevClose']) / data['PrevClose'] - data['CurrentGapHist'] = data['CurrentGap'].copy() - data['CurrentGap'] = data['CurrentGap'].shift(-1) - data['DayOfWeek'] = pd.to_datetime(data.index) - data['DayOfWeek'] = data['DayOfWeek'].dt.day - - # Intraday features - data['CurrentHigh30'] = data['High30'].shift(-1) - data['CurrentLow30'] = data['Low30'].shift(-1) - data['CurrentClose30'] = data['Close30'].shift(-1) - data['HistClose30toPrevClose'] = (data['Close30'] / data['PrevClose']) - 1 - - - # Open to High - data['CurrentHigh30toClose'] = (data['CurrentHigh30'] / data['Close']) - 1 - data['CurrentLow30toClose'] = (data['CurrentLow30'] / data['Close']) - 1 - data['CurrentClose30toClose'] = (data['CurrentClose30'] / data['Close']) - 1 - data['CurrentRange30'] = (data['CurrentHigh30'] - data['CurrentLow30']) / data['Close'] - data['GapFill30'] = [low <= prev_close if gap > 0 else high >= prev_close for high, low, prev_close, gap in zip(data['CurrentHigh30'], data['CurrentLow30'], data['Close'], data['CurrentGap'])] - - # Target -- the next day's low - data['Target'] = (data['OHLC4'] / data['PrevClose']) - 1 - data['Target'] = data['Target'].shift(-1) - # data['Target'] = data['RangePct'].shift(-1) - - # Target for clf -- whether tomorrow will close above or below today's close - data['Target_clf'] = data['Close'] > data['PrevClose'] - data['Target_clf'] = data['Target_clf'].shift(-1) - data['DayOfWeek'] = pd.to_datetime(data.index) - data['Quarter'] = data['DayOfWeek'].dt.quarter - data['DayOfWeek'] = data['DayOfWeek'].dt.weekday - - # Calculate up - data['up'] = 100 * (data['High'].shift(1) - data['Open'].shift(1)) / data['Close'].shift(1) - - # Calculate upSD - data['upSD'] = data['up'].rolling(30).std(ddof=0) - - # Calculate aveUp - data['aveUp'] = data['up'].rolling(30).mean() - data['H1'] = data['Open'] + (data['aveUp'] / 100) * data['Open'] - data['H2'] = data['Open'] + ((data['aveUp'] + data['upSD']) / 100) * data['Open'] - data['down'] = 100 * (data['Open'].shift(1) - data['Low'].shift(1)) / 
data['Close'].shift(1) - data['downSD'] = data['down'].rolling(30).std(ddof=0) - data['aveDown'] = data['down'].rolling(30).mean() - data['L1'] = data['Open'] - (data['aveDown'] / 100) * data['Open'] - data['L2'] = data['Open'] - ((data['aveDown'] + data['upSD']) / 100) * data['Open'] - - data = data.assign( - L1Touch = lambda x: x['Low'] < x['L1'], - L2Touch = lambda x: x['Low'] < x['L2'], - H1Touch = lambda x: x['High'] > x['H1'], - H2Touch = lambda x: x['High'] > x['H2'], - L1Break = lambda x: x['Close'] < x['L1'], - L2Break = lambda x: x['Close'] < x['L2'], - H1Break = lambda x: x['Close'] > x['H1'], - H2Break = lambda x: x['Close'] > x['H2'], - OpenL1 = lambda x: x['Open'] / x['L1'], - OpenL2 = lambda x: x['Open'] / x['L2'], - OpenH1 = lambda x: x['Open'] / x['H1'], - OpenH2 = lambda x: x['Open'] / x['H2'] - ) - - level_cols = [ - 'L1Touch', - 'L2Touch', - 'H1Touch', - 'H2Touch', - 'L1Break', - 'L2Break', - 'H1Break', - 'H2Break' - ] - - for col in level_cols: - data[col+'Pct'] = data[col].rolling(100).mean() - data[col+'Pct'] = data[col+'Pct'].shift(-1) - - def get_quintiles(df, col_name, q): - return df.groupby(pd.qcut(df[col_name], q))['GreenDay'].mean() - - probas = [] - for i, pct in enumerate(data['CurrentClose30toClose']): - try: - df_q = get_quintiles(data.iloc[:i], 'HistClose30toPrevClose', 5) - for q in df_q.index: - if q.left <= pct <= q.right: - p = df_q[q] - except: - p = None - - probas.append(p) - - # gapfills = [] - # for i, pct in enumerate(data['CurrentGap']): - # try: - # df_q = get_quintiles(data.iloc[:i], 'CurrentGapHist', 5) - # for q in df_q.index: - # if q.left <= pct <= q.right: - # p = df_q[q] - # except: - # p = None - - # gapfills.append(p) - - data['GreenProbas'] = probas - # data['GapFillGreenProba'] = gapfills - - for rid in tqdm(release_ids, desc='Merging econ data'): - # Get the name of the release - n = releases[rid]['name'] - # Merge the corresponding DF of the release - data = data.merge(releases[rid]['df'], how = 'left', left_index=True, right_index=True) - # Create a column that shifts the value in the merged column up by 1 - data[f'{n}_shift'] = data[n].shift(-1) - # Fill the rest with zeroes - data[n] = data[n].fillna(0) - data[f'{n}_shift'] = data[f'{n}_shift'].fillna(0) - - data['BigNewsDay'] = data[[x for x in data.columns if '_shift' in x]].max(axis=1) - - def cumul_sum(col): - nums = [] - s = 0 - for x in col: - if x == 1: - s += 1 - elif x == 0: - s = 0 - nums.append(s) - return nums - - consec_green = cumul_sum(data['GreenDay'].values) - consec_red = cumul_sum(data['RedDay'].values) - - data['DaysGreen'] = consec_green - data['DaysRed'] = consec_red - - final_row = data.index[-2] - - exp_row = data.index[-1] - - df_final = data.loc[:final_row, model_cols + ['Target','Target_clf']] - df_final = df_final.dropna(subset=['Target','Target_clf','Perf5Day_n1']) - return data, df_final, final_row \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/encodec_audiogen_16khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/encodec_audiogen_16khz.py deleted file mode 100644 index c9b41f684045594bb264cfb7f4f15d1da439382c..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/encodec_audiogen_16khz.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid shows how to train the new AudioGen EnCodec model at 16 kHz. -""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # use configuration for AudioGen's EnCodec model trained on monophonic audio sampled at 16 kHz - # AudioGen's EnCodec is trained with a total stride of 320 leading to a frame rate of 50 hz - launcher.bind_(solver='compression/encodec_audiogen_16khz') - # replace this by the desired sound dataset - launcher.bind_(dset='internal/sounds_16khz') - # launch xp - launcher() diff --git a/spaces/bright1/Sepsis-Prediction-API/src/app/templates/index.html b/spaces/bright1/Sepsis-Prediction-API/src/app/templates/index.html deleted file mode 100644 index f66ea06ca7e6eb533d857e28228efdc6d0056757..0000000000000000000000000000000000000000 --- a/spaces/bright1/Sepsis-Prediction-API/src/app/templates/index.html +++ /dev/null @@ -1,14 +0,0 @@ - - - - - - - - Document - - -

    Welcome to the Sepsis API

    -

    Kindly access the API Documentation link here.

    - - \ No newline at end of file diff --git a/spaces/brightswitch/EleutherAI-llemma_34b/README.md b/spaces/brightswitch/EleutherAI-llemma_34b/README.md deleted file mode 100644 index a081ce9389aa662be6ec4e3055fb29147cba0c24..0000000000000000000000000000000000000000 --- a/spaces/brightswitch/EleutherAI-llemma_34b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: EleutherAI-llemma 34b -emoji: 📉 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.48.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_backbone.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_backbone.py deleted file mode 100644 index 3bb100f9bd5b4939e4646821c5a60d51c8ea65fd..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_backbone.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import unittest -import torch - -import detectron2.export.torchscript # apply patch # noqa -from detectron2 import model_zoo -from detectron2.config import get_cfg -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone import build_resnet_backbone -from detectron2.modeling.backbone.fpn import build_resnet_fpn_backbone - - -class TestBackBone(unittest.TestCase): - def test_resnet_scriptability(self): - cfg = get_cfg() - resnet = build_resnet_backbone(cfg, ShapeSpec(channels=3)) - - scripted_resnet = torch.jit.script(resnet) - - inp = torch.rand(2, 3, 100, 100) - out1 = resnet(inp)["res4"] - out2 = scripted_resnet(inp)["res4"] - self.assertTrue(torch.allclose(out1, out2)) - - def test_fpn_scriptability(self): - cfg = model_zoo.get_config("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml") - bb = build_resnet_fpn_backbone(cfg, ShapeSpec(channels=3)) - bb_s = torch.jit.script(bb) - - inp = torch.rand(2, 3, 128, 128) - out1 = bb(inp)["p5"] - out2 = bb_s(inp)["p5"] - self.assertTrue(torch.allclose(out1, out2)) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/http_writer.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/http_writer.py deleted file mode 100644 index 73f0f96f0ae3f152c22931054c7c4193cc45d8ef..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/http_writer.py +++ /dev/null @@ -1,198 +0,0 @@ -"""Http related parsers and protocol.""" - -import asyncio -import zlib -from typing import Any, Awaitable, Callable, NamedTuple, Optional, Union # noqa - -from multidict import CIMultiDict - -from .abc import AbstractStreamWriter -from .base_protocol import BaseProtocol -from .helpers import NO_EXTENSIONS - -__all__ = ("StreamWriter", "HttpVersion", "HttpVersion10", "HttpVersion11") - - -class HttpVersion(NamedTuple): - major: int - minor: int - - -HttpVersion10 = HttpVersion(1, 0) -HttpVersion11 = HttpVersion(1, 1) - - -_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]] -_T_OnHeadersSent = Optional[Callable[["CIMultiDict[str]"], Awaitable[None]]] - - -class StreamWriter(AbstractStreamWriter): - def __init__( - self, - protocol: BaseProtocol, - loop: asyncio.AbstractEventLoop, - on_chunk_sent: _T_OnChunkSent = None, - on_headers_sent: _T_OnHeadersSent = None, - ) -> None: - self._protocol = protocol - - self.loop = loop - self.length = None - self.chunked = False - self.buffer_size = 0 - 
self.output_size = 0 - - self._eof = False - self._compress: Any = None - self._drain_waiter = None - - self._on_chunk_sent: _T_OnChunkSent = on_chunk_sent - self._on_headers_sent: _T_OnHeadersSent = on_headers_sent - - @property - def transport(self) -> Optional[asyncio.Transport]: - return self._protocol.transport - - @property - def protocol(self) -> BaseProtocol: - return self._protocol - - def enable_chunking(self) -> None: - self.chunked = True - - def enable_compression( - self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY - ) -> None: - zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS - self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) - - def _write(self, chunk: bytes) -> None: - size = len(chunk) - self.buffer_size += size - self.output_size += size - transport = self.transport - if not self._protocol.connected or transport is None or transport.is_closing(): - raise ConnectionResetError("Cannot write to closing transport") - transport.write(chunk) - - async def write( - self, chunk: bytes, *, drain: bool = True, LIMIT: int = 0x10000 - ) -> None: - """Writes chunk of data to a stream. - - write_eof() indicates end of stream. - writer can't be used after write_eof() method being called. - write() return drain future. - """ - if self._on_chunk_sent is not None: - await self._on_chunk_sent(chunk) - - if isinstance(chunk, memoryview): - if chunk.nbytes != len(chunk): - # just reshape it - chunk = chunk.cast("c") - - if self._compress is not None: - chunk = self._compress.compress(chunk) - if not chunk: - return - - if self.length is not None: - chunk_len = len(chunk) - if self.length >= chunk_len: - self.length = self.length - chunk_len - else: - chunk = chunk[: self.length] - self.length = 0 - if not chunk: - return - - if chunk: - if self.chunked: - chunk_len_pre = ("%x\r\n" % len(chunk)).encode("ascii") - chunk = chunk_len_pre + chunk + b"\r\n" - - self._write(chunk) - - if self.buffer_size > LIMIT and drain: - self.buffer_size = 0 - await self.drain() - - async def write_headers( - self, status_line: str, headers: "CIMultiDict[str]" - ) -> None: - """Write request/response status and headers.""" - if self._on_headers_sent is not None: - await self._on_headers_sent(headers) - - # status + headers - buf = _serialize_headers(status_line, headers) - self._write(buf) - - async def write_eof(self, chunk: bytes = b"") -> None: - if self._eof: - return - - if chunk and self._on_chunk_sent is not None: - await self._on_chunk_sent(chunk) - - if self._compress: - if chunk: - chunk = self._compress.compress(chunk) - - chunk = chunk + self._compress.flush() - if chunk and self.chunked: - chunk_len = ("%x\r\n" % len(chunk)).encode("ascii") - chunk = chunk_len + chunk + b"\r\n0\r\n\r\n" - else: - if self.chunked: - if chunk: - chunk_len = ("%x\r\n" % len(chunk)).encode("ascii") - chunk = chunk_len + chunk + b"\r\n0\r\n\r\n" - else: - chunk = b"0\r\n\r\n" - - if chunk: - self._write(chunk) - - await self.drain() - - self._eof = True - - async def drain(self) -> None: - """Flush the write buffer. - - The intended use is to write - - await w.write(data) - await w.drain() - """ - if self._protocol.transport is not None: - await self._protocol._drain_helper() - - -def _safe_header(string: str) -> str: - if "\r" in string or "\n" in string: - raise ValueError( - "Newline or carriage return detected in headers. " - "Potential header injection attack." 
- ) - return string - - -def _py_serialize_headers(status_line: str, headers: "CIMultiDict[str]") -> bytes: - headers_gen = (_safe_header(k) + ": " + _safe_header(v) for k, v in headers.items()) - line = status_line + "\r\n" + "\r\n".join(headers_gen) + "\r\n\r\n" - return line.encode("utf-8") - - -_serialize_headers = _py_serialize_headers - -try: - import aiohttp._http_writer as _http_writer # type: ignore[import] - - _c_serialize_headers = _http_writer._serialize_headers - if not NO_EXTENSIONS: - _serialize_headers = _c_serialize_headers -except ImportError: - pass diff --git a/spaces/captainChan/CaptainChan/callbacks.py b/spaces/captainChan/CaptainChan/callbacks.py deleted file mode 100644 index 82fb9e34da2a819ce849857c304bb3cd23973e81..0000000000000000000000000000000000000000 --- a/spaces/captainChan/CaptainChan/callbacks.py +++ /dev/null @@ -1,360 +0,0 @@ -import logging -import shutil -import time - -import editdistance as ed -import torchvision.utils as vutils -from fastai.callbacks.tensorboard import (LearnerTensorboardWriter, - SummaryWriter, TBWriteRequest, - asyncTBWriter) -from fastai.vision import * -from torch.nn.parallel import DistributedDataParallel -from torchvision import transforms - -import dataset -from utils import CharsetMapper, Timer, blend_mask - - -class IterationCallback(LearnerTensorboardWriter): - "A `TrackerCallback` that monitor in each iteration." - def __init__(self, learn:Learner, name:str='model', checpoint_keep_num=5, - show_iters:int=50, eval_iters:int=1000, save_iters:int=20000, - start_iters:int=0, stats_iters=20000): - #if self.learn.rank is not None: time.sleep(self.learn.rank) # keep all event files - super().__init__(learn, base_dir='.', name=learn.path, loss_iters=show_iters, - stats_iters=stats_iters, hist_iters=stats_iters) - self.name, self.bestname = Path(name).name, f'best-{Path(name).name}' - self.show_iters = show_iters - self.eval_iters = eval_iters - self.save_iters = save_iters - self.start_iters = start_iters - self.checpoint_keep_num = checpoint_keep_num - self.metrics_root = 'metrics/' # rewrite - self.timer = Timer() - self.host = self.learn.rank is None or self.learn.rank == 0 - - def _write_metrics(self, iteration:int, names:List[str], last_metrics:MetricsList)->None: - "Writes training metrics to Tensorboard." - for i, name in enumerate(names): - if last_metrics is None or len(last_metrics) < i+1: return - scalar_value = last_metrics[i] - self._write_scalar(name=name, scalar_value=scalar_value, iteration=iteration) - - def _write_sub_loss(self, iteration:int, last_losses:dict)->None: - "Writes sub loss to Tensorboard." - for name, loss in last_losses.items(): - scalar_value = to_np(loss) - tag = self.metrics_root + name - self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=iteration) - - def _save(self, name): - if isinstance(self.learn.model, DistributedDataParallel): - tmp = self.learn.model - self.learn.model = self.learn.model.module - self.learn.save(name) - self.learn.model = tmp - else: self.learn.save(name) - - def _validate(self, dl=None, callbacks=None, metrics=None, keeped_items=False): - "Validate on `dl` with potential `callbacks` and `metrics`." 
- dl = ifnone(dl, self.learn.data.valid_dl) - metrics = ifnone(metrics, self.learn.metrics) - cb_handler = CallbackHandler(ifnone(callbacks, []), metrics) - cb_handler.on_train_begin(1, None, metrics); cb_handler.on_epoch_begin() - if keeped_items: cb_handler.state_dict.update(dict(keeped_items=[])) - val_metrics = validate(self.learn.model, dl, self.loss_func, cb_handler) - cb_handler.on_epoch_end(val_metrics) - if keeped_items: return cb_handler.state_dict['keeped_items'] - else: return cb_handler.state_dict['last_metrics'] - - def jump_to_epoch_iter(self, epoch:int, iteration:int)->None: - try: - self.learn.load(f'{self.name}_{epoch}_{iteration}', purge=False) - logging.info(f'Loaded {self.name}_{epoch}_{iteration}') - except: logging.info(f'Model {self.name}_{epoch}_{iteration} not found.') - - def on_train_begin(self, n_epochs, **kwargs): - # TODO: can not write graph here - # super().on_train_begin(**kwargs) - self.best = -float('inf') - self.timer.tic() - if self.host: - checkpoint_path = self.learn.path/'checkpoint.yaml' - if checkpoint_path.exists(): - os.remove(checkpoint_path) - open(checkpoint_path, 'w').close() - return {'skip_validate': True, 'iteration':self.start_iters} # disable default validate - - def on_batch_begin(self, **kwargs:Any)->None: - self.timer.toc_data() - super().on_batch_begin(**kwargs) - - def on_batch_end(self, iteration, epoch, last_loss, smooth_loss, train, **kwargs): - super().on_batch_end(last_loss, iteration, train, **kwargs) - if iteration == 0: return - - if iteration % self.loss_iters == 0: - last_losses = self.learn.loss_func.last_losses - self._write_sub_loss(iteration=iteration, last_losses=last_losses) - self.tbwriter.add_scalar(tag=self.metrics_root + 'lr', - scalar_value=self.opt.lr, global_step=iteration) - - if iteration % self.show_iters == 0: - log_str = f'epoch {epoch} iter {iteration}: loss = {last_loss:6.4f}, ' \ - f'smooth loss = {smooth_loss:6.4f}' - logging.info(log_str) - # log_str = f'data time = {self.timer.data_diff:.4f}s, runing time = {self.timer.running_diff:.4f}s' - # logging.info(log_str) - - if iteration % self.eval_iters == 0: - # TODO: or remove time to on_epoch_end - # 1. Record time - log_str = f'average data time = {self.timer.average_data_time():.4f}s, ' \ - f'average running time = {self.timer.average_running_time():.4f}s' - logging.info(log_str) - - # 2. Call validate - last_metrics = self._validate() - self.learn.model.train() - log_str = f'epoch {epoch} iter {iteration}: eval loss = {last_metrics[0]:6.4f}, ' \ - f'ccr = {last_metrics[1]:6.4f}, cwr = {last_metrics[2]:6.4f}, ' \ - f'ted = {last_metrics[3]:6.4f}, ned = {last_metrics[4]:6.4f}, ' \ - f'ted/w = {last_metrics[5]:6.4f}, ' - logging.info(log_str) - names = ['eval_loss', 'ccr', 'cwr', 'ted', 'ned', 'ted/w'] - self._write_metrics(iteration, names, last_metrics) - - # 3. 
Save best model - current = last_metrics[2] - if current is not None and current > self.best: - logging.info(f'Better model found at epoch {epoch}, '\ - f'iter {iteration} with accuracy value: {current:6.4f}.') - self.best = current - self._save(f'{self.bestname}') - - if iteration % self.save_iters == 0 and self.host: - logging.info(f'Save model {self.name}_{epoch}_{iteration}') - filename = f'{self.name}_{epoch}_{iteration}' - self._save(filename) - - checkpoint_path = self.learn.path/'checkpoint.yaml' - if not checkpoint_path.exists(): - open(checkpoint_path, 'w').close() - with open(checkpoint_path, 'r') as file: - checkpoints = yaml.load(file, Loader=yaml.FullLoader) or dict() - checkpoints['all_checkpoints'] = ( - checkpoints.get('all_checkpoints') or list()) - checkpoints['all_checkpoints'].insert(0, filename) - if len(checkpoints['all_checkpoints']) > self.checpoint_keep_num: - removed_checkpoint = checkpoints['all_checkpoints'].pop() - removed_checkpoint = self.learn.path/self.learn.model_dir/f'{removed_checkpoint}.pth' - os.remove(removed_checkpoint) - checkpoints['current_checkpoint'] = filename - with open(checkpoint_path, 'w') as file: - yaml.dump(checkpoints, file) - - - self.timer.toc_running() - - def on_train_end(self, **kwargs): - #self.learn.load(f'{self.bestname}', purge=False) - pass - - def on_epoch_end(self, last_metrics:MetricsList, iteration:int, **kwargs)->None: - self._write_embedding(iteration=iteration) - - -class TextAccuracy(Callback): - _names = ['ccr', 'cwr', 'ted', 'ned', 'ted/w'] - def __init__(self, charset_path, max_length, case_sensitive, model_eval): - self.charset_path = charset_path - self.max_length = max_length - self.case_sensitive = case_sensitive - self.charset = CharsetMapper(charset_path, self.max_length) - self.names = self._names - - self.model_eval = model_eval or 'alignment' - assert self.model_eval in ['vision', 'language', 'alignment'] - - def on_epoch_begin(self, **kwargs): - self.total_num_char = 0. - self.total_num_word = 0. - self.correct_num_char = 0. - self.correct_num_word = 0. - self.total_ed = 0. - self.total_ned = 0. 
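        # Editor's note (illustrative sketch, not part of the original file): the counters
        # reset above feed the metrics that on_epoch_end reports, namely
        #   ccr   = correct_num_char / total_num_char    (character recognition rate)
        #   cwr   = correct_num_word / total_num_word    (word recognition rate)
        #   ted   = total_ed                             (summed edit distance)
        #   ned   = sum of ed(gt, pt) / max(len(gt), 1)  (normalized edit distance)
        #   ted/w = total_ed / total_num_word            (edit distance per word)
        # For example, gt='hello' vs pt='helo' adds ed=1 and ned=0.2, and 3 of 5 characters
        # count as correct, since characters are compared position by position over the
        # shorter of the two strings while total_num_char grows by len(gt).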
- - def _get_output(self, last_output): - if isinstance(last_output, (tuple, list)): - for res in last_output: - if res['name'] == self.model_eval: output = res - else: output = last_output - return output - - def _update_output(self, last_output, items): - if isinstance(last_output, (tuple, list)): - for res in last_output: - if res['name'] == self.model_eval: res.update(items) - else: last_output.update(items) - return last_output - - def on_batch_end(self, last_output, last_target, **kwargs): - output = self._get_output(last_output) - logits, pt_lengths = output['logits'], output['pt_lengths'] - pt_text, pt_scores, pt_lengths_ = self.decode(logits) - assert (pt_lengths == pt_lengths_).all(), f'{pt_lengths} != {pt_lengths_} for {pt_text}' - last_output = self._update_output(last_output, {'pt_text':pt_text, 'pt_scores':pt_scores}) - - pt_text = [self.charset.trim(t) for t in pt_text] - label = last_target[0] - if label.dim() == 3: label = label.argmax(dim=-1) # one-hot label - gt_text = [self.charset.get_text(l, trim=True) for l in label] - - for i in range(len(gt_text)): - if not self.case_sensitive: - gt_text[i], pt_text[i] = gt_text[i].lower(), pt_text[i].lower() - distance = ed.eval(gt_text[i], pt_text[i]) - self.total_ed += distance - self.total_ned += float(distance) / max(len(gt_text[i]), 1) - - if gt_text[i] == pt_text[i]: - self.correct_num_word += 1 - self.total_num_word += 1 - - for j in range(min(len(gt_text[i]), len(pt_text[i]))): - if gt_text[i][j] == pt_text[i][j]: - self.correct_num_char += 1 - self.total_num_char += len(gt_text[i]) - - return {'last_output': last_output} - - def on_epoch_end(self, last_metrics, **kwargs): - mets = [self.correct_num_char / self.total_num_char, - self.correct_num_word / self.total_num_word, - self.total_ed, - self.total_ned, - self.total_ed / self.total_num_word] - return add_metrics(last_metrics, mets) - - def decode(self, logit): - """ Greed decode """ - # TODO: test running time and decode on GPU - out = F.softmax(logit, dim=2) - pt_text, pt_scores, pt_lengths = [], [], [] - for o in out: - text = self.charset.get_text(o.argmax(dim=1), padding=False, trim=False) - text = text.split(self.charset.null_char)[0] # end at end-token - pt_text.append(text) - pt_scores.append(o.max(dim=1)[0]) - pt_lengths.append(min(len(text) + 1, self.max_length)) # one for end-token - pt_scores = torch.stack(pt_scores) - pt_lengths = pt_scores.new_tensor(pt_lengths, dtype=torch.long) - return pt_text, pt_scores, pt_lengths - - -class TopKTextAccuracy(TextAccuracy): - _names = ['ccr', 'cwr'] - def __init__(self, k, charset_path, max_length, case_sensitive, model_eval): - self.k = k - self.charset_path = charset_path - self.max_length = max_length - self.case_sensitive = case_sensitive - self.charset = CharsetMapper(charset_path, self.max_length) - self.names = self._names - - def on_epoch_begin(self, **kwargs): - self.total_num_char = 0. - self.total_num_word = 0. - self.correct_num_char = 0. - self.correct_num_word = 0. 
- - def on_batch_end(self, last_output, last_target, **kwargs): - logits, pt_lengths = last_output['logits'], last_output['pt_lengths'] - gt_labels, gt_lengths = last_target[:] - - for logit, pt_length, label, length in zip(logits, pt_lengths, gt_labels, gt_lengths): - word_flag = True - for i in range(length): - char_logit = logit[i].topk(self.k)[1] - char_label = label[i].argmax(-1) - if char_label in char_logit: self.correct_num_char += 1 - else: word_flag = False - self.total_num_char += 1 - if pt_length == length and word_flag: - self.correct_num_word += 1 - self.total_num_word += 1 - - def on_epoch_end(self, last_metrics, **kwargs): - mets = [self.correct_num_char / self.total_num_char, - self.correct_num_word / self.total_num_word, - 0., 0., 0.] - return add_metrics(last_metrics, mets) - - -class DumpPrediction(LearnerCallback): - - def __init__(self, learn, dataset, charset_path, model_eval, image_only=False, debug=False): - super().__init__(learn=learn) - self.debug = debug - self.model_eval = model_eval or 'alignment' - self.image_only = image_only - assert self.model_eval in ['vision', 'language', 'alignment'] - - self.dataset, self.root = dataset, Path(self.learn.path)/f'{dataset}-{self.model_eval}' - self.attn_root = self.root/'attn' - self.charset = CharsetMapper(charset_path) - if self.root.exists(): shutil.rmtree(self.root) - self.root.mkdir(), self.attn_root.mkdir() - - self.pil = transforms.ToPILImage() - self.tensor = transforms.ToTensor() - size = self.learn.data.img_h, self.learn.data.img_w - self.resize = transforms.Resize(size=size, interpolation=0) - self.c = 0 - - def on_batch_end(self, last_input, last_output, last_target, **kwargs): - if isinstance(last_output, (tuple, list)): - for res in last_output: - if res['name'] == self.model_eval: pt_text = res['pt_text'] - if res['name'] == 'vision': attn_scores = res['attn_scores'].detach().cpu() - if res['name'] == self.model_eval: logits = res['logits'] - else: - pt_text = last_output['pt_text'] - attn_scores = last_output['attn_scores'].detach().cpu() - logits = last_output['logits'] - - images = last_input[0] if isinstance(last_input, (tuple, list)) else last_input - images = images.detach().cpu() - pt_text = [self.charset.trim(t) for t in pt_text] - gt_label = last_target[0] - if gt_label.dim() == 3: gt_label = gt_label.argmax(dim=-1) # one-hot label - gt_text = [self.charset.get_text(l, trim=True) for l in gt_label] - - prediction, false_prediction = [], [] - for gt, pt, image, attn, logit in zip(gt_text, pt_text, images, attn_scores, logits): - prediction.append(f'{gt}\t{pt}\n') - if gt != pt: - if self.debug: - scores = torch.softmax(logit, dim=-1)[:max(len(pt), len(gt)) + 1] - logging.info(f'{self.c} gt {gt}, pt {pt}, logit {logit.shape}, scores {scores.topk(5, dim=-1)}') - false_prediction.append(f'{gt}\t{pt}\n') - - image = self.learn.data.denorm(image) - if not self.image_only: - image_np = np.array(self.pil(image)) - attn_pil = [self.pil(a) for a in attn[:, None, :, :]] - attn = [self.tensor(self.resize(a)).repeat(3, 1, 1) for a in attn_pil] - attn_sum = np.array([np.array(a) for a in attn_pil[:len(pt)]]).sum(axis=0) - blended_sum = self.tensor(blend_mask(image_np, attn_sum)) - blended = [self.tensor(blend_mask(image_np, np.array(a))) for a in attn_pil] - save_image = torch.stack([image] + attn + [blended_sum] + blended) - save_image = save_image.view(2, -1, *save_image.shape[1:]) - save_image = save_image.permute(1, 0, 2, 3, 4).flatten(0, 1) - vutils.save_image(save_image, 
self.attn_root/f'{self.c}_{gt}_{pt}.jpg', - nrow=2, normalize=True, scale_each=True) - else: - self.pil(image).save(self.attn_root/f'{self.c}_{gt}_{pt}.jpg') - self.c += 1 - - with open(self.root/f'{self.model_eval}.txt', 'a') as f: f.writelines(prediction) - with open(self.root/f'{self.model_eval}-false.txt', 'a') as f: f.writelines(false_prediction) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/transforms/augmentation.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/transforms/augmentation.py deleted file mode 100644 index 770348a1640e870b13086cd70eb80df30650f84e..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/transforms/augmentation.py +++ /dev/null @@ -1,380 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import inspect -import numpy as np -import pprint -from typing import Any, List, Optional, Tuple, Union -from fvcore.transforms.transform import Transform, TransformList - -""" -See "Data Augmentation" tutorial for an overview of the system: -https://detectron2.readthedocs.io/tutorials/augmentation.html -""" - - -__all__ = [ - "Augmentation", - "AugmentationList", - "AugInput", - "TransformGen", - "apply_transform_gens", - "StandardAugInput", - "apply_augmentations", -] - - -def _check_img_dtype(img): - assert isinstance(img, np.ndarray), "[Augmentation] Needs an numpy array, but got a {}!".format( - type(img) - ) - assert not isinstance(img.dtype, np.integer) or ( - img.dtype == np.uint8 - ), "[Augmentation] Got image of type {}, use uint8 or floating points instead!".format( - img.dtype - ) - assert img.ndim in [2, 3], img.ndim - - -def _get_aug_input_args(aug, aug_input) -> List[Any]: - """ - Get the arguments to be passed to ``aug.get_transform`` from the input ``aug_input``. - """ - if aug.input_args is None: - # Decide what attributes are needed automatically - prms = list(inspect.signature(aug.get_transform).parameters.items()) - # The default behavior is: if there is one parameter, then its "image" - # (work automatically for majority of use cases, and also avoid BC breaking), - # Otherwise, use the argument names. - if len(prms) == 1: - names = ("image",) - else: - names = [] - for name, prm in prms: - if prm.kind in ( - inspect.Parameter.VAR_POSITIONAL, - inspect.Parameter.VAR_KEYWORD, - ): - raise TypeError( - f""" \ -The default implementation of `{type(aug)}.__call__` does not allow \ -`{type(aug)}.get_transform` to use variable-length arguments (*args, **kwargs)! \ -If arguments are unknown, reimplement `__call__` instead. \ -""" - ) - names.append(name) - aug.input_args = tuple(names) - - args = [] - for f in aug.input_args: - try: - args.append(getattr(aug_input, f)) - except AttributeError as e: - raise AttributeError( - f"{type(aug)}.get_transform needs input attribute '{f}', " - f"but it is not an attribute of {type(aug_input)}!" - ) from e - return args - - -class Augmentation: - """ - Augmentation defines (often random) policies/strategies to generate :class:`Transform` - from data. It is often used for pre-processing of input data. - - A "policy" that generates a :class:`Transform` may, in the most general case, - need arbitrary information from input data in order to determine what transforms - to apply. Therefore, each :class:`Augmentation` instance defines the arguments - needed by its :meth:`get_transform` method. 
When called with the positional arguments, - the :meth:`get_transform` method executes the policy. - - Note that :class:`Augmentation` defines the policies to create a :class:`Transform`, - but not how to execute the actual transform operations to those data. - Its :meth:`__call__` method will use :meth:`AugInput.transform` to execute the transform. - - The returned `Transform` object is meant to describe deterministic transformation, which means - it can be re-applied on associated data, e.g. the geometry of an image and its segmentation - masks need to be transformed together. - (If such re-application is not needed, then determinism is not a crucial requirement.) - """ - - input_args: Optional[Tuple[str]] = None - """ - Stores the attribute names needed by :meth:`get_transform`, e.g. ``("image", "sem_seg")``. - By default, it is just a tuple of argument names in :meth:`self.get_transform`, which often only - contain "image". As long as the argument name convention is followed, there is no need for - users to touch this attribute. - """ - - def _init(self, params=None): - if params: - for k, v in params.items(): - if k != "self" and not k.startswith("_"): - setattr(self, k, v) - - def get_transform(self, *args) -> Transform: - """ - Execute the policy based on input data, and decide what transform to apply to inputs. - - Args: - args: Any fixed-length positional arguments. By default, the name of the arguments - should exist in the :class:`AugInput` to be used. - - Returns: - Transform: Returns the deterministic transform to apply to the input. - - Examples: - :: - class MyAug: - # if a policy needs to know both image and semantic segmentation - def get_transform(image, sem_seg) -> T.Transform: - pass - tfm: Transform = MyAug().get_transform(image, sem_seg) - new_image = tfm.apply_image(image) - - Notes: - Users can freely use arbitrary new argument names in custom - :meth:`get_transform` method, as long as they are available in the - input data. In detectron2 we use the following convention: - - * image: (H,W) or (H,W,C) ndarray of type uint8 in range [0, 255], or - floating point in range [0, 1] or [0, 255]. - * boxes: (N,4) ndarray of float32. It represents the instance bounding boxes - of N instances. Each is in XYXY format in unit of absolute coordinates. - * sem_seg: (H,W) ndarray of type uint8. Each element is an integer label of pixel. - - We do not specify convention for other types and do not include builtin - :class:`Augmentation` that uses other types in detectron2. - """ - raise NotImplementedError - - def __call__(self, aug_input) -> Transform: - """ - Augment the given `aug_input` **in-place**, and return the transform that's used. - - This method will be called to apply the augmentation. In most augmentation, it - is enough to use the default implementation, which calls :meth:`get_transform` - using the inputs. But a subclass can overwrite it to have more complicated logic. - - Args: - aug_input (AugInput): an object that has attributes needed by this augmentation - (defined by ``self.get_transform``). Its ``transform`` method will be called - to in-place transform it. - - Returns: - Transform: the transform that is applied on the input. - """ - args = _get_aug_input_args(self, aug_input) - tfm = self.get_transform(*args) - assert isinstance(tfm, (Transform, TransformList)), ( - f"{type(self)}.get_transform must return an instance of Transform! " - f"Got {type(tfm)} instead." 
- ) - aug_input.transform(tfm) - return tfm - - def _rand_range(self, low=1.0, high=None, size=None): - """ - Uniform float random number between low and high. - """ - if high is None: - low, high = 0, low - if size is None: - size = [] - return np.random.uniform(low, high, size) - - def __repr__(self): - """ - Produce something like: - "MyAugmentation(field1={self.field1}, field2={self.field2})" - """ - try: - sig = inspect.signature(self.__init__) - classname = type(self).__name__ - argstr = [] - for name, param in sig.parameters.items(): - assert ( - param.kind != param.VAR_POSITIONAL and param.kind != param.VAR_KEYWORD - ), "The default __repr__ doesn't support *args or **kwargs" - assert hasattr(self, name), ( - "Attribute {} not found! " - "Default __repr__ only works if attributes match the constructor.".format(name) - ) - attr = getattr(self, name) - default = param.default - if default is attr: - continue - attr_str = pprint.pformat(attr) - if "\n" in attr_str: - # don't show it if pformat decides to use >1 lines - attr_str = "..." - argstr.append("{}={}".format(name, attr_str)) - return "{}({})".format(classname, ", ".join(argstr)) - except AssertionError: - return super().__repr__() - - __str__ = __repr__ - - -class _TransformToAug(Augmentation): - def __init__(self, tfm: Transform): - self.tfm = tfm - - def get_transform(self, *args): - return self.tfm - - def __repr__(self): - return repr(self.tfm) - - __str__ = __repr__ - - -def _transform_to_aug(tfm_or_aug): - """ - Wrap Transform into Augmentation. - Private, used internally to implement augmentations. - """ - assert isinstance(tfm_or_aug, (Transform, Augmentation)), tfm_or_aug - if isinstance(tfm_or_aug, Augmentation): - return tfm_or_aug - else: - return _TransformToAug(tfm_or_aug) - - -class AugmentationList(Augmentation): - """ - Apply a sequence of augmentations. - - It has ``__call__`` method to apply the augmentations. - - Note that :meth:`get_transform` method is impossible (will throw error if called) - for :class:`AugmentationList`, because in order to apply a sequence of augmentations, - the kth augmentation must be applied first, to provide inputs needed by the (k+1)th - augmentation. - """ - - def __init__(self, augs): - """ - Args: - augs (list[Augmentation or Transform]): - """ - super().__init__() - self.augs = [_transform_to_aug(x) for x in augs] - - def __call__(self, aug_input) -> Transform: - tfms = [] - for x in self.augs: - tfm = x(aug_input) - tfms.append(tfm) - return TransformList(tfms) - - def __repr__(self): - msgs = [str(x) for x in self.augs] - return "AugmentationList[{}]".format(", ".join(msgs)) - - __str__ = __repr__ - - -class AugInput: - """ - Input that can be used with :meth:`Augmentation.__call__`. - This is a standard implementation for the majority of use cases. - This class provides the standard attributes **"image", "boxes", "sem_seg"** - defined in :meth:`__init__` and they may be needed by different augmentations. - Most augmentation policies do not need attributes beyond these three. - - After applying augmentations to these attributes (using :meth:`AugInput.transform`), - the returned transforms can then be used to transform other data structures that users have. 
- - Examples: - :: - input = AugInput(image, boxes=boxes) - tfms = augmentation(input) - transformed_image = input.image - transformed_boxes = input.boxes - transformed_other_data = tfms.apply_other(other_data) - - An extended project that works with new data types may implement augmentation policies - that need other inputs. An algorithm may need to transform inputs in a way different - from the standard approach defined in this class. In those rare situations, users can - implement a class similar to this class, that satify the following condition: - - * The input must provide access to these data in the form of attribute access - (``getattr``). For example, if an :class:`Augmentation` to be applied needs "image" - and "sem_seg" arguments, its input must have the attribute "image" and "sem_seg". - * The input must have a ``transform(tfm: Transform) -> None`` method which - in-place transforms all its attributes. - """ - - # TODO maybe should support more builtin data types here - def __init__( - self, - image: np.ndarray, - *, - boxes: Optional[np.ndarray] = None, - sem_seg: Optional[np.ndarray] = None, - ): - """ - Args: - image (ndarray): (H,W) or (H,W,C) ndarray of type uint8 in range [0, 255], or - floating point in range [0, 1] or [0, 255]. The meaning of C is up - to users. - boxes (ndarray or None): Nx4 float32 boxes in XYXY_ABS mode - sem_seg (ndarray or None): HxW uint8 semantic segmentation mask. Each element - is an integer label of pixel. - """ - _check_img_dtype(image) - self.image = image - self.boxes = boxes - self.sem_seg = sem_seg - - def transform(self, tfm: Transform) -> None: - """ - In-place transform all attributes of this class. - - By "in-place", it means after calling this method, accessing an attribute such - as ``self.image`` will return transformed data. - """ - self.image = tfm.apply_image(self.image) - if self.boxes is not None: - self.boxes = tfm.apply_box(self.boxes) - if self.sem_seg is not None: - self.sem_seg = tfm.apply_segmentation(self.sem_seg) - - def apply_augmentations( - self, augmentations: List[Union[Augmentation, Transform]] - ) -> TransformList: - """ - Equivalent of ``AugmentationList(augmentations)(self)`` - """ - return AugmentationList(augmentations)(self) - - -def apply_augmentations(augmentations: List[Union[Transform, Augmentation]], inputs): - """ - Use ``T.AugmentationList(augmentations)(inputs)`` instead. - """ - if isinstance(inputs, np.ndarray): - # handle the common case of image-only Augmentation, also for backward compatibility - image_only = True - inputs = AugInput(inputs) - else: - image_only = False - tfms = inputs.apply_augmentations(augmentations) - return inputs.image if image_only else inputs, tfms - - -apply_transform_gens = apply_augmentations -""" -Alias for backward-compatibility. -""" - -TransformGen = Augmentation -""" -Alias for Augmentation, since it is something that generates :class:`Transform`s -""" - -StandardAugInput = AugInput -""" -Alias for compatibility. It's not worth the complexity to have two classes. -""" diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/deform_conv.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/deform_conv.py deleted file mode 100644 index dffb720c2a8d10d9273752dbdd291a3714f91338..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/deform_conv.py +++ /dev/null @@ -1,514 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
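# Editor's sketch (not part of the original source): DeformConv defined below expects the
# caller to supply the sampling offsets explicitly. The offset tensor must carry a 2-D
# displacement for every kernel sampling location, i.e. have
# 2 * deformable_groups * kernel_h * kernel_w channels, and is usually predicted by a
# plain convolution running on the same input. Names such as offset_conv, in_ch, out_ch
# and dg below are hypothetical:
#
#   offset_conv = nn.Conv2d(in_ch, 2 * dg * 3 * 3, kernel_size=3, padding=1)
#   deform = DeformConv(in_ch, out_ch, kernel_size=3, padding=1, deformable_groups=dg)
#   out = deform(x, offset_conv(x))
#
# ModulatedDeformConv additionally takes a mask with deformable_groups * kernel_h * kernel_w
# channels, typically passed through a sigmoid before the call.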
-import math -from functools import lru_cache -import torch -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair -from torchvision.ops import deform_conv2d - -from detectron2.utils.develop import create_dummy_class, create_dummy_func - -from .wrappers import _NewEmptyTensorOp - - -class _DeformConv(Function): - @staticmethod - def forward( - ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64, - ): - if input is not None and input.dim() != 4: - raise ValueError( - "Expected 4D tensor as input, got {}D tensor instead.".format(input.dim()) - ) - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - _DeformConv._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride) - ) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - # TODO: let torchvision support full features of our deformconv. - if deformable_groups != 1: - raise NotImplementedError( - "Deformable Conv with deformable_groups != 1 is not supported on CPUs!" - ) - return deform_conv2d( - input, offset, weight, stride=stride, padding=padding, dilation=dilation - ) - else: - cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step) - assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize" - - _C.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - cur_im2col_step, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - else: - cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step) - assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize" - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - _C.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - cur_im2col_step, - ) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - _C.deform_conv_backward_filter( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - weight.size(3), - weight.size(2), - ctx.stride[1], - ctx.stride[0], - ctx.padding[1], - ctx.padding[0], - ctx.dilation[1], - ctx.dilation[0], - ctx.groups, - ctx.deformable_groups, - 1, - cur_im2col_step, - ) - - return grad_input, grad_offset, grad_weight, None, None, None, None, None, None - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in 
range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1,) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - "convolution input is too small (output would be {})".format( - "x".join(map(str, output_size)) - ) - ) - return output_size - - @staticmethod - @lru_cache(maxsize=128) - def _cal_im2col_step(input_size, default_size): - """ - Calculate proper im2col step size, which should be divisible by input_size and not larger - than prefer_size. Meanwhile the step size should be as large as possible to be more - efficient. So we choose the largest one among all divisors of input_size which are smaller - than prefer_size. - :param input_size: input batch size . - :param default_size: default preferred im2col step size. - :return: the largest proper step size. - """ - if input_size <= default_size: - return input_size - best_step = 1 - for step in range(2, min(int(math.sqrt(input_size)) + 1, default_size)): - if input_size % step == 0: - if input_size // step <= default_size: - return input_size // step - best_step = step - - return best_step - - -class _ModulatedDeformConv(Function): - @staticmethod - def forward( - ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - ): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - if ( - weight.requires_grad - or mask.requires_grad - or offset.requires_grad - or input.requires_grad - ): - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(_ModulatedDeformConv._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - _C.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - weight.shape[2], - weight.shape[3], - ctx.stride, - ctx.stride, - ctx.padding, - ctx.padding, - ctx.dilation, - ctx.dilation, - ctx.groups, - ctx.deformable_groups, - ctx.with_bias, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError("Deformable Conv is not supported on CPUs!") - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - _C.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - weight.shape[2], - weight.shape[3], - ctx.stride, - ctx.stride, - ctx.padding, - ctx.padding, - ctx.dilation, - ctx.dilation, - ctx.groups, - ctx.deformable_groups, - ctx.with_bias, - ) - if not ctx.with_bias: - grad_bias = None - - return ( - grad_input, - grad_offset, - grad_mask, - grad_weight, - grad_bias, - None, - None, - None, - None, - None, - ) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, 
kernel_w = weight.shape[2:4] - height_out = ( - height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1) - ) // ctx.stride + 1 - width_out = ( - width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1) - ) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = _DeformConv.apply -modulated_deform_conv = _ModulatedDeformConv.apply - - -class DeformConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False, - norm=None, - activation=None, - ): - """ - Deformable convolution from :paper:`deformconv`. - - Arguments are similar to :class:`Conv2D`. Extra arguments: - - Args: - deformable_groups (int): number of groups used in deformable convolution. - norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - """ - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, "in_channels {} cannot be divisible by groups {}".format( - in_channels, groups - ) - assert ( - out_channels % groups == 0 - ), "out_channels {} cannot be divisible by groups {}".format(out_channels, groups) - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - self.norm = norm - self.activation = activation - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size) - ) - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - - def forward(self, x, offset): - if x.numel() == 0: - # When input is empty, we want to return a empty tensor with "correct" shape, - # So that the following operations will not panic - # if they check for the shape of the tensor. - # This computes the height and width of the output tensor - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride - ) - ] - output_shape = [x.shape[0], self.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) - - x = deform_conv( - x, - offset, - self.weight, - self.stride, - self.padding, - self.dilation, - self.groups, - self.deformable_groups, - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - def extra_repr(self): - tmpstr = "in_channels=" + str(self.in_channels) - tmpstr += ", out_channels=" + str(self.out_channels) - tmpstr += ", kernel_size=" + str(self.kernel_size) - tmpstr += ", stride=" + str(self.stride) - tmpstr += ", padding=" + str(self.padding) - tmpstr += ", dilation=" + str(self.dilation) - tmpstr += ", groups=" + str(self.groups) - tmpstr += ", deformable_groups=" + str(self.deformable_groups) - tmpstr += ", bias=False" - return tmpstr - - -class ModulatedDeformConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True, - norm=None, - activation=None, - ): - """ - Modulated deformable convolution from :paper:`deformconv2`. - - Arguments are similar to :class:`Conv2D`. Extra arguments: - - Args: - deformable_groups (int): number of groups used in deformable convolution. 
- norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - """ - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - self.norm = norm - self.activation = activation - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, *self.kernel_size) - ) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, x, offset, mask): - if x.numel() == 0: - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride - ) - ] - output_shape = [x.shape[0], self.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) - - x = modulated_deform_conv( - x, - offset, - mask, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - self.deformable_groups, - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - def extra_repr(self): - tmpstr = "in_channels=" + str(self.in_channels) - tmpstr += ", out_channels=" + str(self.out_channels) - tmpstr += ", kernel_size=" + str(self.kernel_size) - tmpstr += ", stride=" + str(self.stride) - tmpstr += ", padding=" + str(self.padding) - tmpstr += ", dilation=" + str(self.dilation) - tmpstr += ", groups=" + str(self.groups) - tmpstr += ", deformable_groups=" + str(self.deformable_groups) - tmpstr += ", bias=" + str(self.with_bias) - return tmpstr - - -try: - from detectron2 import _C -except ImportError: - # TODO: register ops natively so there is no need to import _C. - _msg = "detectron2 is not compiled successfully, please build following the instructions!" - _args = ("detectron2._C", _msg) - DeformConv = create_dummy_class("DeformConv", *_args) - ModulatedDeformConv = create_dummy_class("ModulatedDeformConv", *_args) - deform_conv = create_dummy_func("deform_conv", *_args) - modulated_deform_conv = create_dummy_func("modulated_deform_conv", *_args) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/video_visualizer.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/video_visualizer.py deleted file mode 100644 index 1de1d89f5e207afba6d1558290a5a18c1e1f4a07..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/video_visualizer.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import List -import pycocotools.mask as mask_util - -from detectron2.structures import Instances -from detectron2.utils.visualizer import ( - ColorMode, - Visualizer, - _create_text_labels, - _PanopticPrediction, -) - -from .colormap import random_color, random_colors - - -class _DetectedInstance: - """ - Used to store data about detected objects in video frame, - in order to transfer color to objects in the future frames. 
- - Attributes: - label (int): - bbox (tuple[float]): - mask_rle (dict): - color (tuple[float]): RGB colors in range (0, 1) - ttl (int): time-to-live for the instance. For example, if ttl=2, - the instance color can be transferred to objects in the next two frames. - """ - - __slots__ = ["label", "bbox", "mask_rle", "color", "ttl"] - - def __init__(self, label, bbox, mask_rle, color, ttl): - self.label = label - self.bbox = bbox - self.mask_rle = mask_rle - self.color = color - self.ttl = ttl - - -class VideoVisualizer: - def __init__(self, metadata, instance_mode=ColorMode.IMAGE): - """ - Args: - metadata (MetadataCatalog): image metadata. - """ - self.metadata = metadata - self._old_instances = [] - assert instance_mode in [ - ColorMode.IMAGE, - ColorMode.IMAGE_BW, - ], "Other mode not supported yet." - self._instance_mode = instance_mode - self._max_num_instances = self.metadata.get("max_num_instances", 74) - self._assigned_colors = {} - self._color_pool = random_colors(self._max_num_instances, rgb=True, maximum=1) - self._color_idx_set = set(range(len(self._color_pool))) - - def draw_instance_predictions(self, frame, predictions): - """ - Draw instance-level prediction results on an image. - - Args: - frame (ndarray): an RGB image of shape (H, W, C), in the range [0, 255]. - predictions (Instances): the output of an instance detection/segmentation - model. Following fields will be used to draw: - "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle"). - - Returns: - output (VisImage): image object with visualizations. - """ - frame_visualizer = Visualizer(frame, self.metadata) - num_instances = len(predictions) - if num_instances == 0: - return frame_visualizer.output - - boxes = predictions.pred_boxes.tensor.numpy() if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.numpy() if predictions.has("pred_classes") else None - keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None - colors = predictions.COLOR if predictions.has("COLOR") else [None] * len(predictions) - periods = predictions.ID_period if predictions.has("ID_period") else None - period_threshold = self.metadata.get("period_threshold", 0) - visibilities = ( - [True] * len(predictions) - if periods is None - else [x > period_threshold for x in periods] - ) - - if predictions.has("pred_masks"): - masks = predictions.pred_masks - # mask IOU is not yet enabled - # masks_rles = mask_util.encode(np.asarray(masks.permute(1, 2, 0), order="F")) - # assert len(masks_rles) == num_instances - else: - masks = None - - if not predictions.has("COLOR"): - if predictions.has("ID"): - colors = self._assign_colors_by_id(predictions) - else: - # ToDo: clean old assign color method and use a default tracker to assign id - detected = [ - _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=colors[i], ttl=8) - for i in range(num_instances) - ] - colors = self._assign_colors(detected) - - labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None)) - - if self._instance_mode == ColorMode.IMAGE_BW: - # any() returns uint8 tensor - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image( - (masks.any(dim=0) > 0).numpy() if masks is not None else None - ) - ) - alpha = 0.3 - else: - alpha = 0.5 - - labels = ( - None - if labels is None - else [y[0] for y in filter(lambda x: x[1], zip(labels, visibilities))] - ) # noqa - assigned_colors = ( - None - 
if colors is None - else [y[0] for y in filter(lambda x: x[1], zip(colors, visibilities))] - ) # noqa - frame_visualizer.overlay_instances( - boxes=None if masks is not None else boxes[visibilities], # boxes are a bit distracting - masks=None if masks is None else masks[visibilities], - labels=labels, - keypoints=None if keypoints is None else keypoints[visibilities], - assigned_colors=assigned_colors, - alpha=alpha, - ) - - return frame_visualizer.output - - def draw_sem_seg(self, frame, sem_seg, area_threshold=None): - """ - Args: - sem_seg (ndarray or Tensor): semantic segmentation of shape (H, W), - each value is the integer label. - area_threshold (Optional[int]): only draw segmentations larger than the threshold - """ - # don't need to do anything special - frame_visualizer = Visualizer(frame, self.metadata) - frame_visualizer.draw_sem_seg(sem_seg, area_threshold=None) - return frame_visualizer.output - - def draw_panoptic_seg_predictions( - self, frame, panoptic_seg, segments_info, area_threshold=None, alpha=0.5 - ): - frame_visualizer = Visualizer(frame, self.metadata) - pred = _PanopticPrediction(panoptic_seg, segments_info, self.metadata) - - if self._instance_mode == ColorMode.IMAGE_BW: - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image(pred.non_empty_mask()) - ) - - # draw mask for all semantic segments first i.e. "stuff" - for mask, sinfo in pred.semantic_masks(): - category_idx = sinfo["category_id"] - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]] - except AttributeError: - mask_color = None - - frame_visualizer.draw_binary_mask( - mask, - color=mask_color, - text=self.metadata.stuff_classes[category_idx], - alpha=alpha, - area_threshold=area_threshold, - ) - - all_instances = list(pred.instance_masks()) - if len(all_instances) == 0: - return frame_visualizer.output - # draw mask for all instances second - masks, sinfo = list(zip(*all_instances)) - num_instances = len(masks) - masks_rles = mask_util.encode( - np.asarray(np.asarray(masks).transpose(1, 2, 0), dtype=np.uint8, order="F") - ) - assert len(masks_rles) == num_instances - - category_ids = [x["category_id"] for x in sinfo] - detected = [ - _DetectedInstance(category_ids[i], bbox=None, mask_rle=masks_rles[i], color=None, ttl=8) - for i in range(num_instances) - ] - colors = self._assign_colors(detected) - labels = [self.metadata.thing_classes[k] for k in category_ids] - - frame_visualizer.overlay_instances( - boxes=None, - masks=masks, - labels=labels, - keypoints=None, - assigned_colors=colors, - alpha=alpha, - ) - return frame_visualizer.output - - def _assign_colors(self, instances): - """ - Naive tracking heuristics to assign same color to the same instance, - will update the internal state of tracked instances. - - Returns: - list[tuple[float]]: list of colors. 
- """ - - # Compute iou with either boxes or masks: - is_crowd = np.zeros((len(instances),), dtype=np.bool) - if instances[0].bbox is None: - assert instances[0].mask_rle is not None - # use mask iou only when box iou is None - # because box seems good enough - rles_old = [x.mask_rle for x in self._old_instances] - rles_new = [x.mask_rle for x in instances] - ious = mask_util.iou(rles_old, rles_new, is_crowd) - threshold = 0.5 - else: - boxes_old = [x.bbox for x in self._old_instances] - boxes_new = [x.bbox for x in instances] - ious = mask_util.iou(boxes_old, boxes_new, is_crowd) - threshold = 0.6 - if len(ious) == 0: - ious = np.zeros((len(self._old_instances), len(instances)), dtype="float32") - - # Only allow matching instances of the same label: - for old_idx, old in enumerate(self._old_instances): - for new_idx, new in enumerate(instances): - if old.label != new.label: - ious[old_idx, new_idx] = 0 - - matched_new_per_old = np.asarray(ious).argmax(axis=1) - max_iou_per_old = np.asarray(ious).max(axis=1) - - # Try to find match for each old instance: - extra_instances = [] - for idx, inst in enumerate(self._old_instances): - if max_iou_per_old[idx] > threshold: - newidx = matched_new_per_old[idx] - if instances[newidx].color is None: - instances[newidx].color = inst.color - continue - # If an old instance does not match any new instances, - # keep it for the next frame in case it is just missed by the detector - inst.ttl -= 1 - if inst.ttl > 0: - extra_instances.append(inst) - - # Assign random color to newly-detected instances: - for inst in instances: - if inst.color is None: - inst.color = random_color(rgb=True, maximum=1) - self._old_instances = instances[:] + extra_instances - return [d.color for d in instances] - - def _assign_colors_by_id(self, instances: Instances) -> List: - colors = [] - untracked_ids = set(self._assigned_colors.keys()) - for id in instances.ID: - if id in self._assigned_colors: - colors.append(self._color_pool[self._assigned_colors[id]]) - untracked_ids.remove(id) - else: - assert ( - len(self._color_idx_set) >= 1 - ), f"Number of id exceeded maximum, \ - max = {self._max_num_instances}" - idx = self._color_idx_set.pop() - color = self._color_pool[idx] - self._assigned_colors[id] = idx - colors.append(color) - for id in untracked_ids: - self._color_idx_set.add(self._assigned_colors[id]) - del self._assigned_colors[id] - return colors diff --git a/spaces/cc38300/ConstructionGPT-SL/database.py b/spaces/cc38300/ConstructionGPT-SL/database.py deleted file mode 100644 index 4ccbe71a86928488d592808d6be0e810a0bb73f7..0000000000000000000000000000000000000000 --- a/spaces/cc38300/ConstructionGPT-SL/database.py +++ /dev/null @@ -1,82 +0,0 @@ -import pandas as pd -import numpy as np -import openai -from redis import Redis -from redis.commands.search.field import VectorField -from redis.commands.search.field import TextField, NumericField -from redis.commands.search.query import Query - -from config import EMBEDDINGS_MODEL, PREFIX, VECTOR_FIELD_NAME - -# Get a Redis connection -def get_redis_connection(host='localhost',port='6379',db=0): - - r = Redis(host=host, port=port, db=db,decode_responses=False) - return r - -# Create a Redis index to hold our data -def create_hnsw_index (redis_conn,vector_field_name,vector_dimensions=1536, distance_metric='COSINE'): - redis_conn.ft().create_index([ - VectorField(vector_field_name, "HNSW", {"TYPE": "FLOAT32", "DIM": vector_dimensions, "DISTANCE_METRIC": distance_metric}), - TextField("filename"), - TextField("text_chunk"), - 
NumericField("file_chunk_index") - ]) - -# Create a Redis pipeline to load all the vectors and their metadata -def load_vectors(client:Redis, input_list, vector_field_name): - p = client.pipeline(transaction=False) - for text in input_list: - #hash key - key=f"{PREFIX}:{text['id']}" - - #hash values - item_metadata = text['metadata'] - # - item_keywords_vector = np.array(text['vector'],dtype= 'float32').tobytes() - item_metadata[vector_field_name]=item_keywords_vector - - # HSET - p.hset(key,mapping=item_metadata) - - p.execute() - -# Make query to Redis -def query_redis(redis_conn,query,index_name, top_k=2): - - - - ## Creates embedding vector from user query - embedded_query = np.array(openai.Embedding.create( - input=query, - model=EMBEDDINGS_MODEL, - )["data"][0]['embedding'], dtype=np.float32).tobytes() - - #prepare the query - q = Query(f'*=>[KNN {top_k} @{VECTOR_FIELD_NAME} $vec_param AS vector_score]').sort_by('vector_score').paging(0,top_k).return_fields('vector_score','filename','text_chunk','text_chunk_index').dialect(2) - params_dict = {"vec_param": embedded_query} - - - #Execute the query - results = redis_conn.ft(index_name).search(q, query_params = params_dict) - - return results - -# Get mapped documents from Weaviate results -def get_redis_results(redis_conn,query,index_name): - - # Get most relevant documents from Redis - query_result = query_redis(redis_conn,query,index_name) - - # Extract info into a list - query_result_list = [] - for i, result in enumerate(query_result.docs): - result_order = i - text = result.text_chunk - score = result.vector_score - query_result_list.append((result_order,text,score)) - - # Display result as a DataFrame for ease of us - result_df = pd.DataFrame(query_result_list) - result_df.columns = ['id','result','certainty'] - return result_df \ No newline at end of file diff --git a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/vae_model.py b/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/vae_model.py deleted file mode 100644 index 45ea70aadaba8c5274bd82e87be7c9bba5d43b9e..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/vae_model.py +++ /dev/null @@ -1,238 +0,0 @@ -import torch; torch.manual_seed(0) -import torch.nn as nn -import torch.nn.functional as F -import torch.utils -import torch.distributions -import matplotlib.pyplot as plt; plt.rcParams['figure.dpi'] = 200 - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -def get_activation(activation): - if activation == 'tanh': - activ = F.tanh - elif activation == 'relu': - activ = F.relu - elif activation == 'mish': - activ = F.mish - elif activation == 'sigmoid': - activ = F.sigmoid - elif activation == 'leakyrelu': - activ = F.leaky_relu - elif activation == 'exp': - activ = torch.exp - else: - raise ValueError - return activ - -class IngredientEncoder(nn.Module): - def __init__(self, input_dim, deepset_latent_dim, hidden_dims, activation, dropout): - super(IngredientEncoder, self).__init__() - self.linears = nn.ModuleList() - self.dropouts = nn.ModuleList() - dims = [input_dim] + hidden_dims + [deepset_latent_dim] - for d_in, d_out in zip(dims[:-1], dims[1:]): - self.linears.append(nn.Linear(d_in, d_out)) - self.dropouts.append(nn.Dropout(dropout)) - self.activation = get_activation(activation) - self.n_layers = len(self.linears) - self.layer_range = range(self.n_layers) - - def forward(self, x): - for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts): - x = 
layer(x) - if i_layer != self.n_layers - 1: - x = self.activation(dropout(x)) - return x # do not use dropout on last layer? - -class DeepsetCocktailEncoder(nn.Module): - def __init__(self, input_dim, deepset_latent_dim, hidden_dims_ing, activation, - hidden_dims_cocktail, latent_dim, aggregation, dropout): - super(DeepsetCocktailEncoder, self).__init__() - self.input_dim = input_dim # dimension of ingredient representation + quantity - self.ingredient_encoder = IngredientEncoder(input_dim, deepset_latent_dim, hidden_dims_ing, activation, dropout) # encode each ingredient separately - self.deepset_latent_dim = deepset_latent_dim # dimension of the deepset aggregation - self.aggregation = aggregation - self.latent_dim = latent_dim - # post aggregation network - self.linears = nn.ModuleList() - self.dropouts = nn.ModuleList() - dims = [deepset_latent_dim] + hidden_dims_cocktail - for d_in, d_out in zip(dims[:-1], dims[1:]): - self.linears.append(nn.Linear(d_in, d_out)) - self.dropouts.append(nn.Dropout(dropout)) - self.FC_mean = nn.Linear(hidden_dims_cocktail[-1], latent_dim) - self.FC_logvar = nn.Linear(hidden_dims_cocktail[-1], latent_dim) - self.softplus = nn.Softplus() - - self.activation = get_activation(activation) - self.n_layers = len(self.linears) - self.layer_range = range(self.n_layers) - - def forward(self, nb_ingredients, x): - - # reshape x in (batch size * nb ingredients, dim_ing_rep) - batch_size = x.shape[0] - all_ingredients = [] - for i in range(batch_size): - for j in range(nb_ingredients[i]): - all_ingredients.append(x[i, self.input_dim * j: self.input_dim * (j + 1)].reshape(1, -1)) - x = torch.cat(all_ingredients, dim=0) - # encode ingredients in parallel - ingredients_encodings = self.ingredient_encoder(x) - assert ingredients_encodings.shape == (torch.sum(nb_ingredients), self.deepset_latent_dim) - - # aggregate - x = [] - index_first = 0 - for i in range(batch_size): - index_last = index_first + nb_ingredients[i] - # aggregate - if self.aggregation == 'sum': - x.append(torch.sum(ingredients_encodings[index_first:index_last], dim=0).reshape(1, -1)) - elif self.aggregation == 'mean': - x.append(torch.mean(ingredients_encodings[index_first:index_last], dim=0).reshape(1, -1)) - else: - raise ValueError - index_first = index_last - x = torch.cat(x, dim=0) - assert x.shape[0] == batch_size - - for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts): - x = self.activation(dropout(layer(x))) - mean = self.FC_mean(x) - logvar = self.FC_logvar(x) - return mean, logvar - -class Decoder(nn.Module): - def __init__(self, latent_dim, hidden_dims, num_ingredients, activation, dropout, filter_output=None): - super(Decoder, self).__init__() - self.linears = nn.ModuleList() - self.dropouts = nn.ModuleList() - dims = [latent_dim] + hidden_dims + [num_ingredients] - for d_in, d_out in zip(dims[:-1], dims[1:]): - self.linears.append(nn.Linear(d_in, d_out)) - self.dropouts.append(nn.Dropout(dropout)) - self.activation = get_activation(activation) - self.n_layers = len(self.linears) - self.layer_range = range(self.n_layers) - self.filter = filter_output - - def forward(self, x, to_filter=False): - for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts): - x = layer(x) - if i_layer != self.n_layers - 1: - x = self.activation(dropout(x)) - if to_filter: - x = self.filter(x) - return x - -class PredictorHead(nn.Module): - def __init__(self, latent_dim, dim_output, final_activ): - super(PredictorHead, self).__init__() - self.linear = 
nn.Linear(latent_dim, dim_output) - if final_activ != None: - self.final_activ = get_activation(final_activ) - self.use_final_activ = True - else: - self.use_final_activ = False - - def forward(self, x): - x = self.linear(x) - if self.use_final_activ: x = self.final_activ(x) - return x - - -class VAEModel(nn.Module): - def __init__(self, encoder, decoder, auxiliaries_dict): - super(VAEModel, self).__init__() - self.encoder = encoder - self.decoder = decoder - self.latent_dim = self.encoder.latent_dim - self.auxiliaries_str = [] - self.auxiliaries = nn.ModuleList() - for aux_str in sorted(auxiliaries_dict.keys()): - if aux_str == 'taste_reps': - self.taste_reps_decoder = PredictorHead(self.latent_dim, auxiliaries_dict[aux_str]['dim_output'], auxiliaries_dict[aux_str]['final_activ']) - else: - self.auxiliaries_str.append(aux_str) - self.auxiliaries.append(PredictorHead(self.latent_dim, auxiliaries_dict[aux_str]['dim_output'], auxiliaries_dict[aux_str]['final_activ'])) - - def reparameterization(self, mean, logvar): - std = torch.exp(0.5 * logvar) - epsilon = torch.randn_like(std).to(device) # sampling epsilon - z = mean + std * epsilon # reparameterization trick - return z - - - def sample(self, n=1): - z = torch.randn(size=(n, self.latent_dim)) - return self.decoder(z) - - def get_all_auxiliaries(self, x): - return [aux(x) for aux in self.auxiliaries] - - def get_auxiliary(self, z, aux_str): - if aux_str == 'taste_reps': - return self.taste_reps_decoder(z) - else: - index = self.auxiliaries_str.index(aux_str) - return self.auxiliaries[index](z) - - def forward_direct(self, x, aux_str=None, to_filter=False): - mean, logvar = self.encoder(x) - z = self.reparameterization(mean, logvar) # takes exponential function (log var -> std) - x_hat = self.decoder(mean, to_filter=to_filter) - if aux_str is not None: - return x_hat, z, mean, logvar, self.get_auxiliary(z, aux_str), [aux_str] - else: - return x_hat, z, mean, logvar, self.get_all_auxiliaries(z), self.auxiliaries_str - - def forward(self, nb_ingredients, x, aux_str=None, to_filter=False): - assert False - mean, std = self.encoder(nb_ingredients, x) - z = self.reparameterization(mean, std) # takes exponential function (log var -> std) - x_hat = self.decoder(mean, to_filter=to_filter) - if aux_str is not None: - return x_hat, z, mean, std, self.get_auxiliary(z, aux_str), [aux_str] - else: - return x_hat, z, mean, std, self.get_all_auxiliaries(z), self.auxiliaries_str - - - - -class SimpleEncoder(nn.Module): - - def __init__(self, input_dim, hidden_dims, latent_dim, activation, dropout): - super(SimpleEncoder, self).__init__() - self.latent_dim = latent_dim - # post aggregation network - self.linears = nn.ModuleList() - self.dropouts = nn.ModuleList() - dims = [input_dim] + hidden_dims - for d_in, d_out in zip(dims[:-1], dims[1:]): - self.linears.append(nn.Linear(d_in, d_out)) - self.dropouts.append(nn.Dropout(dropout)) - self.FC_mean = nn.Linear(hidden_dims[-1], latent_dim) - self.FC_logvar = nn.Linear(hidden_dims[-1], latent_dim) - # self.softplus = nn.Softplus() - - self.activation = get_activation(activation) - self.n_layers = len(self.linears) - self.layer_range = range(self.n_layers) - - def forward(self, x): - for i_layer, layer, dropout in zip(self.layer_range, self.linears, self.dropouts): - x = self.activation(dropout(layer(x))) - mean = self.FC_mean(x) - logvar = self.FC_logvar(x) - return mean, logvar - -def get_vae_model(input_dim, deepset_latent_dim, hidden_dims_ing, activation, - hidden_dims_cocktail, hidden_dims_decoder, 
num_ingredients, latent_dim, aggregation, dropout, auxiliaries_dict, - filter_decoder_output): - # encoder = DeepsetCocktailEncoder(input_dim, deepset_latent_dim, hidden_dims_ing, activation, - # hidden_dims_cocktail, latent_dim, aggregation, dropout) - encoder = SimpleEncoder(num_ingredients, hidden_dims_cocktail, latent_dim, activation, dropout) - decoder = Decoder(latent_dim, hidden_dims_decoder, num_ingredients, activation, dropout, filter_output=filter_decoder_output) - vae = VAEModel(encoder, decoder, auxiliaries_dict) - return vae \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-cpu/Dockerfile b/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-cpu/Dockerfile deleted file mode 100644 index d1759d650b84fd8de3ad4b048cf45c30405ea3b7..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-cpu/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM ubuntu:18.04 -LABEL maintainer="Hugging Face" -LABEL repository="transformers" - -RUN apt update && \ - apt install -y bash \ - build-essential \ - git \ - curl \ - ca-certificates \ - python3 \ - python3-pip && \ - rm -rf /var/lib/apt/lists - -RUN python3 -m pip install --no-cache-dir --upgrade pip && \ - python3 -m pip install --no-cache-dir \ - jupyter \ - torch - -WORKDIR /workspace -COPY . transformers/ -RUN cd transformers/ && \ - python3 -m pip install --no-cache-dir . - -CMD ["/bin/bash"] \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/README.md deleted file mode 100644 index 2ec1aaebbb04fb80c35cb92586846ac434dd8469..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# LXMERT DEMO - -1. make a virtualenv: ``virtualenv venv`` and activate ``source venv/bin/activate`` -2. install reqs: ``pip install -r ./requirements.txt`` -3. usage is as shown in demo.ipynb diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/README.md deleted file mode 100644 index 930e5b8fc983983c622e0056b64851007782f23d..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/README.md +++ /dev/null @@ -1,434 +0,0 @@ -## Sequence to Sequence Training and Evaluation - -This directory contains examples for finetuning and evaluating transformers on summarization and translation tasks. - -Author: Sam Shleifer (https://github.com/sshleifer) - -### Supported Architectures - -- `BartForConditionalGeneration` (and anything that inherits from it) -- `MarianMTModel` -- `PegasusForConditionalGeneration` -- `MBartForConditionalGeneration` -- `FSMTForConditionalGeneration` -- `T5ForConditionalGeneration` - -# Note - -⚠️ This project should be run with pytorch-lightning==1.0.4 which has a potential security vulnerability - -## Datasets - -#### XSUM - -```bash -cd examples/contrib/pytorch-lightning/seq2seq -wget https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz -tar -xzvf xsum.tar.gz -export XSUM_DIR=${PWD}/xsum -``` -this should make a directory called `xsum/` with files like `test.source`. 
-To use your own data, copy that files format. Each article to be summarized is on its own line. - -#### CNN/DailyMail - -```bash -cd examples/contrib/pytorch-lightning/seq2seq -wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz -tar -xzvf cnn_dm_v2.tgz # empty lines removed -mv cnn_cln cnn_dm -export CNN_DIR=${PWD}/cnn_dm -``` -this should make a directory called `cnn_dm/` with 6 files. - -#### WMT16 English-Romanian Translation Data - -download with this command: -```bash -wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz -tar -xzvf wmt_en_ro.tar.gz -export ENRO_DIR=${PWD}/wmt_en_ro -``` -this should make a directory called `wmt_en_ro/` with 6 files. - -#### WMT English-German - -```bash -wget https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz -tar -xzvf wmt_en_de.tgz -export DATA_DIR=${PWD}/wmt_en_de -``` - -#### FSMT datasets (wmt) - -Refer to the scripts starting with `eval_` under: -https://github.com/huggingface/transformers/tree/main/scripts/fsmt - -#### Pegasus (multiple datasets) - -Multiple eval datasets are available for download from: -https://github.com/stas00/porting/tree/master/datasets/pegasus - - -#### Your Data - -If you are using your own data, it must be formatted as one directory with 6 files: -``` -train.source -train.target -val.source -val.target -test.source -test.target -``` -The `.source` files are the input, the `.target` files are the desired output. - -### Potential issues - -- native AMP (`--fp16` and no apex) may lead to a huge memory leak and require 10x gpu memory. This has been fixed in pytorch-nightly and the minimal official version to have this fix will be pytorch-1.8. Until then if you have to use mixed precision please use AMP only with pytorch-nightly or NVIDIA's apex. Reference: https://github.com/huggingface/transformers/issues/8403 - - -### Tips and Tricks - -General Tips: -- since you need to run from this folder, and likely need to modify code, the easiest workflow is fork transformers, clone your fork, and run `pip install -e .` before you get started. -- try `--freeze_encoder` or `--freeze_embeds` for faster training/larger batch size. (3hr per epoch with bs=8, see the "xsum_shared_task" command below) -- `fp16_opt_level=O1` (the default works best). -- In addition to the pytorch-lightning .ckpt checkpoint, a transformers checkpoint will be saved. -Load it with `BartForConditionalGeneration.from_pretrained(f'{output_dir}/best_tfmr)`. -- At the moment, `--do_predict` does not work in a multi-gpu setting. You need to use `evaluate_checkpoint` or the `run_eval.py` code. -- This warning can be safely ignored: - > "Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large-xsum and are newly initialized: ['final_logits_bias']" -- Both finetuning and eval are 30% faster with `--fp16`. For that you need to [install apex](https://github.com/NVIDIA/apex#quick-start). -- Read scripts before you run them! - -Summarization Tips: -- (summ) 1 epoch at batch size 1 for bart-large takes 24 hours and requires 13GB GPU RAM with fp16 on an NVIDIA-V100. -- If you want to run experiments on improving the summarization finetuning process, try the XSUM Shared Task (below). It's faster to train than CNNDM because the summaries are shorter. -- For CNN/DailyMail, the default `val_max_target_length` and `test_max_target_length` will truncate the ground truth labels, resulting in slightly higher rouge scores. 
To get accurate rouge scores, you should rerun calculate_rouge on the `{output_dir}/test_generations.txt` file saved by `trainer.test()` -- `--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 ` is a reasonable setting for XSUM. -- `wandb` can be used by specifying `--logger_name wandb`. It is useful for reproducibility. Specify the environment variable `WANDB_PROJECT='hf_xsum'` to do the XSUM shared task. -- If you are finetuning on your own dataset, start from `distilbart-cnn-12-6` if you want long summaries and `distilbart-xsum-12-6` if you want short summaries. -(It rarely makes sense to start from `bart-large` unless you are a researching finetuning methods). - -**Update 2018-07-18** -Datasets: `LegacySeq2SeqDataset` will be used for all tokenizers without a `prepare_seq2seq_batch` method. Otherwise, `Seq2SeqDataset` will be used. -Future work/help wanted: A new dataset to support multilingual tasks. - - -### Finetuning Scripts -All finetuning bash scripts call finetune.py (or distillation.py) with reasonable command line arguments. They usually require extra command line arguments to work. - -To see all the possible command line options, run: - -```bash -./finetune.py --help -``` - -### Finetuning Training Params - -To override the pretrained model's training params, you can pass them to `./finetune.sh`: - -```bash -./finetune.sh \ - [...] - --encoder_layerdrop 0.1 \ - --decoder_layerdrop 0.1 \ - --dropout 0.1 \ - --attention_dropout 0.1 \ -``` - -### Summarization Finetuning -Run/modify `finetune.sh` - -The following command should work on a 16GB GPU: -```bash -./finetune.sh \ - --data_dir $XSUM_DIR \ - --train_batch_size=1 \ - --eval_batch_size=1 \ - --output_dir=xsum_results \ - --num_train_epochs 6 \ - --model_name_or_path facebook/bart-large -``` - -There is a starter finetuning script for pegasus at `finetune_pegasus_xsum.sh`. - -### Translation Finetuning - -First, follow the wmt_en_ro download instructions. -Then you can finetune mbart_cc25 on english-romanian with the following command. -**Recommendation:** Read and potentially modify the fairly opinionated defaults in `train_mbart_cc25_enro.sh` script before running it. - -Best performing command: -```bash -# optionally -export ENRO_DIR='wmt_en_ro' # Download instructions above -# export WANDB_PROJECT="MT" # optional -export MAX_LEN=128 -export BS=4 -./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --label_smoothing 0.1 --fp16_opt_level=O1 --logger_name wandb --sortish_sampler -``` -This should take < 6h/epoch on a 16GB v100 and achieve test BLEU above 26 -To get results in line with fairseq, you need to do some postprocessing. (see `romanian_postprocessing.md`) - -MultiGPU command -(using 8 GPUS as an example) -```bash -export ENRO_DIR='wmt_en_ro' # Download instructions above - # export WANDB_PROJECT="MT" # optional -export MAX_LEN=128 -export BS=4 -./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --gpus 8 --logger_name wandb -``` -### Finetuning Outputs -As you train, `output_dir` will be filled with files, that look kind of like this (comments are mine). -Some of them are metrics, some of them are checkpoints, some of them are metadata. Here is a quick tour: - -```bash -output_dir -├── best_tfmr # this is a huggingface checkpoint generated by save_pretrained. 
It is the same model as the PL .ckpt file below -│ ├── config.json -│ ├── merges.txt -│ ├── pytorch_model.bin -│ ├── special_tokens_map.json -│ ├── tokenizer_config.json -│ └── vocab.json -├── git_log.json # repo, branch, and commit hash -├── val_avg_rouge2=0.1984-step_count=11.ckpt # this is a pytorch lightning checkpoint associated with the best val score. (it will be called BLEU for MT) -├── metrics.json # new validation metrics will continually be appended to this -├── student # this is a huggingface checkpoint generated by SummarizationDistiller. It is the student before it gets finetuned. -│ ├── config.json -│ └── pytorch_model.bin -├── test_generations.txt -# ^^ are the summaries or translations produced by your best checkpoint on the test data. Populated when training is done -├── test_results.txt # a convenience file with the test set metrics. This data is also in metrics.json['test'] -├── hparams.pkl # the command line args passed after some light preprocessing. Should be saved fairly quickly. -``` -After training, you can recover the best checkpoint by running -```python -from transformers import AutoModelForSeq2SeqLM -model = AutoModelForSeq2SeqLM.from_pretrained(f'{output_dir}/best_tfmr') -``` - -### Converting pytorch-lightning checkpoints -pytorch lightning ``-do_predict`` often fails, after you are done training, the best way to evaluate your model is to convert it. - -This should be done for you, with a file called `{save_dir}/best_tfmr`. - -If that file doesn't exist but you have a lightning `.ckpt` file, you can run -```bash -python convert_pl_checkpoint_to_hf.py PATH_TO_CKPT randomly_initialized_hf_model_path save_dir/best_tfmr -``` -Then either `run_eval` or `run_distributed_eval` with `save_dir/best_tfmr` (see previous sections) - - -# Experimental Features -These features are harder to use and not always useful. - -### Dynamic Batch Size for MT -`finetune.py` has a command line arg `--max_tokens_per_batch` that allows batches to be dynamically sized. -This feature can only be used: -- with fairseq installed -- on 1 GPU -- without sortish sampler -- after calling `./save_len_file.py $tok $data_dir` - -For example, -```bash -./save_len_file.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro -./dynamic_bs_example.sh --max_tokens_per_batch=2000 --output_dir benchmark_dynamic_bs -``` -splits `wmt_en_ro/train` into 11,197 uneven lengthed batches and can finish 1 epoch in 8 minutes on a v100. - -For comparison, -```bash -./dynamic_bs_example.sh --sortish_sampler --train_batch_size 48 -``` -uses 12,723 batches of length 48 and takes slightly more time 9.5 minutes. - -The feature is still experimental, because: -+ we can make it much more robust if we have memory mapped/preprocessed datasets. -+ The speedup over sortish sampler is not that large at the moment. - -# DistilBART - -This section describes all code and artifacts from our [Paper](http://arxiv.org/abs/2010.13002) - -![DBART](https://huggingface.co/front/thumbnails/distilbart_large.png) - -+ For the CNN/DailyMail dataset, (relatively longer, more extractive summaries), we found a simple technique that works, which we call "Shrink and Fine-tune", or SFT. -you just copy alternating layers from `facebook/bart-large-cnn` and fine-tune more on the cnn/dm data. `sshleifer/distill-pegasus-cnn-16-4`, `sshleifer/distilbart-cnn-12-6` and all other checkpoints under `sshleifer` that start with `distilbart-cnn` were trained this way. 
-+ For the XSUM dataset, training on pseudo-labels worked best for Pegasus (`sshleifer/distill-pegasus-16-4`), while training with KD worked best for `distilbart-xsum-12-6` -+ For `sshleifer/dbart-xsum-12-3` -+ We ran 100s experiments, and didn't want to document 100s of commands. If you want a command to replicate a figure from the paper that is not documented below, feel free to ask on the [forums](https://discuss.huggingface.co/t/seq2seq-distillation-methodology-questions/1270) and tag `@sshleifer`. -+ You can see the performance tradeoffs of model sizes [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0). -and more granular timing results [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=1753259047&range=B2:I23). - -### Evaluation - -use [run_distributed_eval](./run_distributed_eval.py), with the following convenient alias -```bash -deval () { - proc=$1 - m=$2 - dd=$3 - sd=$4 - shift - shift - shift - shift - python -m torch.distributed.launch --nproc_per_node=$proc run_distributed_eval.py \ - --model_name $m --save_dir $sd --data_dir $dd $@ -} -``` -On a 1 GPU system, here are four commands (that assume `xsum`, `cnn_dm` are downloaded, cmd-F for those links in this file). - -`distilBART`: -```bash -deval 1 sshleifer/distilbart-xsum-12-3 xsum dbart_12_3_xsum_eval --fp16 # --help for more choices. -deval 1 sshleifer/distilbart-cnn_dm-12-6 cnn_dm dbart_12_6_cnn_eval --fp16 -``` - -`distill-pegasus`: -```bash -deval 1 sshleifer/distill-pegasus-cnn-16-4 cnn_dm dpx_cnn_eval -deval 1 sshleifer/distill-pegasus-xsum-16-4 xsum dpx_xsum_eval -``` - -### Distillation -+ For all of the following commands, you can get roughly equivalent result and faster run times by passing `--num_beams=4`. That's not what we did for the paper. -+ Besides the KD section, you can also run commands with the built-in transformers trainer. See, for example, [builtin_trainer/train_distilbart_cnn.sh](./builtin_trainer/train_distilbart_cnn.sh). -+ Large performance deviations (> 5X slower or more than 0.5 Rouge-2 worse), should be reported. -+ Multi-gpu (controlled with `--gpus` should work, but might require more epochs). - -#### Recommended Workflow -+ Get your dataset in the right format. (see 6 files above). -+ Find a teacher model [Pegasus](https://huggingface.co/models?search=pegasus) (slower, better ROUGE) or `facebook/bart-large-xsum`/`facebook/bart-large-cnn` (faster, slightly lower.). -Choose the checkpoint where the corresponding dataset is most similar (or identical to) your dataset. -+ Follow the sections in order below. You can stop after SFT if you are satisfied, or move on to pseudo-labeling if you want more performance. -+ student size: If you want a close to free 50% speedup, cut the decoder in half. If you want a larger speedup, cut it in 4. -+ If your SFT run starts at a validation ROUGE-2 that is more than 10 pts below the teacher's validation ROUGE-2, you have a bug. Switching to a more expensive technique will not help. Try setting a breakpoint and looking at generation and truncation defaults/hyper-parameters, and share your experience on the forums! 
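Whichever route you take, it is worth sanity-checking the resulting `best_tfmr` directory (or one of the published distilbart checkpoints) with a quick generation pass before running full evaluation. A minimal sketch, assuming `transformers` and `torch` are installed; the checkpoint name and the article text are placeholders:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "sshleifer/distilbart-xsum-12-6"  # or f"{output_dir}/best_tfmr"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

article = "PG&E scheduled power blackouts across Northern California to reduce wildfire risk ..."
batch = tokenizer([article], truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```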
- - -#### Initialization -We use [make_student.py](./make_student.py) to copy alternating layers from the teacher, and save the resulting model to disk -```bash -python make_student.py facebook/bart-large-xsum --save_path dbart_xsum_12_3 -e 12 -d 3 -``` -or for `pegasus-xsum` -```bash -python make_student.py google/pegasus-xsum --save_path dpx_xsum_16_4 --e 16 --d 4 -``` -we now have an initialized student saved to `dbart_xsum_12_3`, which we will use for the following commands. -+ Extension: To replicate more complicated initialize experiments in section 6.1, or try your own. Use the `create_student_by_copying_alternating_layers` function. - -#### Pegasus -+ The following commands are written for BART and will require, at minimum, the following modifications -+ reduce batch size, and increase gradient accumulation steps so that the product `gpus * batch size * gradient_accumulation_steps = 256`. We used `--learning-rate` = 1e-4 * gradient accumulation steps. -+ don't use fp16 -+ `--tokenizer_name google/pegasus-large` - -### SFT (No Teacher Distillation) -You don't need `distillation.py`, you can just run: - -```bash -python finetune.py \ - --data_dir xsum \ - --freeze_encoder --freeze_embeds \ - --learning_rate=3e-4 \ - --do_train \ - --do_predict \ - --fp16 --fp16_opt_level=O1 \ - --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \ - --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \ - --model_name_or_path dbart_xsum_12_3 \ - --train_batch_size=64 --eval_batch_size=64 \ - --sortish_sampler \ - --num_train_epochs=6 \ - --warmup_steps 500 \ - --output_dir distilbart_xsum_sft_12_3 --gpus 1 -``` - -+ Note: The command that produced `sshleifer/distilbart-cnn-12-6` is at [train_distilbart_cnn.sh](./[train_distilbart_cnn.sh) - -```bash -./train_distilbart_cnn.sh -``` - -+ Tip: You can get the same simple distillation logic by using `distillation.py --no_teacher ` followed by identical arguments as the ones in `train_distilbart_cnn.sh`. -If you are using `wandb` and comparing the two distillation methods, using this entry point will make your logs consistent, -because you will have the same hyper-parameters logged in every run. - -### Pseudo-Labeling -+ You don't need `distillation.py`. -+ Instructions to generate pseudo-labels and use pre-computed pseudo-labels can be found [here](./precomputed_pseudo_labels.md). -Simply run `finetune.py` with one of those pseudo-label datasets as `--data_dir` (`DATA`, below). - -```bash -python finetune.py \ - --teacher facebook/bart-large-xsum --data_dir DATA \ - --freeze_encoder --freeze_embeds \ - --learning_rate=3e-4 \ - --do_train \ - --do_predict \ - --fp16 --fp16_opt_level=O1 \ - --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \ - --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \ - --model_name_or_path dbart_xsum_12_3 \ - --train_batch_size=32 --eval_batch_size=32 \ - --sortish_sampler \ - --num_train_epochs=5 \ - --warmup_steps 500 \ - --output_dir dbart_xsum_12_3_PL --gpus 1 --logger_name wandb -``` - - - -To combine datasets, as in Section 6.2, try something like: -```bash -curl -S https://cdn-datasets.huggingface.co/pseudo/xsum/bart_xsum_pl.tgz | tar -xvz -C . -curl -S https://cdn-datasets.huggingface.co/pseudo/xsum/pegasus_xsum.tgz | tar -xvz -C . -curl -S https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz | tar -xvz -C . 
-mkdir all_pl -cat bart_xsum_pl/train.source pegasus_xsum/train.source xsum/train.source > all_pl/train.source -cat bart_xsum_pl/train.target pegasus_xsum/train.target xsum/train.target > all_pl/train.target -cp xsum/val* all_pl -cp xsum/test* all_pl -``` -then use `all_pl` as DATA in the command above. - -#### Direct Knowledge Distillation (KD) -+ In this method, we use try to enforce that the student and teacher produce similar encoder_outputs, logits, and hidden_states using `SummarizationDistiller`. -+ This method was used for `sshleifer/distilbart-xsum-12-6`, `6-6`, and `9-6` checkpoints were produced. -+ You must use [`distillation.py`](./distillation.py). Note that this command initializes the student for you. - -The command that produced `sshleifer/distilbart-xsum-12-6` is at [./train_distilbart_xsum.sh](train_distilbart_xsum.sh) -```bash -./train_distilbart_xsum.sh --logger_name wandb --gpus 1 -``` - -+ Expected ROUGE-2 between 21.3 and 21.6, run time ~13H. -+ direct KD + Pegasus is VERY slow and works best with `--supervise_forward --normalize_hidden`. - - - -### Citation - -```bibtex -@misc{shleifer2020pretrained, - title={Pre-trained Summarization Distillation}, - author={Sam Shleifer and Alexander M. Rush}, - year={2020}, - eprint={2010.13002}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -@article{Wolf2019HuggingFacesTS, - title={HuggingFace's Transformers: State-of-the-art Natural Language Processing}, - author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush}, - journal={ArXiv}, - year={2019}, - volume={abs/1910.03771} -} -``` diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/__init__.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/__init__.py deleted file mode 100644 index 5fc02b192b256b620d9e590a22ff0e1ca8dbd6d6..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/__init__.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import TYPE_CHECKING - -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_tokenizers_available, is_torch_available - - -_import_structure = { - "configuration_altclip": [ - "ALTCLIP_PRETRAINED_CONFIG_ARCHIVE_MAP", - "AltCLIPConfig", - "AltCLIPTextConfig", - "AltCLIPVisionConfig", - ], - "processing_altclip": ["AltCLIPProcessor"], -} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_altclip"] = [ - "ALTCLIP_PRETRAINED_MODEL_ARCHIVE_LIST", - "AltCLIPPreTrainedModel", - "AltCLIPModel", - "AltCLIPTextModel", - "AltCLIPVisionModel", - ] - - -if TYPE_CHECKING: - from .configuration_altclip import ( - ALTCLIP_PRETRAINED_CONFIG_ARCHIVE_MAP, - AltCLIPConfig, - AltCLIPTextConfig, - AltCLIPVisionConfig, - ) - from .processing_altclip import AltCLIPProcessor - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_altclip import ( - ALTCLIP_PRETRAINED_MODEL_ARCHIVE_LIST, - AltCLIPModel, - AltCLIPPreTrainedModel, - AltCLIPTextModel, - AltCLIPVisionModel, - ) - - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/chengli-thu/ChatHaruhi-OpenAI/README.md b/spaces/chengli-thu/ChatHaruhi-OpenAI/README.md deleted file mode 100644 index c870ff19acd5c7cf13592e26018e72d383c06429..0000000000000000000000000000000000000000 --- a/spaces/chengli-thu/ChatHaruhi-OpenAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Haruhi -emoji: 💻 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -duplicated_from: chenxiYan/ChatHaruhi-OpenAI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/_yaml/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/_yaml/__init__.py deleted file mode 100644 index 7baa8c4b68127d5cdf0be9a799429e61347c2694..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/_yaml/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -# This is a stub package designed to roughly emulate the _yaml -# extension module, which previously existed as a standalone module -# and has been moved into the `yaml` package namespace. -# It does not perfectly mimic its old counterpart, but should get -# close enough for anyone who's relying on it even when they shouldn't. -import yaml - -# in some circumstances, the yaml module we imoprted may be from a different version, so we need -# to tread carefully when poking at it here (it may not have the attributes we expect) -if not getattr(yaml, '__with_libyaml__', False): - from sys import version_info - - exc = ModuleNotFoundError if version_info >= (3, 6) else ImportError - raise exc("No module named '_yaml'") -else: - from yaml._yaml import * - import warnings - warnings.warn( - 'The _yaml extension module is now located at yaml._yaml' - ' and its location is subject to change. To use the' - ' LibYAML-based parser and emitter, import from `yaml`:' - ' `from yaml import CLoader as Loader, CDumper as Dumper`.', - DeprecationWarning - ) - del warnings - # Don't `del yaml` here because yaml is actually an existing - # namespace member of _yaml. 
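As the deprecation warning in this stub suggests, new code should import the LibYAML-based classes from `yaml` itself rather than relying on `_yaml`. A minimal sketch of that pattern, assuming PyYAML is installed (LibYAML support is optional, so the pure-Python classes are used as a fallback):

```python
import yaml

try:
    # Fast LibYAML-based parser/emitter, as the warning above recommends.
    from yaml import CLoader as Loader, CDumper as Dumper
except ImportError:
    # Pure-Python fallback when PyYAML was built without LibYAML.
    from yaml import Loader, Dumper

data = yaml.load("a: 1\nb: [2, 3]", Loader=Loader)
print(yaml.dump(data, Dumper=Dumper))
```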
- -__name__ = '_yaml' -# If the module is top-level (i.e. not a part of any specific package) -# then the attribute should be set to ''. -# https://docs.python.org/3.8/library/types.html -__package__ = '' diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py deleted file mode 100644 index 7b403026aa4eabe03c7484f51f14db63ed2ebc5c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/O_S_2f_2.py +++ /dev/null @@ -1,617 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.roundTools import otRound -from fontTools.misc.textTools import safeEval, num2binary, binary2num -from fontTools.ttLib.tables import DefaultTable -import bisect -import logging - - -log = logging.getLogger(__name__) - -# panose classification - -panoseFormat = """ - bFamilyType: B - bSerifStyle: B - bWeight: B - bProportion: B - bContrast: B - bStrokeVariation: B - bArmStyle: B - bLetterForm: B - bMidline: B - bXHeight: B -""" - - -class Panose(object): - def __init__(self, **kwargs): - _, names, _ = sstruct.getformat(panoseFormat) - for name in names: - setattr(self, name, kwargs.pop(name, 0)) - for k in kwargs: - raise TypeError(f"Panose() got an unexpected keyword argument {k!r}") - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(panoseFormat) - for name in names: - writer.simpletag(name, value=getattr(self, name)) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - setattr(self, name, safeEval(attrs["value"])) - - -# 'sfnt' OS/2 and Windows Metrics table - 'OS/2' - -OS2_format_0 = """ - > # big endian - version: H # version - xAvgCharWidth: h # average character width - usWeightClass: H # degree of thickness of strokes - usWidthClass: H # aspect ratio - fsType: H # type flags - ySubscriptXSize: h # subscript horizontal font size - ySubscriptYSize: h # subscript vertical font size - ySubscriptXOffset: h # subscript x offset - ySubscriptYOffset: h # subscript y offset - ySuperscriptXSize: h # superscript horizontal font size - ySuperscriptYSize: h # superscript vertical font size - ySuperscriptXOffset: h # superscript x offset - ySuperscriptYOffset: h # superscript y offset - yStrikeoutSize: h # strikeout size - yStrikeoutPosition: h # strikeout position - sFamilyClass: h # font family class and subclass - panose: 10s # panose classification number - ulUnicodeRange1: L # character range - ulUnicodeRange2: L # character range - ulUnicodeRange3: L # character range - ulUnicodeRange4: L # character range - achVendID: 4s # font vendor identification - fsSelection: H # font selection flags - usFirstCharIndex: H # first unicode character index - usLastCharIndex: H # last unicode character index - sTypoAscender: h # typographic ascender - sTypoDescender: h # typographic descender - sTypoLineGap: h # typographic line gap - usWinAscent: H # Windows ascender - usWinDescent: H # Windows descender -""" - -OS2_format_1_addition = """ - ulCodePageRange1: L - ulCodePageRange2: L -""" - -OS2_format_2_addition = ( - OS2_format_1_addition - + """ - sxHeight: h - sCapHeight: h - usDefaultChar: H - usBreakChar: H - usMaxContext: H -""" -) - -OS2_format_5_addition = ( - OS2_format_2_addition - + """ - usLowerOpticalPointSize: H - usUpperOpticalPointSize: H -""" -) - -bigendian = " > # big endian\n" - 
-OS2_format_1 = OS2_format_0 + OS2_format_1_addition -OS2_format_2 = OS2_format_0 + OS2_format_2_addition -OS2_format_5 = OS2_format_0 + OS2_format_5_addition -OS2_format_1_addition = bigendian + OS2_format_1_addition -OS2_format_2_addition = bigendian + OS2_format_2_addition -OS2_format_5_addition = bigendian + OS2_format_5_addition - - -class table_O_S_2f_2(DefaultTable.DefaultTable): - - """the OS/2 table""" - - dependencies = ["head"] - - def decompile(self, data, ttFont): - dummy, data = sstruct.unpack2(OS2_format_0, data, self) - - if self.version == 1: - dummy, data = sstruct.unpack2(OS2_format_1_addition, data, self) - elif self.version in (2, 3, 4): - dummy, data = sstruct.unpack2(OS2_format_2_addition, data, self) - elif self.version == 5: - dummy, data = sstruct.unpack2(OS2_format_5_addition, data, self) - self.usLowerOpticalPointSize /= 20 - self.usUpperOpticalPointSize /= 20 - elif self.version != 0: - from fontTools import ttLib - - raise ttLib.TTLibError( - "unknown format for OS/2 table: version %s" % self.version - ) - if len(data): - log.warning("too much 'OS/2' table data") - - self.panose = sstruct.unpack(panoseFormat, self.panose, Panose()) - - def compile(self, ttFont): - self.updateFirstAndLastCharIndex(ttFont) - panose = self.panose - head = ttFont["head"] - if (self.fsSelection & 1) and not (head.macStyle & 1 << 1): - log.warning( - "fsSelection bit 0 (italic) and " - "head table macStyle bit 1 (italic) should match" - ) - if (self.fsSelection & 1 << 5) and not (head.macStyle & 1): - log.warning( - "fsSelection bit 5 (bold) and " - "head table macStyle bit 0 (bold) should match" - ) - if (self.fsSelection & 1 << 6) and (self.fsSelection & 1 + (1 << 5)): - log.warning( - "fsSelection bit 6 (regular) is set, " - "bits 0 (italic) and 5 (bold) must be clear" - ) - if self.version < 4 and self.fsSelection & 0b1110000000: - log.warning( - "fsSelection bits 7, 8 and 9 are only defined in " - "OS/2 table version 4 and up: version %s", - self.version, - ) - self.panose = sstruct.pack(panoseFormat, self.panose) - if self.version == 0: - data = sstruct.pack(OS2_format_0, self) - elif self.version == 1: - data = sstruct.pack(OS2_format_1, self) - elif self.version in (2, 3, 4): - data = sstruct.pack(OS2_format_2, self) - elif self.version == 5: - d = self.__dict__.copy() - d["usLowerOpticalPointSize"] = round(self.usLowerOpticalPointSize * 20) - d["usUpperOpticalPointSize"] = round(self.usUpperOpticalPointSize * 20) - data = sstruct.pack(OS2_format_5, d) - else: - from fontTools import ttLib - - raise ttLib.TTLibError( - "unknown format for OS/2 table: version %s" % self.version - ) - self.panose = panose - return data - - def toXML(self, writer, ttFont): - writer.comment( - "The fields 'usFirstCharIndex' and 'usLastCharIndex'\n" - "will be recalculated by the compiler" - ) - writer.newline() - if self.version == 1: - format = OS2_format_1 - elif self.version in (2, 3, 4): - format = OS2_format_2 - elif self.version == 5: - format = OS2_format_5 - else: - format = OS2_format_0 - formatstring, names, fixes = sstruct.getformat(format) - for name in names: - value = getattr(self, name) - if name == "panose": - writer.begintag("panose") - writer.newline() - value.toXML(writer, ttFont) - writer.endtag("panose") - elif name in ( - "ulUnicodeRange1", - "ulUnicodeRange2", - "ulUnicodeRange3", - "ulUnicodeRange4", - "ulCodePageRange1", - "ulCodePageRange2", - ): - writer.simpletag(name, value=num2binary(value)) - elif name in ("fsType", "fsSelection"): - writer.simpletag(name, 
value=num2binary(value, 16)) - elif name == "achVendID": - writer.simpletag(name, value=repr(value)[1:-1]) - else: - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "panose": - self.panose = panose = Panose() - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - panose.fromXML(name, attrs, content, ttFont) - elif name in ( - "ulUnicodeRange1", - "ulUnicodeRange2", - "ulUnicodeRange3", - "ulUnicodeRange4", - "ulCodePageRange1", - "ulCodePageRange2", - "fsType", - "fsSelection", - ): - setattr(self, name, binary2num(attrs["value"])) - elif name == "achVendID": - setattr(self, name, safeEval("'''" + attrs["value"] + "'''")) - else: - setattr(self, name, safeEval(attrs["value"])) - - def updateFirstAndLastCharIndex(self, ttFont): - if "cmap" not in ttFont: - return - codes = set() - for table in getattr(ttFont["cmap"], "tables", []): - if table.isUnicode(): - codes.update(table.cmap.keys()) - if codes: - minCode = min(codes) - maxCode = max(codes) - # USHORT cannot hold codepoints greater than 0xFFFF - self.usFirstCharIndex = min(0xFFFF, minCode) - self.usLastCharIndex = min(0xFFFF, maxCode) - - # misspelled attributes kept for legacy reasons - - @property - def usMaxContex(self): - return self.usMaxContext - - @usMaxContex.setter - def usMaxContex(self, value): - self.usMaxContext = value - - @property - def fsFirstCharIndex(self): - return self.usFirstCharIndex - - @fsFirstCharIndex.setter - def fsFirstCharIndex(self, value): - self.usFirstCharIndex = value - - @property - def fsLastCharIndex(self): - return self.usLastCharIndex - - @fsLastCharIndex.setter - def fsLastCharIndex(self, value): - self.usLastCharIndex = value - - def getUnicodeRanges(self): - """Return the set of 'ulUnicodeRange*' bits currently enabled.""" - bits = set() - ul1, ul2 = self.ulUnicodeRange1, self.ulUnicodeRange2 - ul3, ul4 = self.ulUnicodeRange3, self.ulUnicodeRange4 - for i in range(32): - if ul1 & (1 << i): - bits.add(i) - if ul2 & (1 << i): - bits.add(i + 32) - if ul3 & (1 << i): - bits.add(i + 64) - if ul4 & (1 << i): - bits.add(i + 96) - return bits - - def setUnicodeRanges(self, bits): - """Set the 'ulUnicodeRange*' fields to the specified 'bits'.""" - ul1, ul2, ul3, ul4 = 0, 0, 0, 0 - for bit in bits: - if 0 <= bit < 32: - ul1 |= 1 << bit - elif 32 <= bit < 64: - ul2 |= 1 << (bit - 32) - elif 64 <= bit < 96: - ul3 |= 1 << (bit - 64) - elif 96 <= bit < 123: - ul4 |= 1 << (bit - 96) - else: - raise ValueError("expected 0 <= int <= 122, found: %r" % bit) - self.ulUnicodeRange1, self.ulUnicodeRange2 = ul1, ul2 - self.ulUnicodeRange3, self.ulUnicodeRange4 = ul3, ul4 - - def recalcUnicodeRanges(self, ttFont, pruneOnly=False): - """Intersect the codepoints in the font's Unicode cmap subtables with - the Unicode block ranges defined in the OpenType specification (v1.7), - and set the respective 'ulUnicodeRange*' bits if there is at least ONE - intersection. - If 'pruneOnly' is True, only clear unused bits with NO intersection. - """ - unicodes = set() - for table in ttFont["cmap"].tables: - if table.isUnicode(): - unicodes.update(table.cmap.keys()) - if pruneOnly: - empty = intersectUnicodeRanges(unicodes, inverse=True) - bits = self.getUnicodeRanges() - empty - else: - bits = intersectUnicodeRanges(unicodes) - self.setUnicodeRanges(bits) - return bits - - def recalcAvgCharWidth(self, ttFont): - """Recalculate xAvgCharWidth using metrics from ttFont's 'hmtx' table. 
- - Set it to 0 if the unlikely event 'hmtx' table is not found. - """ - avg_width = 0 - hmtx = ttFont.get("hmtx") - if hmtx is not None: - widths = [width for width, _ in hmtx.metrics.values() if width > 0] - if widths: - avg_width = otRound(sum(widths) / len(widths)) - self.xAvgCharWidth = avg_width - return avg_width - - -# Unicode ranges data from the OpenType OS/2 table specification v1.7 - -OS2_UNICODE_RANGES = ( - (("Basic Latin", (0x0000, 0x007F)),), - (("Latin-1 Supplement", (0x0080, 0x00FF)),), - (("Latin Extended-A", (0x0100, 0x017F)),), - (("Latin Extended-B", (0x0180, 0x024F)),), - ( - ("IPA Extensions", (0x0250, 0x02AF)), - ("Phonetic Extensions", (0x1D00, 0x1D7F)), - ("Phonetic Extensions Supplement", (0x1D80, 0x1DBF)), - ), - ( - ("Spacing Modifier Letters", (0x02B0, 0x02FF)), - ("Modifier Tone Letters", (0xA700, 0xA71F)), - ), - ( - ("Combining Diacritical Marks", (0x0300, 0x036F)), - ("Combining Diacritical Marks Supplement", (0x1DC0, 0x1DFF)), - ), - (("Greek and Coptic", (0x0370, 0x03FF)),), - (("Coptic", (0x2C80, 0x2CFF)),), - ( - ("Cyrillic", (0x0400, 0x04FF)), - ("Cyrillic Supplement", (0x0500, 0x052F)), - ("Cyrillic Extended-A", (0x2DE0, 0x2DFF)), - ("Cyrillic Extended-B", (0xA640, 0xA69F)), - ), - (("Armenian", (0x0530, 0x058F)),), - (("Hebrew", (0x0590, 0x05FF)),), - (("Vai", (0xA500, 0xA63F)),), - (("Arabic", (0x0600, 0x06FF)), ("Arabic Supplement", (0x0750, 0x077F))), - (("NKo", (0x07C0, 0x07FF)),), - (("Devanagari", (0x0900, 0x097F)),), - (("Bengali", (0x0980, 0x09FF)),), - (("Gurmukhi", (0x0A00, 0x0A7F)),), - (("Gujarati", (0x0A80, 0x0AFF)),), - (("Oriya", (0x0B00, 0x0B7F)),), - (("Tamil", (0x0B80, 0x0BFF)),), - (("Telugu", (0x0C00, 0x0C7F)),), - (("Kannada", (0x0C80, 0x0CFF)),), - (("Malayalam", (0x0D00, 0x0D7F)),), - (("Thai", (0x0E00, 0x0E7F)),), - (("Lao", (0x0E80, 0x0EFF)),), - (("Georgian", (0x10A0, 0x10FF)), ("Georgian Supplement", (0x2D00, 0x2D2F))), - (("Balinese", (0x1B00, 0x1B7F)),), - (("Hangul Jamo", (0x1100, 0x11FF)),), - ( - ("Latin Extended Additional", (0x1E00, 0x1EFF)), - ("Latin Extended-C", (0x2C60, 0x2C7F)), - ("Latin Extended-D", (0xA720, 0xA7FF)), - ), - (("Greek Extended", (0x1F00, 0x1FFF)),), - ( - ("General Punctuation", (0x2000, 0x206F)), - ("Supplemental Punctuation", (0x2E00, 0x2E7F)), - ), - (("Superscripts And Subscripts", (0x2070, 0x209F)),), - (("Currency Symbols", (0x20A0, 0x20CF)),), - (("Combining Diacritical Marks For Symbols", (0x20D0, 0x20FF)),), - (("Letterlike Symbols", (0x2100, 0x214F)),), - (("Number Forms", (0x2150, 0x218F)),), - ( - ("Arrows", (0x2190, 0x21FF)), - ("Supplemental Arrows-A", (0x27F0, 0x27FF)), - ("Supplemental Arrows-B", (0x2900, 0x297F)), - ("Miscellaneous Symbols and Arrows", (0x2B00, 0x2BFF)), - ), - ( - ("Mathematical Operators", (0x2200, 0x22FF)), - ("Supplemental Mathematical Operators", (0x2A00, 0x2AFF)), - ("Miscellaneous Mathematical Symbols-A", (0x27C0, 0x27EF)), - ("Miscellaneous Mathematical Symbols-B", (0x2980, 0x29FF)), - ), - (("Miscellaneous Technical", (0x2300, 0x23FF)),), - (("Control Pictures", (0x2400, 0x243F)),), - (("Optical Character Recognition", (0x2440, 0x245F)),), - (("Enclosed Alphanumerics", (0x2460, 0x24FF)),), - (("Box Drawing", (0x2500, 0x257F)),), - (("Block Elements", (0x2580, 0x259F)),), - (("Geometric Shapes", (0x25A0, 0x25FF)),), - (("Miscellaneous Symbols", (0x2600, 0x26FF)),), - (("Dingbats", (0x2700, 0x27BF)),), - (("CJK Symbols And Punctuation", (0x3000, 0x303F)),), - (("Hiragana", (0x3040, 0x309F)),), - ( - ("Katakana", (0x30A0, 0x30FF)), - ("Katakana Phonetic 
Extensions", (0x31F0, 0x31FF)), - ), - (("Bopomofo", (0x3100, 0x312F)), ("Bopomofo Extended", (0x31A0, 0x31BF))), - (("Hangul Compatibility Jamo", (0x3130, 0x318F)),), - (("Phags-pa", (0xA840, 0xA87F)),), - (("Enclosed CJK Letters And Months", (0x3200, 0x32FF)),), - (("CJK Compatibility", (0x3300, 0x33FF)),), - (("Hangul Syllables", (0xAC00, 0xD7AF)),), - (("Non-Plane 0 *", (0xD800, 0xDFFF)),), - (("Phoenician", (0x10900, 0x1091F)),), - ( - ("CJK Unified Ideographs", (0x4E00, 0x9FFF)), - ("CJK Radicals Supplement", (0x2E80, 0x2EFF)), - ("Kangxi Radicals", (0x2F00, 0x2FDF)), - ("Ideographic Description Characters", (0x2FF0, 0x2FFF)), - ("CJK Unified Ideographs Extension A", (0x3400, 0x4DBF)), - ("CJK Unified Ideographs Extension B", (0x20000, 0x2A6DF)), - ("Kanbun", (0x3190, 0x319F)), - ), - (("Private Use Area (plane 0)", (0xE000, 0xF8FF)),), - ( - ("CJK Strokes", (0x31C0, 0x31EF)), - ("CJK Compatibility Ideographs", (0xF900, 0xFAFF)), - ("CJK Compatibility Ideographs Supplement", (0x2F800, 0x2FA1F)), - ), - (("Alphabetic Presentation Forms", (0xFB00, 0xFB4F)),), - (("Arabic Presentation Forms-A", (0xFB50, 0xFDFF)),), - (("Combining Half Marks", (0xFE20, 0xFE2F)),), - ( - ("Vertical Forms", (0xFE10, 0xFE1F)), - ("CJK Compatibility Forms", (0xFE30, 0xFE4F)), - ), - (("Small Form Variants", (0xFE50, 0xFE6F)),), - (("Arabic Presentation Forms-B", (0xFE70, 0xFEFF)),), - (("Halfwidth And Fullwidth Forms", (0xFF00, 0xFFEF)),), - (("Specials", (0xFFF0, 0xFFFF)),), - (("Tibetan", (0x0F00, 0x0FFF)),), - (("Syriac", (0x0700, 0x074F)),), - (("Thaana", (0x0780, 0x07BF)),), - (("Sinhala", (0x0D80, 0x0DFF)),), - (("Myanmar", (0x1000, 0x109F)),), - ( - ("Ethiopic", (0x1200, 0x137F)), - ("Ethiopic Supplement", (0x1380, 0x139F)), - ("Ethiopic Extended", (0x2D80, 0x2DDF)), - ), - (("Cherokee", (0x13A0, 0x13FF)),), - (("Unified Canadian Aboriginal Syllabics", (0x1400, 0x167F)),), - (("Ogham", (0x1680, 0x169F)),), - (("Runic", (0x16A0, 0x16FF)),), - (("Khmer", (0x1780, 0x17FF)), ("Khmer Symbols", (0x19E0, 0x19FF))), - (("Mongolian", (0x1800, 0x18AF)),), - (("Braille Patterns", (0x2800, 0x28FF)),), - (("Yi Syllables", (0xA000, 0xA48F)), ("Yi Radicals", (0xA490, 0xA4CF))), - ( - ("Tagalog", (0x1700, 0x171F)), - ("Hanunoo", (0x1720, 0x173F)), - ("Buhid", (0x1740, 0x175F)), - ("Tagbanwa", (0x1760, 0x177F)), - ), - (("Old Italic", (0x10300, 0x1032F)),), - (("Gothic", (0x10330, 0x1034F)),), - (("Deseret", (0x10400, 0x1044F)),), - ( - ("Byzantine Musical Symbols", (0x1D000, 0x1D0FF)), - ("Musical Symbols", (0x1D100, 0x1D1FF)), - ("Ancient Greek Musical Notation", (0x1D200, 0x1D24F)), - ), - (("Mathematical Alphanumeric Symbols", (0x1D400, 0x1D7FF)),), - ( - ("Private Use (plane 15)", (0xF0000, 0xFFFFD)), - ("Private Use (plane 16)", (0x100000, 0x10FFFD)), - ), - ( - ("Variation Selectors", (0xFE00, 0xFE0F)), - ("Variation Selectors Supplement", (0xE0100, 0xE01EF)), - ), - (("Tags", (0xE0000, 0xE007F)),), - (("Limbu", (0x1900, 0x194F)),), - (("Tai Le", (0x1950, 0x197F)),), - (("New Tai Lue", (0x1980, 0x19DF)),), - (("Buginese", (0x1A00, 0x1A1F)),), - (("Glagolitic", (0x2C00, 0x2C5F)),), - (("Tifinagh", (0x2D30, 0x2D7F)),), - (("Yijing Hexagram Symbols", (0x4DC0, 0x4DFF)),), - (("Syloti Nagri", (0xA800, 0xA82F)),), - ( - ("Linear B Syllabary", (0x10000, 0x1007F)), - ("Linear B Ideograms", (0x10080, 0x100FF)), - ("Aegean Numbers", (0x10100, 0x1013F)), - ), - (("Ancient Greek Numbers", (0x10140, 0x1018F)),), - (("Ugaritic", (0x10380, 0x1039F)),), - (("Old Persian", (0x103A0, 0x103DF)),), - (("Shavian", (0x10450, 
0x1047F)),), - (("Osmanya", (0x10480, 0x104AF)),), - (("Cypriot Syllabary", (0x10800, 0x1083F)),), - (("Kharoshthi", (0x10A00, 0x10A5F)),), - (("Tai Xuan Jing Symbols", (0x1D300, 0x1D35F)),), - ( - ("Cuneiform", (0x12000, 0x123FF)), - ("Cuneiform Numbers and Punctuation", (0x12400, 0x1247F)), - ), - (("Counting Rod Numerals", (0x1D360, 0x1D37F)),), - (("Sundanese", (0x1B80, 0x1BBF)),), - (("Lepcha", (0x1C00, 0x1C4F)),), - (("Ol Chiki", (0x1C50, 0x1C7F)),), - (("Saurashtra", (0xA880, 0xA8DF)),), - (("Kayah Li", (0xA900, 0xA92F)),), - (("Rejang", (0xA930, 0xA95F)),), - (("Cham", (0xAA00, 0xAA5F)),), - (("Ancient Symbols", (0x10190, 0x101CF)),), - (("Phaistos Disc", (0x101D0, 0x101FF)),), - ( - ("Carian", (0x102A0, 0x102DF)), - ("Lycian", (0x10280, 0x1029F)), - ("Lydian", (0x10920, 0x1093F)), - ), - (("Domino Tiles", (0x1F030, 0x1F09F)), ("Mahjong Tiles", (0x1F000, 0x1F02F))), -) - - -_unicodeStarts = [] -_unicodeValues = [None] - - -def _getUnicodeRanges(): - # build the ranges of codepoints for each unicode range bit, and cache result - if not _unicodeStarts: - unicodeRanges = [ - (start, (stop, bit)) - for bit, blocks in enumerate(OS2_UNICODE_RANGES) - for _, (start, stop) in blocks - ] - for start, (stop, bit) in sorted(unicodeRanges): - _unicodeStarts.append(start) - _unicodeValues.append((stop, bit)) - return _unicodeStarts, _unicodeValues - - -def intersectUnicodeRanges(unicodes, inverse=False): - """Intersect a sequence of (int) Unicode codepoints with the Unicode block - ranges defined in the OpenType specification v1.7, and return the set of - 'ulUnicodeRanges' bits for which there is at least ONE intersection. - If 'inverse' is True, return the the bits for which there is NO intersection. - - >>> intersectUnicodeRanges([0x0410]) == {9} - True - >>> intersectUnicodeRanges([0x0410, 0x1F000]) == {9, 57, 122} - True - >>> intersectUnicodeRanges([0x0410, 0x1F000], inverse=True) == ( - ... 
set(range(len(OS2_UNICODE_RANGES))) - {9, 57, 122}) - True - """ - unicodes = set(unicodes) - unicodestarts, unicodevalues = _getUnicodeRanges() - bits = set() - for code in unicodes: - stop, bit = unicodevalues[bisect.bisect(unicodestarts, code)] - if code <= stop: - bits.add(bit) - # The spec says that bit 57 ("Non Plane 0") implies that there's - # at least one codepoint beyond the BMP; so I also include all - # the non-BMP codepoints here - if any(0x10000 <= code < 0x110000 for code in unicodes): - bits.add(57) - return set(range(len(OS2_UNICODE_RANGES))) - bits if inverse else bits - - -if __name__ == "__main__": - import doctest, sys - - sys.exit(doctest.testmod().failed) diff --git a/spaces/chungsarit/ytdownload/README.md b/spaces/chungsarit/ytdownload/README.md deleted file mode 100644 index 2438fba680cb3f309e084bf7a7cdfb2d9da734cf..0000000000000000000000000000000000000000 --- a/spaces/chungsarit/ytdownload/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Ytdownloader -emoji: 💻 -colorFrom: yellow -colorTo: gray -sdk: docker -pinned: false -license: mit -app_port: 8765 -duplicated_from: demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -### YT downloader based on pytube and solara \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Download A Fortaleza 2 Dubladol.md b/spaces/cihyFjudo/fairness-paper-search/Download A Fortaleza 2 Dubladol.md deleted file mode 100644 index 262df6de09627b7dee9d93e04cda695a62970903..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download A Fortaleza 2 Dubladol.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Download A Fortaleza 2 Dubladol


    DOWNLOAD > https://tinurli.com/2uwiuT



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Hit! full movie in italian dubbed in Mp4 the most memorable scenes and quotes from the film.md b/spaces/cihyFjudo/fairness-paper-search/Download Hit! full movie in italian dubbed in Mp4 the most memorable scenes and quotes from the film.md deleted file mode 100644 index 1d958766f82af37dfee16f32264dd658c0b3ee86..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Hit! full movie in italian dubbed in Mp4 the most memorable scenes and quotes from the film.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Download Hit! full movie in italian dubbed in Mp4


    Download Ziphttps://tinurli.com/2uwhRV



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Qsb 4.5 Liter Serial Number The Key to Unlocking Your Engines Performance and Potential.md b/spaces/cihyFjudo/fairness-paper-search/Qsb 4.5 Liter Serial Number The Key to Unlocking Your Engines Performance and Potential.md deleted file mode 100644 index aedc89a75b9cf8538ae45c39d334744cc2436bd3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Qsb 4.5 Liter Serial Number The Key to Unlocking Your Engines Performance and Potential.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

    Other misconceptions between the 4BT 3.9 L and 4.5 L are that the engine blocks and cylinder heads are the same which is untrue. The 4.5 L engines come in many different varieties all with different block sizes and part number supersessions throughout the years.

    -

    In cases where the ESN cannot be obtained, we have the ability to manufacturer QSB engines back to a few common engine serial numbers. Ultimately, it is up to you the customer to determine fit of an engine into your application.

    -

    Qsb 4.5 Liter Serial Number


    DOWNLOAD https://tinurli.com/2uwjEV



    -

    For over 100 years, Cummins engines have been powering machines around the world. Built on a reputation for durability and reliability, Cummins has a global reach that stretches over more than 190 countries and territories. Additionally, they produce engines of various sizes and power outputs for use in various industries and applications. This makes knowing your engine number an additional factor when it comes to ordering Cummins engine parts.

    -

    A key factor will be determining the correct parts you need for repair and replacement. In order to do so, you will need to know your Cummins engine serial number. The experts at Diesel Pro Power are here to help. Read our guide for the Cummins engine serial number breakdown or get in touch with a knowledgeable member of our team for further assistance.

    -

    The good news is you can identify your Cummins engine serial number location and other vital information such as RPM rating, horsepower and a Critical Parts List (Cummins CPL lookup) by checking the data plate. However, finding the Cummins engine data plate can be difficult because they are frequently located in different positions depending on the model and year.

    -

    The location of your ESN is dependent on your Cummins engine number. The following is our list of the common Cummins serial number locations for most makes and models to save time and ensure that you have the correct information when ordering parts.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Rang De Basanti Movie Hindi Dubbed Download 720p HD Watch the Youth of India Today.md b/spaces/cihyFjudo/fairness-paper-search/Rang De Basanti Movie Hindi Dubbed Download 720p HD Watch the Youth of India Today.md deleted file mode 100644 index f9ee917c23a6e4e183fd4051796e95799a2aac84..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Rang De Basanti Movie Hindi Dubbed Download 720p HD Watch the Youth of India Today.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Rang De Basanti movie hindi dubbed download 720p hd


    Download Ziphttps://tinurli.com/2uwkx8



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Shutterstock Pack Vol. 2.27 The Ultimate Collection of Stock Photos and Videos.md b/spaces/cihyFjudo/fairness-paper-search/Shutterstock Pack Vol. 2.27 The Ultimate Collection of Stock Photos and Videos.md deleted file mode 100644 index ed31b717a6c5c9e9ef26ab0a7694dbe54e60249a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Shutterstock Pack Vol. 2.27 The Ultimate Collection of Stock Photos and Videos.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    The increase in net sales is attributable to the acquisition of packaging solutions company Signode. Americas Beverage segment recorded $3.36bn, while European Beverage business contributed $1.49bn. Contributions from other business segments include Asia Pacific ($1.29bn), European Food ($1.88bn), and Transit Packaging ($2.27bn).

    -

    Shutterstock Pack Vol. 2.27


    Download Zip ———>>> https://tinurli.com/2uwiUu



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Tom Clancys Splinter Cell Conviction (2010-ENG-FullRip 4.3gb) Corepack A Guide to Install and Run the Game on Your PC.md b/spaces/cihyFjudo/fairness-paper-search/Tom Clancys Splinter Cell Conviction (2010-ENG-FullRip 4.3gb) Corepack A Guide to Install and Run the Game on Your PC.md deleted file mode 100644 index 98dd62549d1d85ea1dc9db728f710032bd9cd8d9..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tom Clancys Splinter Cell Conviction (2010-ENG-FullRip 4.3gb) Corepack A Guide to Install and Run the Game on Your PC.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Tom Clancy's Splinter Cell Conviction (2010-ENG-FullRip 4.3gb) Corepack


    Download Zip »»» https://tinurli.com/2uwjRG



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Xforce Keygen InfraWorks 2010 Crack Learn How to Use the Xforce Keygen to Patch and Generate Codes for Autodesk Products.md b/spaces/cihyFjudo/fairness-paper-search/Xforce Keygen InfraWorks 2010 Crack Learn How to Use the Xforce Keygen to Patch and Generate Codes for Autodesk Products.md deleted file mode 100644 index 51a6201fa99b395fcbc2f8361a9ecb6f3d7c1645..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Xforce Keygen InfraWorks 2010 Crack Learn How to Use the Xforce Keygen to Patch and Generate Codes for Autodesk Products.md +++ /dev/null @@ -1,6 +0,0 @@ -

    xforcekeygenInfraWorks2010crack


    DOWNLOAD ··· https://tinurli.com/2uwiyO



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Yeon Gaesomun English Subtitles Download for Movie Watch the Historical Drama of the General Who Killed the King.md b/spaces/cihyFjudo/fairness-paper-search/Yeon Gaesomun English Subtitles Download for Movie Watch the Historical Drama of the General Who Killed the King.md deleted file mode 100644 index eeb5d280487a65a30c4dc6cae42719cbd9ad7490..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Yeon Gaesomun English Subtitles Download for Movie Watch the Historical Drama of the General Who Killed the King.md +++ /dev/null @@ -1,11 +0,0 @@ -
    -

    mikdawa 19191a764c
    -kinect-pro-body-keygen-torrent
    [ -yugioh-power-of-chaos-the-legend-reborn-trainer -world-icon-editor-crack-serial-11]
    [ -zaban-ki-tareekh-pdf-free -telugu-full-movie-download-in-utorrent -silhouette-v454-x64-incl-crack-and-key-tordigger-rar]
    [ ://soundcloud.com/kiddgesskuwal1983/gta-san-andreas-real-v2-ovisebdan-mod -backup-and-recovery-10-free-edition-portable-download -iphone-free-beta-version-209exe-free-download]
    link= -esponja-pelicula-1080p-torrent -lt-2019-with-x-force-keygen-2019 -car-driving-122-serial-keyfull46 -inventor-download-free-crack -libro-ginecologia-perez-sanchez-pdf-70
    link= -zaban-ki-tareekh-pdf-free -telugu-full-movie-download-in-utorrent -silhouette-v454-x64-incl-crack-and-key-tordigger-rar
    link= ://soundcloud.com/kiddgesskuwal1983/gta-san-andreas-real-v2-ovisebdan-mod -backup-and-recovery-10-free-edition-portable-download -iphone-free-beta-version-209exe-free-download

    -

    yeon gaesomun english subtitles download for movie


    DOWNLOAD ››› https://tinurli.com/2uwiMz



    -

    adrihan 19191a764c
    -malayalam-full-movie-with-english-subtitles-download-torrent
    [ -malayalam-full-movie-with-english-subtitles-download-torrent]
    [ -malayalam-full-movie-with-english-subtitles-download-torrent]
    [ -malayalam-full-movie-with-english-subtitles-download-torrent]
    link= -malayalam-full-movie-with-english-subtitles-download-torrent
    link= -malayalam-full-movie-with-english-subtitles-download-torrent
    link= -malayalam-full-movie-with-english-subtitles-download-torrent

    -

    walwalw 19191a764c
    -walk-to-remember-movie-download-in-hindi-dubbed
    [ -walk-to-remember-movie-download-in-hindi-dubbed ]
    [ -walk-to-remember-movie-download-in-hindi-dubbed ]
    [ -walk-to-remember-movie-download-in-hindi-dubbed ]
    link= -walk-to-remember-movie-download-in-hindi-dubbed
    link= -walk-to-remember-movie-download-in-hindi-dubbed
    link= -walk-to-remember-movie-download-in-hindi-dubbed

    -

    kaimlen 19191a764c
    -mazha-navsacha-full-marathi-movie-download-free
    [ -mazha-navsacha-full-marathi-movie-download-free ]
    [ -mazha-navsacha-full-marathi-movie-download-free ]
    [ -mazha-navsacha-full-marathi-movie-download-free ]
    link= -mazha-navsacha-full-marathi-movie-download-free
    link= -mazha-navsacha-full-marathi-movie-download-free
    link= -mazha-navsacha-full-marathi-movie-download-free

    -

    -

    rawdire 19191a764c
    -gaesomun-english-subtitles-download-for-movie
    [ -gaesomun-english-subtitles-download-for-movie ]
    [ -gaesomun-english-subtitles-download-for-movie ]
    [ -gaesomun-english-subtitles-download-for-movie ]
    link= -gaesomun-english-subtitles-download-for-movie
    link= -gaesomun-english-subtitles-download-for-movie
    link= -gaesomun-english-subtitles-download-for-movie

    -

    idamari 19191a764c
    -day-out-2-full-movie-in-hindi-download
    [ -day-out-2-full-movie-in-hindi-download ]
    [ -day-out-2-full-movie-in-hindi-download ]
    [ -day-out-2-full-movie-in-hindi-download ]
    link= -day-out-2-full-movie-in-hindi-download
    link= -day-out-2-full-movie-in-hindi-download
    link= -day-out-2-full-movie-in-hindi-download

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/codys12/MergeLlama-7b/style.css b/spaces/codys12/MergeLlama-7b/style.css deleted file mode 100644 index 60878febc13db001635a52688abfe34d95e6c309..0000000000000000000000000000000000000000 --- a/spaces/codys12/MergeLlama-7b/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -.contain { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/intrax8.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/intrax8.h deleted file mode 100644 index 8e22361f1f8b96b6244eacd5048cc614a4e3b14e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/intrax8.h +++ /dev/null @@ -1,114 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_INTRAX8_H -#define AVCODEC_INTRAX8_H - -#include "blockdsp.h" -#include "get_bits.h" -#include "intrax8dsp.h" -#include "wmv2dsp.h" -#include "mpegpicture.h" - -typedef struct IntraX8Context { - const VLCElem *j_ac_vlc_table[4]; // they point to the static j_mb_vlc.table - const VLCElem *j_orient_vlc_table; - const VLCElem *j_dc_vlc_table[3]; - - int use_quant_matrix; - - // set by ff_intrax8_common_init - uint8_t *prediction_table; // 2 * (mb_w * 2) - uint8_t permutated_scantable[3][64]; - WMV2DSPContext wdsp; - uint8_t idct_permutation[64]; - AVCodecContext *avctx; - int *block_last_index; ///< last nonzero coefficient in block - int16_t (*block)[64]; - - // set by the caller codec - IntraX8DSPContext dsp; - BlockDSPContext bdsp; - int quant; - int dquant; - int qsum; - int loopfilter; - AVFrame *frame; - GetBitContext *gb; - - // calculated per frame - int quant_dc_chroma; - int divide_quant_dc_luma; - int divide_quant_dc_chroma; - uint8_t *dest[3]; - uint8_t scratchpad[42]; // size of the block is fixed (8x8 plus padding) - - // changed per block - int edges; - int flat_dc; - int predicted_dc; - int raw_orient; - int chroma_orient; - int orient; - int est_run; - - // block props - int mb_x, mb_y; - int mb_width, mb_height; -} IntraX8Context; - -/** - * Initialize IntraX8 frame decoder. - * @param avctx pointer to AVCodecContext - * @param w pointer to IntraX8Context - * @param block pointer to block array - * @param block_last_index pointer to index array - * @param mb_width macroblock width - * @param mb_height macroblock height - * @return 0 on success, a negative AVERROR value on error - */ -int ff_intrax8_common_init(AVCodecContext *avctx, - IntraX8Context *w, - int16_t (*block)[64], - int block_last_index[12], - int mb_width, int mb_height); - -/** - * Destroy IntraX8 frame structure. 
- * @param w pointer to IntraX8Context - */ -void ff_intrax8_common_end(IntraX8Context *w); - -/** - * Decode single IntraX8 frame. - * lowres decoding is theoretically impossible. - * @param w pointer to IntraX8Context - * @param pict the output Picture containing an AVFrame - * @param gb open bitstream reader - * @param mb_x pointer to the x coordinate of the current macroblock - * @param mb_y pointer to the y coordinate of the current macroblock - * @param dquant doubled quantizer, it would be odd in case of VC-1 halfpq==1. - * @param quant_offset offset away from zero - * @param loopfilter enable filter after decoding a block - */ -int ff_intrax8_decode_picture(IntraX8Context *w, Picture *pict, - GetBitContext *gb, int *mb_x, int *mb_y, - int quant, int halfpq, - int loopfilter, int lowdelay); - -#endif /* AVCODEC_INTRAX8_H */ diff --git a/spaces/colutti/timpal0l-mdeberta-v3-base-squad2/README.md b/spaces/colutti/timpal0l-mdeberta-v3-base-squad2/README.md deleted file mode 100644 index b50d2f5e5f8a09d89ecdffec28953ba4267b2bab..0000000000000000000000000000000000000000 --- a/spaces/colutti/timpal0l-mdeberta-v3-base-squad2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Timpal0l Mdeberta V3 Base Squad2 -emoji: 💻 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/congsaPfin/Manga-OCR/logs/Among Us 3D The Ultimate Guide to Downloading and Playing the Game.md b/spaces/congsaPfin/Manga-OCR/logs/Among Us 3D The Ultimate Guide to Downloading and Playing the Game.md deleted file mode 100644 index 91e3d9efc09085aee6043bb27d476ac19952c746..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Among Us 3D The Ultimate Guide to Downloading and Playing the Game.md +++ /dev/null @@ -1,85 +0,0 @@ -
    -

    Among Us 3D: How to Download and Play the Fan-Made VR Version of the Popular Game

    -

    Among Us is one of the most popular multiplayer games of recent years, with millions of players enjoying its social deduction gameplay. But what if you could experience the game in a more immersive way, with a first-person perspective, realistic graphics, and motion controls? That's what a fan-made VR version of the game, called Among Us 3D, offers. In this article, we will tell you everything you need to know about Among Us 3D, how to download and play it, what are the reviews and ratings of it, and who should try it.

    -

    among us 3d free download


    Download ····· https://urlca.com/2uO4Dk



    -

    What is Among Us 3D?

    -

    A brief introduction to the original Among Us game and its gameplay

    -

    Among Us is a game developed by Innersloth, released in 2018. It is a party game of teamwork and betrayal, where players are divided into two roles: Crewmates and Impostors. Crewmates have to work together to complete tasks on a spaceship, while Impostors have to secretly kill them or sabotage their mission. The game can be played online with up to 10 players, or locally with friends. The game has four different maps to choose from, each with its own layout, tasks, and sabotages. The game also has various customization options for players, such as choosing their color, hat, skin, pet, name, and game settings.

    -

    A description of the fan-made 3D VR version of the game and its features

    -

    Among Us 3D is a fan-made VR version of the game, created by Jar. It is a ground-up remake of the game in 3D, using VRChat as a platform. It allows players to experience the game in a more realistic and immersive way, using a VR headset and motion controllers. The game features one map, The Skeld II, which is a recreation of the original Skeld map with some changes. The game also has all the core mechanics of the original game, such as tasks, sabotages, emergency meetings, venting, killing, reporting, voting, etc. The game also has some new features that are exclusive to VR, such as voice chat (with proximity chat), new tasks that use motion controls (such as whack-a-mole or retinal scanner), new cosmetics (such as hats or skins), new settings (such as hands visible or number of impostors), etc.

    -

    How to Download and Play Among Us 3D?

    -

    The system requirements for playing Among Us 3D on PC, Android, and iOS devices

    -

Here are some tips that can help you get the most out of Among Us 3D once you are in a game:

- Use proximity chat: In Among Us 3D you can only hear the voices of the players who are near you, and not the ones who are far away. This adds a layer of realism and immersion to the game, as well as a challenge for the impostors, who have to be careful about what they say and where they say it.
- Use motion controls: Motion controls are another feature of Among Us 3D that makes the game more interactive and fun. You can use motion controls to move around, interact with objects, perform tasks, vent, kill, report, vote, etc. You can also use gestures to express yourself, such as waving, pointing, nodding, shaking your head, etc. Motion controls also add a level of difficulty to the game, as you have to be more precise and coordinated with your movements, especially when doing tasks or killing.
- Use the map: The map is a useful tool that helps you navigate the 3D environment of The Skeld II. You can access the map by pressing a button on your controller or tapping on your screen. The map shows you the layout of the ship, your location, the location of other players (if they are alive), the location of tasks (if you are a crewmate), and the location of sabotages (if you are an impostor). You can use the map to plan your route, find your tasks, avoid or chase other players, or fix or cause sabotages.
- Use the table: The table is a new feature that is exclusive to Among Us 3D. It is a large circular table located in the cafeteria. The table has several functions that can help you play the game better. You can use it to see the status of the players (alive or dead), the status of the tasks (completed or not), and the status of the sabotages (active or not). You can also use the table to call an emergency meeting by pressing a button on it. The table is a great way to get information and make decisions in the game.
- Be creative: One of the best things about Among Us 3D is that it allows you to be more creative and expressive than the original game. You can customize your avatar with different hats, skins, pets, and names. You can also use your voice, gestures, and movements to communicate with other players. You can also use your surroundings and objects to create scenarios or hide clues. For example, you can hide a body behind a door or a vent, or leave a trail of blood or footprints behind you. You can also use props such as weapons or tools to make your kills more dramatic or convincing. The possibilities are endless in Among Us 3D.

    -

    What are the Reviews and Ratings of Among Us 3D?

    -

    A summary of the positive and negative feedback from critics and players of Among Us 3D

    -

    Among Us 3D has received mixed reviews from critics and players alike. On one hand, many people have praised the game for its innovation, immersion, realism, and fun factor. They have enjoyed playing the game in VR and experiencing it in a new way. They have also appreciated the work and effort that Jar and his team have put into making the game.

    -

    On the other hand, some people have criticized the game for its bugs, glitches, crashes, and lag issues. They have also complained about the lack of content, variety, and polish in the game. They have also encountered some problems with the VRChat platform, such as hackers, trolls, or toxic players. They have also expressed their preference for the original game over the VR version.

    -

    A comparison of Among Us 3D with the original Among Us game and other VR games

    -

Among Us 3D is a unique game that stands out from the original Among Us game and other VR games. Here are some of the main differences and similarities between them:

- Among Us 3D vs. Among Us: The most obvious difference is the graphics and perspective. Among Us 3D has 3D graphics and a first-person view, while Among Us has 2D graphics and a top-down view. Another difference is the platform and controls. Among Us 3D uses VRChat as a platform and motion controls as input, while Among Us uses various platforms and keyboard/mouse or touch controls as input. A third difference is the content and features. Among Us 3D has one map, The Skeld II, and some new features that are exclusive to VR, while Among Us has four maps and some features that are not available in VR. A similarity between them is the gameplay and mechanics. Both games have the same core gameplay of teamwork and betrayal, with the same roles, tasks, sabotages, meetings, voting, etc.
- Among Us 3D vs. other VR games: The main difference is the genre and theme. Among Us 3D is a social deduction game set in a sci-fi space theme, while other VR games may have different genres and themes, such as action, adventure, horror, fantasy, etc. Another difference is the quality and development. Among Us 3D is a fan-made game that is still in beta stage, while other VR games may be professionally made and fully released. A similarity between them is the immersion and interaction. Both games use VR technology to create immersive and interactive experiences for the players, using realistic graphics, sound, voice chat, motion controls, gestures, etc.

    -

    among us 3d models free download
    -among us 3d game free download for pc
    -among us 3d animation free download
    -among us 3d wallpaper free download
    -among us 3d mod apk free download
    -among us 3d sketchfab free download
    -among us 3d blender free download
    -among us 3d character free download
    -among us 3d assets free download
    -among us 3d online free no download
    -among us 3d unity free download
    -among us 3d low poly free download
    -among us 3d rigged free download
    -among us 3d maya free download
    -among us 3d printable free download
    -among us 3d simulator free download
    -among us 3d fan game free download
    -among us 3d minecraft free download
    -among us 3d roblox free download
    -among us 3d unreal engine free download
    -among us 3d vr free download
    -among us 3d cursor free download
    -among us 3d logo free download
    -among us 3d intro free download
    -among us 3d render free download
    -among us 3d pixel art free download
    -among us 3d texture pack free download
    -among us 3d map free download
    -among us 3d icon free download
    -among us 3d font free download
    -among us 3d sound effects free download
    -among us 3d stickers free download
    -among us 3d theme free download
    -among us 3d live wallpaper free download
    -among us 3d model rigged blender free download
    -among us 3d model rigged maya free download
    -among us 3d model rigged unity free download
    -among us 3d model rigged sketchfab free download
    -how to play among us in 3d for free no download
    -how to make a 3d model of among us for free no download
    -how to get a custom skin in among us in 3d for free no download
    -how to draw a realistic 3d version of your favorite character from the game Among Us for Free no Download
    -how to create a custom map in Among Us in 3D for Free no Download
    -how to animate your own Among Us character in Blender in 3D for Free no Download
    -how to make a fan art of Among Us in Photoshop in 3D for Free no Download
    -how to design a logo for your Among Us crew in Illustrator in 3D for Free no Download
    -how to make a catchy intro for your Among Us YouTube channel in After Effects in 3D for Free no Download
    -how to make a cool wallpaper for your phone or desktop featuring Among Us in Gimp in 3D for Free no Download
    -how to make a funny meme with Among Us characters in Meme Generator in 3D for Free no Download

    -

    A recommendation for who should try Among Us 3D and why

    -

Among Us 3D is a game that can appeal to different types of players, depending on their preferences and expectations. Here are some of the reasons why you should or should not try Among Us 3D:

- You should try Among Us 3D if:
  - You are a fan of the original Among Us game and want to experience it in a new way.
  - You are a fan of VR games and want to try something different and fun.
  - You are looking for a social and interactive game that you can play with your friends or strangers online.
  - You are curious about how a fan-made game can recreate a popular game in VR.
- You should not try Among Us 3D if:
  - You are not a fan of the original Among Us game or social deduction games in general.
  - You are not a fan of VR games or you don't have a compatible device or headset.
  - You are looking for a polished and bug-free game that has a lot of content and variety.
  - You are easily annoyed by hackers, trolls, or toxic players that may ruin your game experience.

    -

    Conclusion

    -

    A recap of the main points of the article and a call to action for the readers

    -

In conclusion, Among Us 3D is a fan-made VR version of the popular game Among Us. It allows players to enjoy the game in a more immersive and realistic way, using a VR headset and motion controls. The game features one map, The Skeld II, which is a recreation of the original Skeld map with some changes. The game also has all the core mechanics of the original game, such as tasks, sabotages, emergency meetings, venting, killing, reporting, voting, etc. The game also has some new features that are exclusive to VR, such as voice chat (with proximity chat), new tasks that use motion controls (such as whack-a-mole or retinal scanner), new cosmetics (such as hats or skins), new settings (such as hands visible or number of impostors), etc.

To play Among Us 3D, you need to have a compatible device and a VR headset. The game is available for PC, Android, and iOS devices, but the requirements may vary depending on the device and the headset. You also need to have VRChat installed on your device, which is a free social VR platform that hosts Among Us 3D. You can download VRChat from Steam, Oculus Store, Google Play Store, or App Store. Then, you need to search for "Among Us 3D" in the Worlds tab and enter the world and join a game.

Among Us 3D has received mixed reviews from critics and players alike. Some people have praised the game for its innovation, immersion, realism, and fun factor. They have enjoyed playing the game in VR and experiencing it in a new way. They have also appreciated the work and effort that Jar and his team have put into making the game. However, some people have criticized the game for its bugs, glitches, crashes, and lag issues. They have also complained about the lack of content, variety, and polish in the game. They have also encountered some problems with the VRChat platform, such as hackers, trolls, or toxic players. They have also expressed their preference for the original game over the VR version.

Among Us 3D is a game that can appeal to different types of players, depending on their preferences and expectations. You should try Among Us 3D if you are a fan of the original Among Us game and want to experience it in a new way. You should also try it if you are a fan of VR games and want to try something different and fun. You should also try it if you are looking for a social and interactive game that you can play with your friends or strangers online. You should also try it if you are curious about how a fan-made game can recreate a popular game in VR. However, you should not try Among Us 3D if you are not a fan of the original Among Us game or social deduction games in general. You should also not try it if you are not a fan of VR games or you don't have a compatible device or headset. You should also not try it if you are looking for a polished and bug-free game that has a lot of content and variety. You should also not try it if you are easily annoyed by hackers, trolls, or toxic players that may ruin your game experience.

If you are interested in playing Among Us 3D, you can download it from VRChat and join a game today. You can also check out Jar's YouTube channel for more information and updates on the game. You can also join the official Discord server for Among Us 3D to chat with other players and get support from the developers. Have fun playing Among Us 3D and remember: Trust no one!

    -

    FAQs

    -

    Is Among Us 3D compatible with the original Among Us game?

    -

    No, Among Us 3D is not compatible with the original Among Us game. They are two separate games that use different platforms and servers. You cannot play with players who are using the original game or vice versa.

    -

    Is Among Us 3D safe to download and play?

    -

    Yes, Among Us 3D is safe to download and play. It is hosted on VRChat, which is a reputable social VR platform that has millions of users. However, you should be careful about downloading any unofficial mods or add-ons for the game that may contain viruses or malware.

    -

    How many players can play Among Us 3D online?

    -

Among Us 3D can support up to 10 players online per game session. However, you can join multiple sessions at once by using portals or your friends list.

    -

    What are the best VR headsets for playing Among Us 3D?

    -

    The best VR headsets for playing Among Us 3D depend on your personal preference and budget. However, some of the most popular and recommended ones are Oculus Rift S, HTC Vive, Valve Index, or Windows Mixed Reality for PC, and Oculus Quest 2, Samsung Gear VR, or Google Cardboard for mobile devices.

    -

    How can I report or mute abusive players in Among Us 3D?

    -

    If you encounter any abusive players in Among Us 3D, such as hackers, trolls, or toxic players, you can report or mute them using the VRChat menu. To report a player, you need to open the menu, select the player's name, and click on Report. You can then choose the reason for the report and submit it. To mute a player, you need to open the menu, select the player's name, and click on Mute. You can then choose to mute their voice or their avatar.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/DiskMaker X 9.0 (6 3 MB) - The Best Way to Build an OS XmacOS Install Disk.md b/spaces/congsaPfin/Manga-OCR/logs/DiskMaker X 9.0 (6 3 MB) - The Best Way to Build an OS XmacOS Install Disk.md deleted file mode 100644 index d6cb34905cb96ab69adb3e065de97e93dad1b083..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/DiskMaker X 9.0 (6 3 MB) - The Best Way to Build an OS XmacOS Install Disk.md +++ /dev/null @@ -1,193 +0,0 @@ - -

    How to Download and Use DiskMaker X 9.0 to Create a macOS Installation Disk

    -

    If you want to install or reinstall macOS on your Mac, you might need a bootable installer disk that contains the macOS version you want. A bootable installer disk can be useful if you want to install macOS on multiple Macs without downloading the installer each time, or if you have trouble installing macOS from the Finder or macOS Recovery.

    -

    download diskmaker x 9.0 (6 3 mb)


    Download File ……… https://urlca.com/2uO8o8



    -

    One of the easiest ways to create a bootable installer disk for macOS is to use DiskMaker X, a free and popular app that can build a bootable drive from any macOS installer program that you download from the App Store. In this article, we will show you how to download and use DiskMaker X 9.0, the latest version that supports macOS Catalina, Mojave, High Sierra, and Sierra.

    -

    What is DiskMaker X and why you might need it

    -

    DiskMaker X (formerly Lion DiskMaker) is an application built with AppleScript that can create a bootable installer disk for any version of OS X or macOS from OS X Lion (10.7) to macOS Catalina (10.15). It works by copying the contents of the macOS installer program that you download from the App Store to an external drive or volume, such as a USB flash drive, a hard drive, or an SD card. It also makes the disk look as nice as possible by adding a custom icon and label.

    -

    You might need DiskMaker X if you want to:

    • Install or reinstall macOS on your Mac without relying on an internet connection or a recovery partition
    • Install or reinstall macOS on multiple Macs without downloading the installer each time
    • Install or reinstall an older version of macOS that is not available from the App Store or macOS Recovery
    • Create a backup or emergency disk that can boot your Mac and access various utilities

    What are the requirements and limitations of DiskMaker X

    -

    To use DiskMaker X, you will need:

    -
    • A Mac running OS X Yosemite (10.10) or later
    • A macOS installer program downloaded from the App Store (the app will try to find it automatically with Spotlight)
    • An external drive or volume with at least 14 GB of available storage, formatted as Mac OS Extended (Journaled)
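    If the macOS installer is not already on your Mac, recent systems can also fetch it from the Terminal instead of the App Store. A minimal sketch, assuming the Mac you run it on is on macOS 10.15 or later and that 10.15.7 is the release you want (the version number is only an example, pick the release you actually need):

      # Downloads the full "Install macOS Catalina" app into /Applications;
      # prefix with sudo if it complains about privileges
      softwareupdate --fetch-full-installer --full-installer-version 10.15.7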

    Some limitations of DiskMaker X are:

    -
    • It does not support macOS Big Sur (11) or later (the developer recommends using another app called Install Disk Creator instead)
    • It does not support creating Windows installation disks
    • It does not support creating Linux installation disks (you can use another app called UNetbootin instead)
    • It does not support creating dual-boot or multi-boot disks

    How to download DiskMaker X 9.0

    -

    To download DiskMaker X 9.0, follow these steps:

    -
    1. Go to the official website of DiskMaker X
    2. Click on DiskMaker X Seven OS, then click on Download DiskMaker X 9.0 (6,3 MB)
    3. Save the file to your Mac and double-click on it
    4. Drag and drop the DiskMaker X app to your Applications folder

    Before you run DiskMaker X, you might want to verify the checksum of the downloaded file to make sure it is not corrupted or tampered with. To do this, follow these steps:

    -
    1. Open the Terminal app on your Mac (you can find it in the Utilities folder inside the Applications folder)
    2. Type the following command and press Enter, replacing the placeholder path with the location of the file you downloaded (the download page publishes a SHA-1 checksum, so SHA-1 is the algorithm to compare against):

      shasum -a 1 /path/to/the/downloaded/DiskMaker-X-file

    3. Compare the output with the checksum provided on the download page of DiskMaker X. It should match exactly. If not, you might have a corrupted or malicious file and you should delete it and download it again.

    How to use DiskMaker X 9.0 to create a bootable installer for macOS

    -

    To use DiskMaker X 9.0 to create a bootable installer for macOS, follow these steps:

    -

    How to prepare an external drive or volume for the installer

    -

    Before you use DiskMaker X, you need to prepare an external drive or volume that has at least 14 GB of available storage and is formatted as Mac OS Extended (Journaled). You can use any type of external drive, such as a USB flash drive, a hard drive, or an SD card. However, keep in mind that the process will erase all the data on the drive, so make sure you back up any important files before you proceed.

    -

    To prepare your external drive or volume, follow these steps:

    -
    1. Connect your external drive or volume to your Mac
    2. Open the Disk Utility app on your Mac (you can find it in the Utilities folder inside the Applications folder)
    3. Select your external drive or volume from the sidebar (not the partition or volume name, but the device name)
    4. Click on the Erase button at the top of the window
    5. Choose a name for your disk (for example, "macOS Installer")
    6. Choose Mac OS Extended (Journaled) as the format
    7. Choose GUID Partition Map as the scheme
    8. Click on Erase and wait for the process to complete
    9. Click on Done and close Disk Utility
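    If you prefer the Terminal over the Disk Utility app, the same erase can be done with diskutil. A minimal sketch, assuming the external drive shows up as /dev/disk2 (the identifier is an assumption; run diskutil list first and be certain you target the right disk, because the command wipes it):

      # Show attached disks so you can confirm the external drive's identifier
      diskutil list

      # Erase the drive as Mac OS Extended (Journaled) with a GUID partition map;
      # JHFS+ is the journaled HFS+ format and "macOS Installer" is the new volume name
      diskutil eraseDisk JHFS+ "macOS Installer" GPT /dev/disk2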

    How to launch DiskMaker X and select the macOS version

    -

    To launch DiskMaker X and select the macOS version that you want to create a bootable installer for, follow these steps:

    -
    1. Open the DiskMaker X app from your Applications folder
    2. If you see a warning message about downloading software from the internet, click on Open
    3. If you see a dialog box asking for permission to access your contacts, click on Don't Allow (this is not necessary for the app to work)
    4. If you see a dialog box asking for permission to access files on a removable volume, click on OK (this is necessary for the app to work)
    5. You will see a window with four buttons representing different versions of macOS: Catalina, Mojave, High Sierra, and Sierra. Click on the button that matches the macOS version that you have downloaded from the App Store. If you have downloaded more than one version, you can choose which one to use by clicking on Other versions...
    6. The app will try to find the macOS installer program on your Mac using Spotlight. If it finds it, it will show you its location and ask you to confirm. Click on Use this copy. If it does not find it, it will ask you to locate it manually. Click on Choose a macOS Installer and navigate to where you have saved the installer program (usually in the Applications folder). Select it and click on Choose.

    How to choose the destination disk and start the process

    -

    To choose the destination disk where you want to create the bootable installer and start the process, follow these steps:

    -
    1. The app will ask you to choose an external drive or volume where you want to create the bootable installer. You should see your prepared disk in the list. Click on it and then click on Choose this disk.
    2. The app will warn you that it will erase all the data on your disk and ask you to confirm. Click on Erase then create disk.
    3. The app will ask you for your administrator password. Enter it and click on OK.
    4. The app will start copying files from the macOS installer program to your disk. This may take some time depending on the speed of your disk and your Mac. You will see a progress bar and a log window showing the details of the process. Do not interrupt the process or eject your disk until it is finished.
    5. When the process is complete, you will see a message saying that your disk is ready and that you can use it to install macOS on any compatible Mac. You will also see a button to make a donation to the developer of DiskMaker X if you appreciate their work. Click on Quit to exit the app.

    How to troubleshoot common errors or issues

    -

    Sometimes, you might encounter some errors or issues when using DiskMaker X. Here are some of the most common ones and how to fix them:

    Error or issue: The app cannot find the macOS installer program on your Mac
    Solution: Make sure you have downloaded the macOS installer program from the App Store and that it is located in the Applications folder. If not, download it again or move it to the Applications folder. You can also try to locate it manually by clicking on Other versions... and then Choose a macOS Installer.

    Error or issue: The app cannot verify the checksum of the macOS installer program
    Solution: This means that the macOS installer program that you have downloaded might be corrupted or modified. You should delete it and download it again from the App Store. You can also try to verify the checksum manually by following these instructions.

    Error or issue: The app cannot erase or format your external drive or volume
    Solution: This might happen if your external drive or volume is locked, encrypted, or has some other issues. You should try to erase or format it manually using Disk Utility before using DiskMaker X. You can follow these instructions to do so.

    Error or issue: The app fails to copy files or create the bootable installer
    Solution: This might happen due to various reasons, such as a faulty disk, a bad connection, a software bug, or a system error. You should try to restart your Mac and your external drive or volume, and then run DiskMaker X again. You should also check if there are any updates available for DiskMaker X or your Mac's operating system. If none of these work, you might have to use another method to create a bootable installer, such as using the createinstallmedia command in Terminal.

    Error or issue: The bootable installer does not work or does not boot your Mac
    Solution: This might happen if your Mac is not compatible with the macOS version that you have created a bootable installer for, or if your Mac's firmware settings are not configured correctly. You should check the compatibility list of macOS versions and make sure your Mac meets the minimum requirements. You should also check the startup key combinations for Mac and make sure you are using the correct one to boot from the installer disk.
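    For reference, the createinstallmedia command mentioned in the last row is Apple's own Terminal tool for building the same kind of bootable installer. A minimal sketch for a Catalina installer, assuming the installer app sits in /Applications and the prepared USB volume is named "macOS Installer" (both names are assumptions, adjust them to your setup). Note that the command erases the target volume:

      # Apple's installer-media tool; it erases and rewrites the target volume
      sudo /Applications/Install\ macOS\ Catalina.app/Contents/Resources/createinstallmedia \
        --volume /Volumes/macOS\ Installer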

    How to use the bootable installer to install or reinstall macOS

    -

    To use the bootable installer that you have created with DiskMaker X to install or reinstall macOS on your Mac, follow these steps:

    -

    How to boot from the installer disk and access the utilities

    -

    To boot from the installer disk and access the utilities that can help you install or reinstall macOS, follow these steps:

    -
    1. Connect your external drive or volume with the bootable installer to your Mac
    2. Restart your Mac and hold down the Option key (or Alt key) until you see a list of available startup disks
    3. Select your bootable installer disk (it should have an orange icon and a label like "Install macOS Catalina") and press Enter
    4. You will see a window with four options: Install macOS, Restore From Time Machine Backup, Get Help Online, and Disk Utility. You can use these options depending on what you want to do with your Mac.

    How to erase or format your Mac's internal drive if needed

    -

    If you want to erase or format your Mac's internal drive before installing or reinstalling macOS, follow these steps:

    -
    1. From the window with four options, select Disk Utility and click on Continue
    2. Select your Mac's internal drive from the sidebar (not the partition or volume name, but the device name)
    3. Click on the Erase button at the top of the window
    4. Choose a name for your disk (for example, "Macintosh HD")
    5. Choose Mac OS Extended (Journaled) as the format
    6. Choose GUID Partition Map as the scheme
    7. Click on Erase and wait for the process to complete
    8. Click on Done and close Disk Utility

    How to install or reinstall macOS from the installer disk

    -

    To install or reinstall macOS from the installer disk, follow these steps:

    -
    1. From the window with four options, select Install macOS and click on Continue
    2. You will see a welcome screen with the macOS logo and version. Click on Continue
    3. You will see a license agreement. Read it and click on Agree
    4. You will see a list of available disks where you can install macOS. Select your Mac's internal drive (the one you erased or formatted if needed) and click on Install
    5. The installer will start copying files to your disk. This may take some time depending on the speed of your disk and your Mac. You will see a progress bar and an estimated time remaining. Do not interrupt the installation or shut down your Mac until it is finished.
    6. Your Mac will restart automatically when the installation is complete. You will see a setup assistant that will guide you through the initial configuration of your Mac, such as choosing a language, a keyboard layout, a network, an Apple ID, a password, and so on.
    7. When the setup assistant is done, you will see your Mac's desktop with the Finder and other apps. You can now use your Mac with the new or reinstalled macOS version.

    Conclusion

    -

    In this article, we have shown you how to download and use DiskMaker X 9.0 to create a bootable installer disk for macOS. We have also shown you how to use the bootable installer disk to install or reinstall macOS on your Mac.

    -

    DiskMaker X is a handy and easy-to-use app that can save you time and hassle when you need to install or reinstall macOS on your Mac or multiple Macs. It can also help you create a backup or emergency disk that can boot your Mac and access various utilities.

    -

    We hope you have found this article useful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

    -

    Frequently Asked Questions

    -

    Q: Is DiskMaker X safe to use?

    -

    A: Yes, DiskMaker X is safe to use as long as you download it from the official website of DiskMaker X. You can also verify the checksum of the downloaded file to make sure it is not corrupted or tampered with. However, you should always be careful when using any app that can erase or modify your disks, and make sure you back up any important data before using DiskMaker X.

    -

    Q: Can I use DiskMaker X to create a bootable installer for macOS Big Sur (11) or later?

    -

    A: No, DiskMaker X does not support macOS Big Sur (11) or later. The developer recommends using another app called Install Disk Creator instead.

    -

    Q: Can I use DiskMaker X to create a bootable installer for Windows or Linux?

    -

    A: No, DiskMaker X does not support creating bootable installers for Windows or Linux. You can use another app called UNetbootin instead.

    -

    Q: Can I use DiskMaker X to create a dual-boot or multi-boot disk?

    -

    A: No, DiskMaker X does not support creating dual-boot or multi-boot disks. You can use another app called Boot Camp Assistant instead.

    -

    Q: Can I use DiskMaker X to create a bootable installer for an older version of macOS than OS X Lion (10.7)?

    -

    A: No, DiskMaker X does not support creating bootable installers for older versions of macOS than OS X Lion (10.7). You can use another app called Carbon Copy Cloner instead.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Rainbow Six Mobile The Ultimate Tactical Shooter on Your Phone.md b/spaces/congsaPfin/Manga-OCR/logs/Download Rainbow Six Mobile The Ultimate Tactical Shooter on Your Phone.md deleted file mode 100644 index 47c2f6b0daedcaa3b2810efd8346597ba8cbf7d5..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Rainbow Six Mobile The Ultimate Tactical Shooter on Your Phone.md +++ /dev/null @@ -1,78 +0,0 @@ - -

    Download Rainbow Six Mobile: A Guide for Beginners

    -

    If you are a fan of tactical shooter games, you might have heard of Rainbow Six Siege, a popular game that pits teams of operators against each other in intense 5v5 matches. But did you know that you can also enjoy this game on your mobile device? That's right, Ubisoft has developed a mobile version of Rainbow Six Siege called Rainbow Six Mobile, and it is coming soon to Android and iOS devices. In this article, we will tell you everything you need to know about Rainbow Six Mobile, including what it is, how to download it, and how to play it.

    -

    download rainbow six siege mobile


    DOWNLOAD 🗹 https://urlca.com/2uO7ia



    -

    What is Rainbow Six Mobile?

    -

    A mobile version of the popular tactical shooter game

    -

    Rainbow Six Mobile is an upcoming mobile game based on the formula popularized in Ubisoft's prior console and PC title, Tom Clancy's Rainbow Six Siege, which was released in December 2015 and has since grown to over 70 million players. In Rainbow Six Siege, two teams of five players face off against each other. There are variations depending on exactly what the game mode is, but one team always defends while the other team is on the offensive.

    -

    Rainbow Six Mobile looks and plays like its console counterpart; it’s a 5v5 attack and defend FPS with destructible environments and a variety of specialist gadgets. Attackers use drones, explosives, and various other offensive items to push their way into the defending team’s fortified positions. Defenders use traps, cameras, and reinforcements to protect their objective and stop the attackers.

    -

    Features of Rainbow Six Mobile

    -

    Attack vs. Defense game modes

    -

    Rainbow Six Mobile features two classic game modes from Rainbow Six Siege: Secure Area and Bomb. In Secure Area, the attackers have to locate and secure a biohazard container within a certain time limit, while the defenders have to prevent them from doing so. In Bomb, the attackers have to locate and defuse one of two bombs planted by the defenders, while the defenders have to stop them or run down the clock.

    -

    Destructible environments

    -

    One of the most unique aspects of Rainbow Six Mobile is the destructible environments. You can use weapons and operators' unique abilities to breach through destructible walls and ceilings or rappel from the roof and break through windows. You can also make holes in walls and floors to create new lines of sight or entry points. The environment is a key part of your tactics, so use it wisely.

    -

    Specialized operators

    -

    Rainbow Six Mobile features a roster of highly trained operators, each with their own unique abilities and gadgets. You can choose from attackers or defenders, depending on your role in the match. Each operator has a primary and secondary weapon, as well as a special gadget that can give them an edge in combat. For example, Sledge can use his hammer to smash through walls and floors, while Valkyrie can deploy cameras to monitor enemy movements.

    -

    How to download Rainbow Six Mobile?

    -

    Requirements and availability

    -

    Rainbow Six Mobile is a free-to-play game that will be available for Android and iOS devices. However, it is not yet released globally, as Ubisoft is still testing it with a limited number of players in some regions. You can register for a chance to play before the release on the official website. You will need a Ubisoft account to do so.


    The game will require at least 2 GB of RAM and Android 7 or iOS 11 or higher to run smoothly. Once you are in a match, do not rush across open areas without checking for enemies. You should also avoid standing still for too long, as you can be easily spotted by drones or cameras. You should always move with caution and cover, and use your operators' gadgets to create or deny angles of attack.

    -

    Conclusion

    -

    Summary of the main points

    -

    Rainbow Six Mobile is a mobile game that brings the thrilling experience of Rainbow Six Siege to your smartphone or tablet. You can play as one of the many operators, each with their own unique abilities and gadgets, and compete in 5v5 matches that require strategy, teamwork, and skill. You can download the game for free from the Google Play Store or the App Store, but you will need to register for a chance to play before the global release. You can also learn more about the game by watching tutorials and tips videos, or by playing the training mode.

    -

    FAQs

    -

    Here are some frequently asked questions about Rainbow Six Mobile:

    -
      -
    • Q: Is Rainbow Six Mobile cross-play compatible with Rainbow Six Siege?
    • -
    • A: No, Rainbow Six Mobile is a separate game from Rainbow Six Siege, and they are not cross-play compatible. You will only be able to play with other players who have Rainbow Six Mobile on their devices.
    • -
    • Q: How can I customize my operator and weapon in Rainbow Six Mobile?
    • -
    • A: You can customize your operator and weapon by using the loadout menu in the game. You can change your operator's outfit, headgear, and charm, as well as your weapon's skin, sight, barrel, grip, and magazine. You can unlock more customization options by playing the game and earning rewards.
    • -
    • Q: How can I communicate with my teammates in Rainbow Six Mobile?
    • -
    • A: You can communicate with your teammates in Rainbow Six Mobile by using the voice chat or text chat features in the game. You can also use the quick chat buttons to send predefined messages or pings to your teammates.
    • -
    • Q: How can I report a bug or a cheater in Rainbow Six Mobile?
    • -
    • A: You can report a bug or a cheater in Rainbow Six Mobile by using the report button in the game. You can also contact Ubisoft support through the official website or the social media channels.
    • -
    • Q: How can I get more information about Rainbow Six Mobile?
    • -
    • A: You can get more information about Rainbow Six Mobile by visiting the official website, where you can find news, updates, videos, and more. You can also follow the official social media channels, where you can interact with other players and developers.
    • -

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best Mobile Games with 233 Leyuan App - Free Download.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best Mobile Games with 233 Leyuan App - Free Download.md deleted file mode 100644 index a084f62528df24fcb76ea39636a79b63cb2bc1a5..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best Mobile Games with 233 Leyuan App - Free Download.md +++ /dev/null @@ -1,89 +0,0 @@ -
    -

    Download 233 leyuan: A Fun and Social Gaming Platform

    -

    Are you looking for a new way to enjoy your favorite games and meet new people? If so, you might want to check out 233 leyuan, a leading mobile game platform in China that offers a variety of games, social features, and services. In this article, we will tell you what 233 leyuan is, how to download it, and what benefits you can get from it.

    -

    download 233 leyuan


    Download Ziphttps://urlca.com/2uObgP



    -

    What is 233 leyuan?

    -

    233 leyuan is a game platform that provides users with access to hundreds of games across different genres, such as puzzle, arcade, action, adventure, simulation, and more. You can find popular titles like Cut the Rope, Om Nom, Super Truck, Fishing Season, and many others. You can also discover new games that are updated regularly.

    -

    But 233 leyuan is not just a game platform. It is also a social platform that allows you to interact with other gamers and make friends. You can join communities based on your interests, share your gaming moments, chat with other players, and even play together. You can also participate in events, contests, and activities that are organized by the platform.

    -

    Moreover, 233 leyuan is a reliable platform that provides professional service and support to its users. You can contact the customer service team anytime if you have any questions or issues. You can also enjoy a safe and secure gaming environment that protects your privacy and data. You can also trust that the games are fair and tested for quality.

    -

    How to download 233 leyuan?

    -

    There are several ways to download 233 leyuan on your Android device. Here are some of them:

    -

    -

    Download from the official website

    -

    You can visit the official website of 233 leyuan at www.233leyuan.com and click on the download button. You will be redirected to a page where you can choose your preferred language (English or Chinese) and then download the APK file. After downloading the file, you need to enable the installation of apps from unknown sources on your device settings and then install the app.

    -

    Download from the APKCombo website

    -

    You can also download 233 leyuan from the APKCombo website at apkcombo.com/233leyuan. This website provides various versions of the app that you can choose from. You can also see the description, screenshots, ratings, and reviews of the app. After selecting the version you want, you can download the APK file and install it on your device.
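    If you would rather install the downloaded APK from a computer, Android's adb tool can sideload it. A minimal sketch, assuming adb is installed, USB debugging is enabled on the phone, and the file is named 233leyuan.apk (the file name is an assumption):

      # Confirm the phone is connected and authorized for debugging
      adb devices

      # Push and install the downloaded APK
      adb install 233leyuan.apk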

    -

    Download from the Google Play Store

    -

    If you prefer to download 233 leyuan from the Google Play Store, you can search for it using the ID com.meta.box or use this link: play.google.com/store/apps/details?id=com.meta.box. You can then tap on the install button and wait for the app to be downloaded and installed on your device.

    -

    What are the benefits of downloading 233 leyuan?

    -

    By downloading 233 leyuan, you can enjoy many benefits that will enhance your gaming experience and social life. Here are some of them:

    -

    Access to hundreds of games for free

    -

    You can play hundreds of games for free on 233 leyuan without any subscription or registration fees. You can also save your progress and data on the cloud and access it anytime and anywhere. You can also switch between different games without any hassle.

    Connect with other gamers and make friends

    -

    You can also socialize with other gamers and make friends on 233 leyuan. You can join various communities that suit your interests and preferences, such as casual, hardcore, female, male, etc. You can also chat with other players using text, voice, or video messages. You can also play together with your friends or join random matches with strangers. You can also share your gaming moments, tips, and feedback with others.

    -

    Enjoy exclusive offers and rewards

    -

    Another benefit of downloading 233 leyuan is that you can get exclusive offers and rewards from the platform. You can get free coins, diamonds, vouchers, and other items that you can use to play games or exchange for gifts. You can also get discounts and coupons for various products and services from the platform's partners. You can also participate in lucky draws, giveaways, and competitions that can win you more prizes.

    -

    Conclusion

    -

    233 leyuan is a fun and social gaming platform that offers a variety of games, features, and services to its users. You can download it from the official website, the APKCombo website, or the Google Play Store. By downloading it, you can access hundreds of games for free, connect with other gamers and make friends, and enjoy exclusive offers and rewards. If you are looking for a new way to enjoy your favorite games and meet new people, you should give 233 leyuan a try.

    -

    So what are you waiting for? Download 233 leyuan today and start your gaming adventure!

    -

    FAQs

    -

    What is the minimum requirement to download 233 leyuan?

    -

    The minimum requirement to download 233 leyuan is Android 4.4 or higher.

    -

    Is 233 leyuan safe to download and use?

    -

    Yes, 233 leyuan is safe to download and use. It has been verified by Google Play Protect and other security software. It also protects your privacy and data with encryption and other measures.

    -

    How can I contact the customer service of 233 leyuan?

    -

    You can contact the customer service of 233 leyuan by sending an email to service@meta-box.com or by calling +86-400-888-8888.

    -

    Can I play 233 leyuan on my PC or laptop?

    -

    No, 233 leyuan is only available for mobile devices. However, you can use an emulator software to run it on your PC or laptop.

    -

    Can I play 233 leyuan offline?

    -

    No, 233 leyuan requires an internet connection to play. However, some games may have offline modes that you can play without internet.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of Bollywood with Wapking Movie Download 2018.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of Bollywood with Wapking Movie Download 2018.md deleted file mode 100644 index af54ade5f4ca609a97d37de353cb83c782c75255..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy the Best of Bollywood with Wapking Movie Download 2018.md +++ /dev/null @@ -1,128 +0,0 @@ -
    -

    Wapking Movie Download 2018: How to Watch and Download Movies for Free

    -

    If you are a movie lover and want to watch and download movies for free, you might have heard of Wapking. Wapking is a popular website that offers a huge collection of Bollywood, Hollywood, Tamil, Telugu, Malayalam, and other regional movies. You can find movies from various genres, languages, and years on this website. But is it safe and legal to use Wapking? How can you download and watch movies from Wapking? What are the alternatives to Wapking? In this article, we will answer all these questions and more.

    -

    wapking movie download 2018


    Download Zip ::: https://urlca.com/2uO75C



    -

    Introduction

    -

    Movies are a great source of entertainment and relaxation. They can make us laugh, cry, thrill, and inspire. They can also educate us about different cultures, histories, and issues. However, watching movies in theatres or subscribing to streaming services can be expensive and inconvenient. That's why many people look for free and easy ways to watch and download movies online.

    -

    What is Wapking?

    -

    Wapking is one of the websites that provide free and easy access to movies online. It is a torrent website that uploads pirated copies of movies from various sources. It allows users to download movies in different qualities and formats, such as HD, MP4, AVI, etc. It also offers streaming options for some movies.

    -

    Why do people use Wapking?

    -

    People use Wapking for various reasons. Some of them are:

    -
    • It has a large and diverse collection of movies from different languages, genres, and years.
    • It updates its content regularly with the latest releases.
    • It does not require any registration or payment to access its content.
    • It has a simple and user-friendly interface that makes it easy to search and download movies.
    • It has multiple download links and servers that ensure fast and smooth downloads.

    What are the risks of using Wapking?

    -

    Despite its advantages, using Wapking also involves some risks. Some of them are:

    -
    • It is illegal to use Wapking as it violates the copyright laws of various countries. You can face legal actions or penalties if you are caught using or promoting Wapking.
    • It is unsafe to use Wapking as it may contain viruses, malware, or spyware that can harm your device or data. You may also encounter pop-ups, ads, or redirects that can expose you to phishing or scamming sites.
    • It is unethical to use Wapking as it harms the movie industry and the artists who work hard to create quality content. By using Wapking, you are depriving them of their rightful earnings and recognition.

    How to download movies from Wapking?

    -

    If you still want to download movies from Wapking, you can follow these steps:

    -

    Step 1: Visit the website

    -

    The first step is to visit the official website of Wapking. However, since Wapking is a banned website, it keeps changing its domain name and URL frequently. You may need to use a VPN or proxy service to access the website. You can also search for the latest working link of Wapking on Google or other search engines.

    -

    Step 2: Search for the movie

    -

    The next step is to search for the movie that you want to download. You can use the search bar on the top of the website or browse through the categories and genres on the homepage. You can also filter the movies by year, language, quality, and format.

    -

    Step 3: Choose the quality and format

    -

    Once you find the movie that you want to download, click on it and you will be redirected to another page. Here, you will see various download options for different qualities and formats, such as HD, MP4, AVI, etc. You can also see the file size and duration of each option. Choose the one that suits your preference and device compatibility.

    -

    -

    Step 4: Download the movie

    -

    After choosing the quality and format, click on the download button and you will be taken to another page. Here, you will see multiple download links and servers that you can use to download the movie. Some of these links may not work or may be slow, so you may need to try different ones until you find a working and fast one. You may also encounter some pop-ups, ads, or redirects that you need to close or skip. Once you click on a working link, your download will start automatically or you may need to confirm it manually.

    -

    How to watch movies online from Wapking?

    -

    If you don't want to download movies from Wapking, you can also watch them online from the website. Here are the steps to do so:

    -

    Step 1: Visit the website

    -

    The first step is the same as downloading movies from Wapking. You need to visit the official website of Wapking or find its latest working link using a VPN or proxy service.

    -

    Step 2: Search for the movie

    -

    The second step is also the same as downloading movies from Wapking. You need to search for the movie that you want to watch online using the search bar or browsing through the categories and genres.

    -

    Step 3: Choose the streaming option

    -

    Once you find the movie that you want to watch online, click on it and you will be redirected to another page. Here, instead of choosing the quality and format, look for the streaming option. It may be labeled as "Watch Online", "Online Streaming", "Play Online", etc. Click on it and you will be taken to another page.

    -

    Step 4: Enjoy the movie

    -

    On this page, you will see a video player that will play the movie online. You may need to wait for some time for the movie to load or buffer. You may also encounter some pop-ups, ads, or redirects that you need to close or skip. You can also adjust the volume, brightness, resolution, and subtitles of the movie according to your preference. Enjoy watching your favorite movie online from Wapking.

    -

    What are the alternatives to Wapking?

    -

    If you are looking for some alternatives to Wapking, here are some options that you can consider:

    -

    Legal alternatives

    -

    The best and safest way to watch and download movies online is to use legal alternatives that have official licenses and permissions from the movie makers and distributors. Some of these alternatives are:

    -

    Netflix

    -

    Netflix is one of the most popular and widely used streaming services in the world. It offers a huge collection of movies, shows, documentaries, and originals from different languages and genres. You can watch unlimited content on Netflix by paying a monthly or yearly subscription fee. You can also download some of the content for offline viewing.

    -

    Amazon Prime Video

    -

    Amazon Prime Video is another popular and widely used streaming service in the world. It is a part of the Amazon Prime membership that also offers other benefits such as free and fast delivery, music streaming, e-books, and more. You can watch a variety of movies, shows, documentaries, and originals from different languages and genres on Amazon Prime Video. You can also download some of the content for offline viewing.

    -

    Disney+ Hotstar

    -

    Disney+ Hotstar is a streaming service that is especially popular in India and some other Asian countries. It offers a vast collection of movies, shows, documentaries, and originals from Disney, Marvel, Star Wars, National Geographic, and more. You can also watch live sports, news, and events on Disney+ Hotstar. You can watch some of the content for free with ads or pay a monthly or yearly subscription fee for premium access. You can also download some of the content for offline viewing.

    -

    Illegal alternatives

    -

    If you are looking for some illegal alternatives to Wapking that offer free and easy access to movies online, here are some options that you can consider. However, we do not recommend or endorse these websites as they are illegal, unsafe, and unethical. Use them at your own risk.

    -

    Filmywap

    -

    Filmywap is another torrent website that offers a large and diverse collection of movies from Bollywood, Hollywood, Tamil, Telugu, Malayalam, and other regional languages. You can download movies in different qualities and formats from Filmywap. You can also watch some movies online from Filmywap.

    -

    Tamilrockers

    -

    Tamilrockers is one of the most notorious and infamous torrent websites in India. It is known for leaking the latest movies from Tamil, Telugu, Malayalam, Hindi, and other languages before or soon after their release. You can download movies in different qualities and formats from Tamilrockers. You can also watch some movies online from Tamilrockers.

    -

    Movierulz

    -

    Movierulz is another torrent website that offers a huge collection of movies from Bollywood, Hollywood, Tamil, Telugu, Malayalam, and other regional languages. You can download movies in different qualities and formats from Movierulz. You can also watch some movies online from Movierulz.

    -

    Conclusion

    -

    Wapking is a website that allows you to watch and download movies for free. However, it is not a legal or safe option as it violates the copyright laws and may contain viruses or malware. It also harms the movie industry and the artists who work hard to create quality content. Therefore, we suggest you to use legal alternatives such as Netflix, Amazon Prime Video, or Disney+ Hotstar that offer a better and safer experience.

    -

    We hope this article has helped you to understand more about Wapking movie download 2018. If you have any questions or feedback, please feel free to leave them in the comments section below.

    -

    FAQs

    -

    Here are some frequently asked questions about Wapking movie download 2018:

    -
      -
    1. Is Wapking movie download 2018 legal?
    2. -

      No, Wapking movie download 2018 is not legal as it uploads pirated copies of movies without the permission of the movie makers or distributors. It violates the copyright laws of various countries and can result in legal actions or penalties.

      -
    3. Is Wapking movie download 2018 safe?
    4. -

      No, Wapking movie download 2018 is not safe as it may contain viruses, malware, or spyware that can harm your device or data. It may also expose you to pop-ups, ads, or redirects that can lead you to phishing or scamming sites.

      -
    5. How to access Wapking movie download 2018?
    6. -

      To access Wapking movie download 2018, you need to visit the official website of Wapking or find its latest working link using a VPN or proxy service. However, we do not recommend or endorse this method as it is illegal and unsafe.

      -
    7. What are the best alternatives to Wapking movie download 2018?
    8. -

      The best alternatives to Wapking movie download 2018 are legal streaming services such as Netflix, Amazon Prime Video, or Disney+ Hotstar that offer a huge collection of movies from different languages and genres. You can watch unlimited content on these services by paying a monthly or yearly subscription fee. You can also download some of the content for offline viewing.

      -
    9. What are the risks of using Wapking movie download 2018?
    10. -

      The risks of using Wapking movie download 2018 are:

      - You can face legal actions or penalties if you are caught using or promoting Wapking movie download 2018.

      -

      - You can harm your device or data by downloading or watching movies from Wapking movie download 2018.

      -

      - You can deprive the movie industry and the artists of their rightful earnings and recognition by using Wapking movie download 2018.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Use GTA 3 Cheats on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Use GTA 3 Cheats on Your Android Device.md deleted file mode 100644 index 206b4a31100235ed6ee8edcf5d0faf867dc471b4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Use GTA 3 Cheats on Your Android Device.md +++ /dev/null @@ -1,112 +0,0 @@ - -

      Download cheats for GTA 3 apk: How to spice up your gameplay with cheat codes

      -

      Grand Theft Auto III, or GTA 3, is one of the most iconic and influential games of all time. It revolutionized the open-world genre with its immersive 3D graphics, realistic physics, and dynamic gameplay. GTA 3 lets you explore the fictional city of Liberty City, where you can do anything from stealing cars, shooting enemies, completing missions, or just causing chaos.

      -

      download cheats for gta 3 apk


      Download File ››››› https://urlca.com/2uOcim



      -

      But what if you want to make the game even more fun and exciting? What if you want to have unlimited health, money, weapons, or vehicles? What if you want to change the weather, time, or physics of the game? Well, that's where cheat codes come in handy. Cheat codes are special commands that you can enter in the game to activate various effects and features. They can help you overcome difficult situations, unlock hidden items, or just have a blast with the game.

      -

      In this article, we will show you how to download and use cheats for GTA 3 apk on your Android device. We will also provide you with a list of cheat codes that you can use in the game to enhance your gameplay experience. So, without further ado, let's get started!

      -

      How to download and use cheats for GTA 3 apk on Android devices?

      -

      If you want to use cheats for GTA 3 apk on your Android device, you will need two things: the GTA 3 apk file and a cheat app. The GTA 3 apk file is the game itself that you can download from various sources online. Just make sure that you download a compatible version for your device and that it is free of viruses or malware. The cheat app is a tool that allows you to enter cheat codes in the game. There are many cheat apps available on the Google Play Store or other websites, but we recommend using JCheater: GTA III Edition. This app is easy to use and has a lot of features.
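    One lightweight sanity check before installing an APK from an unofficial source is to inspect its signing certificate. A minimal sketch, assuming the Android SDK build-tools (which include apksigner) are installed on your computer and the downloaded file is named gta3.apk (the file name is an assumption):

      # Prints the APK's signing certificate details; a failed verification or an
      # unexpected signer is a strong hint the file has been tampered with
      apksigner verify --print-certs gta3.apk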

      -


      -

      Here are the steps to download and use cheats for GTA 3 apk on your Android device:

      -
    1. Download and install the GTA 3 apk file on your device. You may need to enable unknown sources in your settings to do this.
    2. Download and install JCheater: GTA III Edition from the Google Play Store or any other source.
    3. Launch GTA 3 on your device and start a new game or load an existing one.
    4. Press the home button on your device to minimize the game.
    5. Launch JCheater: GTA III Edition and select your game slot.
    6. You will see a list of cheat categories that you can choose from. Tap on any category to see the available cheat codes.
    7. Select the cheat codes that you want to use and tap on Apply Cheats.
    8. Return to GTA 3 and resume your game. You will see a message saying "Cheats Activated" on the screen.
    9. Enjoy the game with your cheat codes!

      List of cheat codes for GTA 3 apk

      -

      There are many cheat codes that you can use in GTA 3 apk to modify various aspects of the game. Here are some of the most popular ones:

      -

      Cheat codes for health, armor, and money

      -
    • tortoise: This cheat code gives you full armor. It can help you survive longer in combat or when chased by the police.
    • gesundheit: This cheat code gives you full health. It can heal you from any injuries or damage that you may have suffered.
    • ifiwerearichman: This cheat code gives you $250,000. It can help you buy weapons, vehicles, or properties in the game.

      Cheat codes for weapons and vehicles

      -
    • gunsgunsguns: This cheat code gives you all the weapons in the game. You can switch between them using the weapon wheel or the menu. You can also use this cheat code to refill your ammo.
    • giveusatank: This cheat code spawns a tank near you. You can enter the tank and use it to destroy anything in your way. The tank is also bulletproof and fireproof, making it very durable.
    • chittycittybb: This cheat code makes your car fly. You can use this cheat code to escape from enemies, reach high places, or just have fun. To activate this cheat code, you need to be in a car and press the horn button.

      Cheat codes for weather and time

      -
    • skincancerforme: This cheat code makes the weather sunny and clear. It can improve your visibility and make the game look more vibrant.
    • madweather: This cheat code makes the weather stormy and rainy. It can make the game more challenging and realistic.
    • timeflieswhenyou: This cheat code speeds up the passage of time in the game. It can help you skip some waiting periods or change the time of day.

      Cheat codes for gameplay and fun

      -
    • boooooring: This cheat code slows down the gameplay. It can make the game easier or more cinematic.
    • itsallgoingmaaad: This cheat code makes the pedestrians go crazy. They will start attacking each other, running away, or doing random things.
    • bangbangbang: This cheat code makes all cars explode. You can use this cheat code to create chaos, clear traffic, or just have fun.

      Conclusion

      -

      GTA 3 is a classic game that offers a lot of fun and freedom to the players. However, if you want to spice up your gameplay with some extra features and effects, you can use cheat codes to do so. Cheat codes are special commands that you can enter in the game to activate various cheats. You can use cheat codes to get health, money, weapons, vehicles, and more. You can also use cheat codes to change the weather, time, or physics of the game. To use cheat codes for GTA 3 apk on your Android device, you will need to download and install a cheat app like JCheater: GTA III Edition. Then, you can follow the steps that we have provided in this article to apply cheats in the game.

      -

      We hope that this article has helped you learn how to download and use cheats for GTA 3 apk on your Android device. We also hope that you have enjoyed using some of the cheat codes that we have listed in this article. Remember that cheat codes are meant to enhance your gameplay experience and not ruin it. So, use them wisely and responsibly. Have fun with GTA 3!

      -

      FAQs

      -
        -
      1. Are cheat codes safe to use in GTA 3 apk?
      2. -

        Cheat codes are generally safe to use in GTA 3 apk as long as you download the cheat app from a trusted source and apply the codes properly. However, some cheat codes may cause glitches, bugs, or crashes in the game, so it is advisable to save your game before activating any cheats and to avoid using too many at once.

        -
      3. Can I use cheat codes in GTA 3 online mode?
      4. -

        No, you cannot use cheat codes in GTA 3 online mode. Cheat codes are only meant for offline single-player mode. If you try to use cheat codes in online mode, you may get banned or kicked out of the server.

        -
      5. How do I disable cheat codes in GTA 3 apk?
      6. -

        To disable cheat codes in GTA 3 apk, you can either restart your game or load a previous save file. Alternatively, you can use JCheater: GTA III Edition to turn off any active cheats.

        -
      7. What are some of the best cheat codes for GTA 3 apk?
      8. -

        Some of the best cheat codes for GTA 3 apk are:

        -
          -
        • nopoliceplease: This cheat code removes all the wanted stars from your character. It can help you escape from the police or complete missions without any trouble.
        • -
        • ilikedressingup: This cheat code changes your character's appearance. You can use this cheat code to disguise yourself, try different outfits, or just have fun.
        • -
        • nastylimbscheat: This cheat code enables gore mode. You can use this cheat code to make the game more violent and bloody. You can also dismember enemies with your weapons.
        • -
        -
      9. Where can I find more cheat codes for GTA 3 apk?
      10. -

        You can find more cheat codes for GTA 3 apk on various websites, forums, blogs, or videos online. You can also use JCheater: GTA III Edition to access more cheat codes in the app.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congxin95/BMTools-demo/README.md b/spaces/congxin95/BMTools-demo/README.md deleted file mode 100644 index a01f24750822da7d9afb84e729a1f77c82303a8a..0000000000000000000000000000000000000000 --- a/spaces/congxin95/BMTools-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BMTools Demo -emoji: 🏃 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/contluForse/HuggingGPT/assets/AutoCAD LT 2016 Herunterladen Riss 64 Bits.md b/spaces/contluForse/HuggingGPT/assets/AutoCAD LT 2016 Herunterladen Riss 64 Bits.md deleted file mode 100644 index 6f0c3d624ef824bbff43d59b0ba621c0e0e959e1..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/AutoCAD LT 2016 Herunterladen Riss 64 Bits.md +++ /dev/null @@ -1,132 +0,0 @@ - -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      AutoCAD LT 2016 is a powerful application for 2D design and drafting. It allows you to create, edit, and share professional drawings with ease. AutoCAD LT 2016 is compatible with Windows 10, as well as Windows 7 and 8. However, you may need to download and install a service pack to ensure the best performance and stability of the software.

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits


      Download File: https://ssurll.com/2uzypk



      -

      How to Download AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      If you have a subscription to AutoCAD LT 2016, you can download the software from your Autodesk Account. You can also download previous versions of AutoCAD LT up to three years back. To do this, follow these steps:

      -
        -
      1. Sign in to Autodesk Account at manage.autodesk.com.
      2. -
      3. Under All Products and Services, find AutoCAD LT 2016.
      4. -
      5. In the product tile, click the current version and select a previous version.
      6. -
      7. Download your product.
      8. -
      -

      If you need to download older versions of AutoCAD LT, such as AutoCAD LT 2016 Herunterladen Riss 64 Bits, you can use the Autodesk Assistant. This tool will help you find and download the software you need. You will need your serial number and product key for the installation. You can find them in your Autodesk Account or at Look up product keys.

      -

      How to Install AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      Before you install AutoCAD LT 2016 Herunterladen Riss 64 Bits, make sure your system meets the minimum requirements. You can check them at System requirements for AutoCAD LT 2016. You also need to have an internet connection and enough disk space for the installation.
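
      If you prefer to script this check rather than verify it by hand, the short Python sketch below compares a Windows machine against the minimums quoted in the FAQ at the end of this article (2 GB of RAM and 4 GB of free disk space). It is only an illustration: it assumes the third-party psutil package is installed for the RAM reading, and the target drive is a placeholder you would replace with the drive you plan to install to.

```python
# Rough pre-install check against the minimums listed in this article.
# Assumes `pip install psutil`; the target drive below is a placeholder.
import platform
import shutil

import psutil

MIN_RAM_GB = 2         # article minimum (4 GB or more recommended)
MIN_DISK_GB = 4        # free space needed for the installation
TARGET_DRIVE = "C:\\"  # placeholder: drive where AutoCAD LT would go

def meets_minimums() -> bool:
    ram_gb = psutil.virtual_memory().total / 1024 ** 3
    free_gb = shutil.disk_usage(TARGET_DRIVE).free / 1024 ** 3
    print(f"OS        : {platform.system()} {platform.release()} ({platform.machine()})")
    print(f"RAM       : {ram_gb:.1f} GB (minimum {MIN_RAM_GB} GB)")
    print(f"Free disk : {free_gb:.1f} GB (minimum {MIN_DISK_GB} GB)")
    return ram_gb >= MIN_RAM_GB and free_gb >= MIN_DISK_GB

if __name__ == "__main__":
    print("Meets the listed minimums:", meets_minimums())
```

      This only covers RAM and disk space; items such as the graphics card and display resolution still need to be checked manually against the list in the FAQ.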

      -

      To install AutoCAD LT 2016 Herunterladen Riss 64 Bits, follow these steps:

      -
        -
      1. Run the downloaded file and follow the instructions on the screen.
      2. -
      3. Enter your serial number and product key when prompted.
      4. -
      5. Select the components and options you want to install.
      6. -
      7. Click Install and wait for the installation to complete.
      8. -
      9. Restart your computer if required.
      10. -
      -

      How to Apply Service Pack 1 for AutoCAD LT 2016

      -

      Autodesk releases service packs to fix bugs and improve the functionality of its products. Service Pack 1 for AutoCAD LT 2016 addresses some issues related to Windows 10 compatibility, performance, stability, and security. It is recommended that you apply this service pack to your software as soon as possible.

      -

      To apply Service Pack 1 for AutoCAD LT 2016, follow these steps:

      -

      -
        -
      1. Download the appropriate service pack file from AutoCAD 2016 and AutoCAD LT 2016 Service Pack 1 Readme.
      2. -
      3. Close all running applications, including AutoCAD LT 2016.
      4. -
      5. Run the service pack file and follow the instructions on the screen.
      6. -
      7. Restart your computer if required.
      8. -
      -

      Conclusion

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits is a great software for 2D design and drafting. It offers many features and tools to help you create professional drawings. You can download and install it easily from your Autodesk Account or using the Autodesk Assistant. You should also apply Service Pack 1 for AutoCAD LT 2016 to ensure the best performance and stability of the software. If you have any questions or issues with the installation or usage of AutoCAD LT 2016 Herunterladen Riss 64 Bits, you can contact Autodesk Support or visit the Installation & licensing forum.

      -

      How to Use AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits is software that allows you to create and edit 2D drawings with precision and accuracy. You can use it for various purposes, such as architectural design, engineering, construction, and manufacturing. AutoCAD LT 2016 Herunterladen Riss 64 Bits has many features and tools to help you with your projects, such as:

      -
        -
      • A user-friendly interface that lets you access commands and options easily.
      • -
      • A ribbon that organizes tools into tabs and panels.
      • -
      • A command line that lets you enter commands and options directly.
      • -
      • A status bar that shows the current settings and modes.
      • -
      • A drawing area that displays your drawing and allows you to zoom and pan.
      • -
      • A navigation bar that lets you switch between different views and perspectives.
      • -
      • A properties palette that shows the properties of the selected objects.
      • -
      • A layer manager that lets you control the visibility and appearance of layers.
      • -
      • A design center that lets you access blocks, styles, and other content from other drawings.
      • -
      • A tool palette that lets you insert blocks, hatches, dimensions, and other objects.
      • -
      -

      To use AutoCAD LT 2016 Herunterladen Riss 64 Bits, you need to follow these basic steps:

      -
        -
      1. Create a new drawing or open an existing one.
      2. -
      3. Set up your drawing units, limits, scale, and orientation.
      4. -
      5. Draw objects using the drawing tools, such as lines, circles, arcs, polylines, etc.
      6. -
      7. Modify objects using the editing tools, such as move, copy, rotate, trim, extend, etc.
      8. -
      9. Add annotations using the annotation tools, such as text, dimensions, leaders, tables, etc.
      10. -
      11. Apply styles and formats using the style tools, such as layers, colors, linetypes, lineweights, etc.
      12. -
      13. Save and print your drawing using the file tools.
      14. -
      -

      Benefits of AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits is software that offers many benefits for 2D design and drafting. Some of the benefits are:

      -
        -
      • It is compatible with Windows 10 and other operating systems.
      • -
      • It is easy to learn and use for beginners and professionals alike.
      • -
      • It is fast and reliable for creating and editing drawings.
      • -
      • It supports various file formats and standards for importing and exporting drawings.
      • -
      • It has a large online community and support network for troubleshooting and learning.
      • -
      -

      If you want to create professional 2D drawings with ease and efficiency, AutoCAD LT 2016 Herunterladen Riss 64 Bits is the software for you. You can download it from your Autodesk Account or using the Autodesk Assistant. You should also apply Service Pack 1 for AutoCAD LT 2016 to ensure the best performance and stability of the software. If you have any questions or issues with the installation or usage of AutoCAD LT 2016 Herunterladen Riss 64 Bits, you can contact Autodesk Support or visit the Installation & licensing forum.

      -

      Features of AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits is software that offers many features for 2D design and drafting. Some of the features are:

      -
        -
      • Smart Dimensioning: This feature allows you to create accurate dimensions automatically based on the context of your drawing.
      • -
      • Revision Clouds: This feature allows you to create revision clouds to highlight changes in your drawing.
      • -
      • PDF Enhancements: This feature allows you to import and export PDF files with more quality and fidelity.
      • -
      • Coordination Model: This feature allows you to attach and view Navisworks and BIM 360 Glue models in your drawing.
      • -
      • System Variable Monitor: This feature allows you to monitor and restore system variables that affect your drawing settings.
      • -
      -

      These are just some of the features of AutoCAD LT 2016 Herunterladen Riss 64 Bits. You can explore more features by visiting the AutoCAD LT Features page.

      -

      Tips and Tricks for AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits is software that can help you improve your productivity and efficiency in 2D design and drafting. Here are some tips and tricks that can help you get the most out of the software:

      -
        -
      • Use keyboard shortcuts: Keyboard shortcuts can help you access commands and options faster and easier. You can find a list of keyboard shortcuts at Keyboard Shortcuts Guide.
      • -
      • Use grips: Grips are small squares that appear on objects when you select them. You can use grips to modify objects without using commands. You can move, rotate, scale, stretch, copy, or mirror objects using grips.
      • -
      • Use object snaps: Object snaps are tools that help you snap to precise points on objects. You can use object snaps to draw or edit objects with accuracy and precision. You can activate object snaps by pressing F3 or clicking the OSNAP button on the status bar.
      • -
      • Use dynamic input: Dynamic input is a feature that displays command prompts, options, and values near the cursor. You can use dynamic input to enter commands and options without looking at the command line. You can activate dynamic input by pressing F12 or clicking the DYN button on the status bar.
      • -
      • Use online resources: Online resources are sources of information and help that you can access from within the software. You can use online resources to learn new skills, find solutions, or get support. Some of the online resources are:
      • -
          -
        • The Help system: The Help system provides comprehensive documentation and tutorials on how to use the software. You can access the Help system by pressing F1 or clicking the Help button on the ribbon.
        • -
        • The Learning Center: The Learning Center provides interactive lessons and videos on how to use the software. You can access the Learning Center by clicking the Learn tab on the ribbon.
        • -
        • The Autodesk Knowledge Network: The Autodesk Knowledge Network provides articles, forums, blogs, videos, and webinars on various topics related to the software. You can access the Autodesk Knowledge Network by clicking the Online Resources button on the ribbon.
        • -
        -
      -

      These are just some of the tips and tricks for AutoCAD LT 2016 Herunterladen Riss 64 Bits. You can find more tips and tricks by visiting the AutoCAD LT Tips & Tricks page.

      -

      Comparison of AutoCAD LT 2016 Herunterladen Riss 64 Bits and AutoCAD 2016

      -

      AutoCAD LT 2016 Herunterladen Riss 64 Bits and AutoCAD 2016 are both software products from Autodesk that are used for 2D design and drafting. However, they have some differences in terms of features, capabilities, and price. Here are some of the main differences between them:

      -
        -
      • AutoCAD LT 2016 Herunterladen Riss 64 Bits is a lighter and cheaper version of AutoCAD 2016. It has fewer features and tools than AutoCAD 2016, but it still provides the essential functions for 2D design and drafting.
      • -
      • AutoCAD LT 2016 Herunterladen Riss 64 Bits does not support 3D modeling and rendering. It also does not support customization and programming with LISP, VBA, .NET, or ObjectARX.
      • -
      • AutoCAD LT 2016 Herunterladen Riss 64 Bits does not include some of the advanced features and tools of AutoCAD 2016, such as parametric drawing, dynamic blocks, sheet sets, data extraction, data linking, point clouds, geographic location, and reality capture.
      • -
      • AutoCAD LT 2016 Herunterladen Riss 64 Bits is more suitable for individual users or small businesses that need a simple and affordable software for 2D design and drafting.
      • -
      • AutoCAD 2016 is a more comprehensive and powerful application for 2D and 3D design and drafting. It has more features and tools than AutoCAD LT 2016 Herunterladen Riss 64 Bits, but it also costs more.
      • -
      • AutoCAD 2016 supports 3D modeling and rendering. It also supports customization and programming with LISP, VBA, .NET, or ObjectARX.
      • -
      • AutoCAD 2016 includes some of the advanced features and tools of AutoCAD LT 2016 Herunterladen Riss 64 Bits, such as smart dimensioning, revision clouds, PDF enhancements, coordination model, system variable monitor, etc. It also includes some additional features and tools that are not available in AutoCAD LT 2016 Herunterladen Riss 64 Bits, such as parametric drawing, dynamic blocks, sheet sets, data extraction, data linking, point clouds, geographic location, reality capture, etc.
      • -
      • AutoCAD 2016 is more suitable for professional users or large businesses that need a versatile and robust software for 2D and 3D design and drafting.
      • -
      -

      If you want to compare the features of AutoCAD LT 2016 Herunterladen Riss 64 Bits and AutoCAD 2016 in more detail, you can visit the Compare AutoCAD vs. AutoCAD LT page.
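
      To give the customization point above a concrete flavour, here is a minimal sketch of driving full AutoCAD (not AutoCAD LT, which exposes no programming interface) from Python over its COM interface. It is an illustration, not official Autodesk documentation: it assumes a Windows machine with full AutoCAD installed and the third-party pyautocad package (`pip install pyautocad`), and the coordinates are arbitrary.

```python
# Illustrative only: requires full AutoCAD (not LT) on Windows and
# the third-party pyautocad package, which wraps AutoCAD's COM API.
from pyautocad import Autocad, APoint

acad = Autocad(create_if_not_exists=True)  # attach to (or start) AutoCAD
acad.prompt("Hello from Python\n")         # write to the AutoCAD command line
print("Active drawing:", acad.doc.Name)

# Draw a simple line and circle in model space.
p1 = APoint(0, 0)
p2 = APoint(100, 50)
acad.model.AddLine(p1, p2)
acad.model.AddCircle(p1, 25)
```

      AutoCAD LT 2016 Herunterladen Riss 64 Bits cannot run automation like this, which is exactly the trade-off the comparison above describes.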

      -

      Frequently Asked Questions about AutoCAD LT 2016 Herunterladen Riss 64 Bits

      -

      Here are some of the frequently asked questions about AutoCAD LT 2016 Herunterladen Riss 64 Bits:

      -
        -
      1. What are the system requirements for AutoCAD LT 2016 Herunterladen Riss 64 Bits?
      2. -

        The system requirements for AutoCAD LT 2016 Herunterladen Riss 64 Bits are:

        -
          -
        • Operating System: Windows XP/Vista/7/8/10 (32-bit or 64-bit)
        • -
        • Processor: Intel Pentium IV or AMD Athlon dual-core processor with SSE2 technology (3 GHz or higher recommended)
        • -
        • Memory: 2 GB RAM (4 GB or higher recommended)
        • -
        • Disk Space: 4 GB free disk space for installation
        • -
        • Display: 1024 x 768 display resolution with true color (1600 x 1050 or higher recommended)
        • -
        • Graphics: Windows display adapter capable of DirectX® 9.0c with Shader Model 3 (DirectX 11 compliant card recommended)
        • -
        • Pointing Device: MS-Mouse compliant device
        • -
        • Browser: Windows Internet Explorer® 9.0 (or later)
        • -
        • .NET Framework: .NET Framework Version 4.5
        • -
        - -
      3. How can I get a trial version of AutoCAD LT 2016 Herunterladen Riss 64 Bits?
      4. -

        You can get a free trial version of AutoCAD LT 2016 Herunterladen Riss 64 Bits from the official Autodesk website. The trial lets you use the software for a limited period so that you can evaluate it before buying a subscription or license. -

        Conclusion

        -

        AutoCAD LT 2016 Herunterladen Riss 64 Bits is software that can help you create and edit 2D drawings with ease and efficiency. It has many features and tools to help you with your projects, such as smart dimensioning, revision clouds, PDF enhancements, coordination model, system variable monitor, etc. It is compatible with Windows 10 and other operating systems. You can download and install it easily from your Autodesk Account or using the Autodesk Assistant. You should also apply Service Pack 1 for AutoCAD LT 2016 to ensure the best performance and stability of the software. If you have any questions or issues with the installation or usage of AutoCAD LT 2016 Herunterladen Riss 64 Bits, you can contact Autodesk Support or visit the Installation & licensing forum.

        -

        If you want to compare AutoCAD LT 2016 Herunterladen Riss 64 Bits with AutoCAD 2016, you can visit the Compare AutoCAD vs. AutoCAD LT page. You can also find more tips and tricks for AutoCAD LT 2016 Herunterladen Riss 64 Bits by visiting the AutoCAD LT Tips & Tricks page. You can also access online resources such as the Help system, the Learning Center, and the Autodesk Knowledge Network from within the software.

        -

        AutoCAD LT 2016 Herunterladen Riss 64 Bits is a great software for 2D design and drafting. It offers many benefits for individual users or small businesses that need a simple and affordable software for 2D design and drafting. If you want to create professional 2D drawings with ease and efficiency, AutoCAD LT 2016 Herunterladen Riss 64 Bits is the software for you.

        -
        -
        \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/necks/multilevel_neck.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/necks/multilevel_neck.py deleted file mode 100644 index 0b86c073cd1a72354d2426846125e80f7ab20dbc..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/necks/multilevel_neck.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from annotator.mmpkg.mmcv.cnn import ConvModule - -from ..builder import NECKS - - -@NECKS.register_module() -class MultiLevelNeck(nn.Module): - """MultiLevelNeck. - - A neck structure connect vit backbone and decoder_heads. - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - scales (List[int]): Scale factors for each input feature map. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - scales=[0.5, 1, 2, 4], - norm_cfg=None, - act_cfg=None): - super(MultiLevelNeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.scales = scales - self.num_outs = len(scales) - self.lateral_convs = nn.ModuleList() - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.lateral_convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - for _ in range(self.num_outs): - self.convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - print(inputs[0].shape) - inputs = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # for len(inputs) not equal to self.num_outs - if len(inputs) == 1: - inputs = [inputs[0] for _ in range(self.num_outs)] - outs = [] - for i in range(self.num_outs): - x_resize = F.interpolate( - inputs[i], scale_factor=self.scales[i], mode='bilinear') - outs.append(self.convs[i](x_resize)) - return tuple(outs) diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py deleted file mode 100644 index 4d18f0f7816431bed6af9d58319c6435bdf5c971..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py +++ /dev/null @@ -1,45 +0,0 @@ -import numpy as np - -from basicsr.utils.matlab_functions import bgr2ycbcr - - -def reorder_image(img, input_order='HWC'): - """Reorder images to 'HWC' order. - - If the input_order is (h, w), return (h, w, 1); - If the input_order is (c, h, w), return (h, w, c); - If the input_order is (h, w, c), return as it is. - - Args: - img (ndarray): Input image. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - If the input image shape is (h, w), input_order will not have - effects. Default: 'HWC'. - - Returns: - ndarray: reordered image. - """ - - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. 
Supported input_orders are ' "'HWC' and 'CHW'") - if len(img.shape) == 2: - img = img[..., None] - if input_order == 'CHW': - img = img.transpose(1, 2, 0) - return img - - -def to_y_channel(img): - """Change to Y channel of YCbCr. - - Args: - img (ndarray): Images with range [0, 255]. - - Returns: - (ndarray): Images with range [0, 255] (float type) without round. - """ - img = img.astype(np.float32) / 255. - if img.ndim == 3 and img.shape[2] == 3: - img = bgr2ycbcr(img, y_only=True) - img = img[..., None] - return img * 255. diff --git a/spaces/dawood/chatbot-guide-multimodal/README.md b/spaces/dawood/chatbot-guide-multimodal/README.md deleted file mode 100644 index fbb21bc709d9a5e2f83bb0b63496c6aef185ccd9..0000000000000000000000000000000000000000 --- a/spaces/dawood/chatbot-guide-multimodal/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatbot Guide Multimodal -emoji: 💩 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/certifi/__main__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/certifi/__main__.py deleted file mode 100644 index 8945b5da857f4a7dec2b84f1225f012f6098418c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/certifi/__main__.py +++ /dev/null @@ -1,12 +0,0 @@ -import argparse - -from certifi import contents, where - -parser = argparse.ArgumentParser() -parser.add_argument("-c", "--contents", action="store_true") -args = parser.parse_args() - -if args.contents: - print(contents()) -else: - print(where()) diff --git a/spaces/deaaassws/QQsign1/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/deaaassws/QQsign1/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 4e44bab8aa65d16e35e935f1273de2e98ce80cf9..0000000000000000000000000000000000000000 --- a/spaces/deaaassws/QQsign1/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. 
You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.0.jar;%APP_HOME%\lib\unidbg-fix.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! 
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/declare-lab/tango/diffusers/examples/rl/run_diffuser_locomotion.py b/spaces/declare-lab/tango/diffusers/examples/rl/run_diffuser_locomotion.py deleted file mode 100644 index adf6d1443d1c2e7caca7bdc1a26da1f2f186b8f9..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/rl/run_diffuser_locomotion.py +++ /dev/null @@ -1,59 +0,0 @@ -import d4rl # noqa -import gym -import tqdm -from diffusers.experimental import ValueGuidedRLPipeline - - -config = { - "n_samples": 64, - "horizon": 32, - "num_inference_steps": 20, - "n_guide_steps": 2, # can set to 0 for faster sampling, does not use value network - "scale_grad_by_std": True, - "scale": 0.1, - "eta": 0.0, - "t_grad_cutoff": 2, - "device": "cpu", -} - - -if __name__ == "__main__": - env_name = "hopper-medium-v2" - env = gym.make(env_name) - - pipeline = ValueGuidedRLPipeline.from_pretrained( - "bglick13/hopper-medium-v2-value-function-hor32", - env=env, - ) - - env.seed(0) - obs = env.reset() - total_reward = 0 - total_score = 0 - T = 1000 - rollout = [obs.copy()] - try: - for t in tqdm.tqdm(range(T)): - # call the policy - denorm_actions = pipeline(obs, planning_horizon=32) - - # execute action in environment - next_observation, reward, terminal, _ = env.step(denorm_actions) - score = env.get_normalized_score(total_reward) - - # update return - total_reward += reward - total_score += score - print( - f"Step: {t}, Reward: {reward}, Total Reward: {total_reward}, Score: {score}, Total Score:" - f" {total_score}" - ) - - # save observations for rendering - rollout.append(next_observation.copy()) - - obs = next_observation - except KeyboardInterrupt: - pass - - print(f"Total reward: {total_reward}") diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/__init__.py deleted file mode 100644 index 421099a6d746f072222567bbe5f313da5de36206..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/__init__.py +++ /dev/null @@ -1,139 +0,0 @@ -from ..utils import ( - OptionalDependencyNotAvailable, - is_flax_available, - is_k_diffusion_available, - is_librosa_available, - is_note_seq_available, - is_onnx_available, - is_torch_available, - is_transformers_available, -) - - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_pt_objects import * # noqa F403 -else: - from .dance_diffusion import DanceDiffusionPipeline - from .ddim import DDIMPipeline - from .ddpm import DDPMPipeline - from .dit import DiTPipeline - from .latent_diffusion import LDMSuperResolutionPipeline - from .latent_diffusion_uncond import LDMPipeline - from .pipeline_utils import AudioPipelineOutput, DiffusionPipeline, ImagePipelineOutput - from .pndm import PNDMPipeline - from .repaint import RePaintPipeline - from .score_sde_ve import ScoreSdeVePipeline - from .stochastic_karras_ve import KarrasVePipeline - -try: - if not (is_torch_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_librosa_objects import * # noqa F403 -else: - from .audio_diffusion import AudioDiffusionPipeline, Mel - -try: - if not (is_torch_available() and is_transformers_available()): - raise 
OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .alt_diffusion import AltDiffusionImg2ImgPipeline, AltDiffusionPipeline - from .audioldm import AudioLDMPipeline - from .latent_diffusion import LDMTextToImagePipeline - from .paint_by_example import PaintByExamplePipeline - from .semantic_stable_diffusion import SemanticStableDiffusionPipeline - from .stable_diffusion import ( - CycleDiffusionPipeline, - StableDiffusionAttendAndExcitePipeline, - StableDiffusionControlNetPipeline, - StableDiffusionDepth2ImgPipeline, - StableDiffusionImageVariationPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionInpaintPipelineLegacy, - StableDiffusionInstructPix2PixPipeline, - StableDiffusionLatentUpscalePipeline, - StableDiffusionModelEditingPipeline, - StableDiffusionPanoramaPipeline, - StableDiffusionPipeline, - StableDiffusionPix2PixZeroPipeline, - StableDiffusionSAGPipeline, - StableDiffusionUpscalePipeline, - StableUnCLIPImg2ImgPipeline, - StableUnCLIPPipeline, - ) - from .stable_diffusion_safe import StableDiffusionPipelineSafe - from .text_to_video_synthesis import TextToVideoSDPipeline - from .unclip import UnCLIPImageVariationPipeline, UnCLIPPipeline - from .versatile_diffusion import ( - VersatileDiffusionDualGuidedPipeline, - VersatileDiffusionImageVariationPipeline, - VersatileDiffusionPipeline, - VersatileDiffusionTextToImagePipeline, - ) - from .vq_diffusion import VQDiffusionPipeline - -try: - if not is_onnx_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_onnx_objects import * # noqa F403 -else: - from .onnx_utils import OnnxRuntimeModel - -try: - if not (is_torch_available() and is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403 -else: - from .stable_diffusion import ( - OnnxStableDiffusionImg2ImgPipeline, - OnnxStableDiffusionInpaintPipeline, - OnnxStableDiffusionInpaintPipelineLegacy, - OnnxStableDiffusionPipeline, - OnnxStableDiffusionUpscalePipeline, - StableDiffusionOnnxPipeline, - ) - -try: - if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 -else: - from .stable_diffusion import StableDiffusionKDiffusionPipeline - -try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_flax_objects import * # noqa F403 -else: - from .pipeline_flax_utils import FlaxDiffusionPipeline - - -try: - if not (is_flax_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_flax_and_transformers_objects import * # noqa F403 -else: - from .stable_diffusion import ( - FlaxStableDiffusionControlNetPipeline, - FlaxStableDiffusionImg2ImgPipeline, - FlaxStableDiffusionInpaintPipeline, - FlaxStableDiffusionPipeline, - ) -try: - if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from 
..utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403 -else: - from .spectrogram_diffusion import MidiProcessor, SpectrogramDiffusionPipeline diff --git a/spaces/diacanFperku/AutoGPT/Legion Of Nagash Pdf Download ((BETTER)).md b/spaces/diacanFperku/AutoGPT/Legion Of Nagash Pdf Download ((BETTER)).md deleted file mode 100644 index d187e9dcb68621a38db74f28380a809b0e18e4d2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Legion Of Nagash Pdf Download ((BETTER)).md +++ /dev/null @@ -1,12 +0,0 @@ -
        -

        How to Download Legions of Nagash Battletome PDF for Free

        -

        If you are a fan of Warhammer Age of Sigmar and want to learn more about the undead armies of Nagash, the Lord of Death, you might be interested in downloading the Legions of Nagash Battletome PDF for free. This book contains the rules, lore, and background for the four legions of Nagash: the Grand Host of Nagash, the Legion of Sacrament, the Legion of Blood, and the Legion of Night. You will also find warscrolls for all the units and heroes in these legions, as well as allegiance abilities, artefacts, spells, and battalions.

        -

        legion of nagash pdf download


        Download File ✺✺✺ https://gohhs.com/2uFU3v



        -

        But how can you download Legions of Nagash Battletome PDF for free? There are a few ways to do this, but you should be careful about the sources you use. Some websites might offer you a free download link, but they could also contain viruses, malware, or other harmful content. You should always scan any file you download with a reliable antivirus software before opening it.
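
      As a small illustration of that advice, the hedged Python sketch below downloads a file and performs a basic sanity check that it really is a PDF before you open it. The URL and output name are placeholders, and checking the %PDF header and file size is no substitute for a proper antivirus scan.

```python
# Minimal download-and-sanity-check sketch; the URL is a placeholder.
# This does NOT replace scanning the file with antivirus software.
import os

import requests

URL = "https://example.com/some-battletome.pdf"  # placeholder link
OUT = "battletome.pdf"

def download_and_check(url: str, path: str) -> bool:
    resp = requests.get(url, stream=True, timeout=30)
    resp.raise_for_status()
    with open(path, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=8192):
            fh.write(chunk)
    with open(path, "rb") as fh:
        header = fh.read(5)
    # A real PDF starts with the magic bytes "%PDF" and is not tiny.
    return header.startswith(b"%PDF") and os.path.getsize(path) > 10_000

if __name__ == "__main__":
    print("Looks like a real PDF:", download_and_check(URL, OUT))
```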

        -

      One way to download Legions of Nagash Battletome PDF for free is to use a file-sharing website like idoc.pub. This website allows users to upload and share PDF files with others. You can find the Legions of Nagash Battletome PDF by searching for it on the website or by using this link: https://idoc.pub/download/legions-of-nagash-battletome-vnd1qzev3wnx. However, you should be aware that this website is not affiliated with Games Workshop or Warhammer Age of Sigmar, and it might not have permission to share this book. You should also check the quality and accuracy of the PDF file before using it.

        -

      Another way to download Legions of Nagash Battletome PDF for free is to use a torrent website like The Pirate Bay. Torrent websites allow users to download files from other users who have them on their computers. You can find the Legions of Nagash Battletome PDF by searching for it on the website or by using this link: https://thepiratebay.org/description.php?id=20073667. However, you should be aware that downloading copyrighted material from torrent websites is illegal in many countries and regions, and it could expose you to legal risks or penalties. You should also check the quality and accuracy of the PDF file before using it.

        -

        -

        The best way to download Legions of Nagash Battletome PDF for free is to use a legitimate source like Games Workshop's official website. Games Workshop sometimes offers free downloads of their books and rules as part of their promotions or events. You can find the Legions of Nagash Battletome PDF by visiting their website or by using this link: https://www.games-workshop.com/en-GB/Battletome-Legions-of-Nagash-2018-eng. However, you should be aware that this offer might not be available at all times or in all regions. You should also respect Games Workshop's intellectual property rights and not share or distribute their books without their permission.

        -

        We hope this article has helped you find out how to download Legions of Nagash Battletome PDF for free. Remember to always use safe and legal sources when downloading any file online. Happy reading!

        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Liebherr Lidos Epc And Service S 05 2015 Multilingual Zip.md b/spaces/diacanFperku/AutoGPT/Liebherr Lidos Epc And Service S 05 2015 Multilingual Zip.md deleted file mode 100644 index 5064cacdf7b40c5c673c05936b9caaf21d5a2c5b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Liebherr Lidos Epc And Service S 05 2015 Multilingual Zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

        liebherr lidos epc and service s 05 2015 multilingual zip


      Download File: https://gohhs.com/2uFSUi



        -
        -Liebherr Lidos Epc And Service S 05 2015 Multilingual Zip. Download. LIEBHERR Lidos EPC & Service Manuals Multilingual + Activator Win | 445 MB ... Given ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/digitalOSHO/webui/app.py b/spaces/digitalOSHO/webui/app.py deleted file mode 100644 index 776cd480bb9d4c86cfadef6a7fe501b62a8e0781..0000000000000000000000000000000000000000 --- a/spaces/digitalOSHO/webui/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q https://huggingface.co/ckpt/anything-v3-vae-swapped/resolve/main/anything-v3-vae-swapped.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/anything-v3-vae-swapped.ckpt") - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone 
https://github.com/camenduru/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q 
https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/commons.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Taffy-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/setup_ffmpeg.py b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/setup_ffmpeg.py deleted file mode 100644 index 7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/setup_ffmpeg.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import sys -import re -from pathlib import Path -import winreg - -def check_ffmpeg_path(): - path_list = os.environ['Path'].split(';') - ffmpeg_found = False - - for path in path_list: - if 'ffmpeg' in path.lower() and 'bin' in path.lower(): - ffmpeg_found = True - print("FFmpeg already installed.") - break - - return ffmpeg_found - -def add_ffmpeg_path_to_user_variable(): - ffmpeg_bin_path = Path('.\\ffmpeg\\bin') - if ffmpeg_bin_path.is_dir(): - abs_path = str(ffmpeg_bin_path.resolve()) - - try: - key = winreg.OpenKey( - winreg.HKEY_CURRENT_USER, - r"Environment", - 0, - winreg.KEY_READ | winreg.KEY_WRITE - ) - - try: - current_path, _ = winreg.QueryValueEx(key, "Path") - if abs_path not in current_path: - new_path = f"{current_path};{abs_path}" - winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path) - print(f"Added FFmpeg path to user variable 'Path': {abs_path}") - else: - print("FFmpeg path already exists in the user variable 'Path'.") - finally: - winreg.CloseKey(key) - except WindowsError: - print("Error: Unable to modify user variable 'Path'.") - sys.exit(1) - - else: - print("Error: ffmpeg\\bin folder not found in the current path.") - sys.exit(1) - -def main(): - if not check_ffmpeg_path(): - add_ffmpeg_path_to_user_variable() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/textsnake_r50_fpn_unet.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/textsnake_r50_fpn_unet.py deleted file mode 100644 index 7d74f376b8c635451a3036e780ffc88e7640bf2c..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/textsnake_r50_fpn_unet.py +++ /dev/null @@ -1,22 +0,0 @@ -model = dict( - type='TextSnake', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=True, - style='caffe'), - neck=dict( - type='FPN_UNet', in_channels=[256, 512, 1024, 2048], out_channels=32), - bbox_head=dict( - type='TextSnakeHead', - in_channels=32, - loss=dict(type='TextSnakeLoss'), - postprocessor=dict( - type='TextSnakePostprocessor', text_repr_type='poly')), - train_cfg=None, - test_cfg=None) diff --git a/spaces/dorkai/text-generation-webui-main/extensions/openai/cache_embedding_model.py b/spaces/dorkai/text-generation-webui-main/extensions/openai/cache_embedding_model.py deleted file mode 100644 index 44ac1dcd663d09a9f36bf9793ee2fa653339cbb3..0000000000000000000000000000000000000000 --- 
a/spaces/dorkai/text-generation-webui-main/extensions/openai/cache_embedding_model.py +++ /dev/null @@ -1,8 +0,0 @@ -#!/usr/bin/env python3 -# preload the embedding model, useful for Docker images to prevent re-download on config change -# Dockerfile: -# ENV OPENEDAI_EMBEDDING_MODEL=all-mpnet-base-v2 # Optional -# RUN python3 cache_embedded_model.py -import os, sentence_transformers -st_model = os.environ["OPENEDAI_EMBEDDING_MODEL"] if "OPENEDAI_EMBEDDING_MODEL" in os.environ else "all-mpnet-base-v2" -model = sentence_transformers.SentenceTransformer(st_model) diff --git a/spaces/edvanger/White-box-Cartoonization/README.md b/spaces/edvanger/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/edvanger/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/eliwill/ask-a-philosopher/app.py b/spaces/eliwill/ask-a-philosopher/app.py deleted file mode 100644 index 410adbe581d98fedb20dd2556df1ed03e2df91c2..0000000000000000000000000000000000000000 --- a/spaces/eliwill/ask-a-philosopher/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import gradio as gr -from transformers import pipeline -import numpy as np -import pandas as pd -from sentence_transformers import SentenceTransformer, util -import nltk -from nltk import sent_tokenize -nltk.download("punkt") - -# Loading in dataframes -krishnamurti_df = pd.read_json("krishnamurti_df.json") -stoic_df = pd.read_json("stoic_df.json") -alan_df = pd.read_json("alan-watts_df.json") - -# Loading in sentence_similarity model -sentence_similarity_model = "all-mpnet-base-v2" -model = SentenceTransformer(sentence_similarity_model) - -# Loading in text-generation models -stoic_generator = pipeline("text-generation", model="eliwill/stoic-generator-10e") -krishnamurti_generator = pipeline("text-generation", model="eliwill/distilgpt2-finetuned-final-project") -alan_generator = pipeline("text-generation", model="eliwill/alan-watts-8e") - - -# Creating philosopher dictionary -philosopher_dictionary = { - "Marcus Aurelius": { - "generator": stoic_generator, - "dataframe": stoic_df, - "image": "marcus-aurelius.jpg" - }, - - "Krishnamurti": { - "generator": krishnamurti_generator, - "dataframe": krishnamurti_df, - "image": "krishnamurti.jpg" - }, - - "Alan Watts": { - "generator": alan_generator , - "dataframe": alan_df , - "image": "rsz_1alan_watts.jpg" - } -} - -############### DEFINING FUNCTIONS ########################### - -def ask_philosopher(philosopher, question): - """ Return first 5 sentences generated by question for the given philosopher model """ - - generator = philosopher_dictionary[philosopher]['generator'] - answer = generator(question, min_length=100, max_length=120)[0]['generated_text'] # generate about 50 word tokens - answer = " ".join(sent_tokenize(answer)[:6]) # Get the first five sentences - return answer - -def get_similar_quotes(philosopher, question): - """ Return top 5 most similar quotes to the question from a philosopher's dataframe """ - df = philosopher_dictionary[philosopher]['dataframe'] - question_embedding = model.encode(question) - sims = 
[util.dot_score(question_embedding, quote_embedding) for quote_embedding in df['Embedding']] - ind = np.argpartition(sims, -5)[-5:] - similar_sentences = [df['quote'][i] for i in ind] - top5quotes = pd.DataFrame(data = similar_sentences, columns=["Quotes"], index=range(1,6)) - top5quotes['Quotes'] = top5quotes['Quotes'].str[:-1].str[:250] + "..." - return top5quotes - -def main(question, philosopher): - out_image = philosopher_dictionary[philosopher]['image'] - return ask_philosopher(philosopher, question), get_similar_quotes(philosopher, question), out_image - -with gr.Blocks(css=".gradio-container {background-image: url('file=blue_mountains.jpg')} # title {color: #F0FFFF}") as demo: - gr.Markdown(""" - # Ask a Philsopher - """, - elem_id="title" - ) - - with gr.Row(): - with gr.Column(): - inp1 = gr.Textbox(placeholder="Place your question here...", label="Ask a question", elem_id="title") - inp2 = gr.Dropdown(choices=["Alan Watts", "Marcus Aurelius", "Krishnamurti"], value="Marcus Aurelius", label="Choose a philosopher") - - out1 = gr.Textbox( - lines=3, - max_lines=10, - label="Answer" - ) - - with gr.Row(): - out_image = gr.Image(label="Picture", image_mode="L") - out2 = gr.DataFrame( - headers=["Quotes"], - max_rows=5, - interactive=False, - wrap=True, - value=[["When you arise in the morning, think of what a precious privilege it is to be alive – to breathe, to think, to enjoy, to love."], - ["Each day provides its own gifts."], - ["Only time can heal what reason cannot."], - ["He who is brave is free."], - ["First learn the meaning of what you say, and then speak."]] - ) - - btn = gr.Button("Run") - btn.click(fn=main, inputs=[inp1,inp2], outputs=[out1,out2,out_image]) - -demo.launch() \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/belle_llama_ext_7b/__init__.py b/spaces/eson/tokenizer-arena/vocab/belle_llama_ext_7b/__init__.py deleted file mode 100644 index eec55b2ad27455cba09cfcc1ec7595090f32f7a3..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/belle_llama_ext_7b/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ - -import os -from transformers import LlamaTokenizer - -CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) -TOKENIZER_DIR = os.path.join(CURRENT_DIR, "BELLE-LLaMA-EXT-7B") - -tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_DIR) - - - -# print(tokenizer.tokenize("大道发生")) \ No newline at end of file diff --git a/spaces/ezioruan/roop/roop/processors/__init__.py b/spaces/ezioruan/roop/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/facebook/StyleNeRF/training/augment.py b/spaces/facebook/StyleNeRF/training/augment.py deleted file mode 100644 index db3a668c5bfc72235611ac07a247f7dd297d831a..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/training/augment.py +++ /dev/null @@ -1,431 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -#---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. - -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, 
-0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -#---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2] - s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -#---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. - -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability. - - # Pixel blitting. - self.xflip = float(xflip) # Probability multiplier for x-flip. - self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations. - self.xint = float(xint) # Probability multiplier for integer translation. - self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions. - - # General geometric transformations. - self.scale = float(scale) # Probability multiplier for isotropic scaling. - self.rotate = float(rotate) # Probability multiplier for arbitrary rotation. 
- self.aniso = float(aniso) # Probability multiplier for anisotropic scaling. - self.xfrac = float(xfrac) # Probability multiplier for fractional translation. - self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling. - self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle. - self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling. - self.xfrac_std = float(xfrac_std) # Standard deviation of frational translation, relative to image dimensions. - - # Color transformations. - self.brightness = float(brightness) # Probability multiplier for brightness. - self.contrast = float(contrast) # Probability multiplier for contrast. - self.lumaflip = float(lumaflip) # Probability multiplier for luma flip. - self.hue = float(hue) # Probability multiplier for hue rotation. - self.saturation = float(saturation) # Probability multiplier for saturation. - self.brightness_std = float(brightness_std) # Standard deviation of brightness. - self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast. - self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle. - self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation. - - # Image-space filtering. - self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering. - self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands. - self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification. - - # Image-space corruptions. - self.noise = float(noise) # Probability multiplier for additive RGB noise. - self.cutout = float(cutout) # Probability multiplier for cutout. - self.noise_std = float(noise_std) # Standard deviation of additive RGB noise. - self.cutout_size = float(cutout_size) # Size of the cutout rectangle, relative to image dimensions. - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. - Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32)) - - def forward(self, images, debug_percentile=None): - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - if debug_percentile is not None: - debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). 
- if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand([batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). - if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand([batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. - p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). 
- if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx] - margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1] - margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant([width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. - images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis. 
- if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). - if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError('Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f). - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity). - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector. - t[:, i] = t_i # Replace i'th element. - t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power. - g = g * t # Accumulate into global gain. - - # Construct combined amplification filter. - Hz_prime = g @ self.Hz_fbank # [batch, tap] - Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap] - - # Apply filter. 
- p = self.Hz_fbank.shape[1] // 2 - images = images.reshape([1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect') - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. - # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], device=device).abs() * self.noise_std - sigma = torch.where(torch.rand([batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv(debug_percentile) * self.noise_std) - images = images + torch.randn([batch_size, num_channels, height, width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). - if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], self.cutout_size, device=device) - size = torch.where(torch.rand([batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange(height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Audio Repeater 1.24 !!TOP!! Download Fre Tbt.md b/spaces/falterWliame/Face_Mask_Detection/Audio Repeater 1.24 !!TOP!! Download Fre Tbt.md deleted file mode 100644 index 71c7facddd4cf482c083ca7c9d7c26b532335623..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Audio Repeater 1.24 !!TOP!! Download Fre Tbt.md +++ /dev/null @@ -1,6 +0,0 @@ -

        audio repeater 1.24 download fre tbt


        Download File ……… https://urlca.com/2uDbVQ



        -
        -
        -
        -

        diff --git a/spaces/falterWliame/Face_Mask_Detection/Guerrini Champion Accordion For NI Kontakt VST.md b/spaces/falterWliame/Face_Mask_Detection/Guerrini Champion Accordion For NI Kontakt VST.md deleted file mode 100644 index 19c4271105fcf99c336047c7fb105d73dae4aa95..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Guerrini Champion Accordion For NI Kontakt VST.md +++ /dev/null @@ -1,36 +0,0 @@ -

        Guerrini Champion accordion for NI Kontakt VST


DOWNLOAD: https://urlca.com/2uDcQT



        -
        -
        -
        -

        diff --git a/spaces/fatiXbelha/sd/APKPure How to Enjoy Unlimited Android Apps Games Music and Movies.md b/spaces/fatiXbelha/sd/APKPure How to Enjoy Unlimited Android Apps Games Music and Movies.md deleted file mode 100644 index d2eeea3c711cd03ee7c7521b7a7b7395639d32d2..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/APKPure How to Enjoy Unlimited Android Apps Games Music and Movies.md +++ /dev/null @@ -1,98 +0,0 @@ -
        -

How to Use APK Pure: A Guide for Android Users

        -

        If you are an Android user, you may have heard of APK Pure, an alternative app store that allows you to download all kinds of apps that you can't find on Google Play Store. But what is APK Pure exactly, and how can you use it safely and effectively? In this article, we will answer these questions and more, so you can enjoy the benefits of APK Pure without any hassle.

        -

        how to apk pure


Download: https://urllie.com/2uNwT6



        -

        What is APK Pure and why use it?

        -

        APK Pure is an alternative app store for Android devices

        -

        APK Pure is a website and an app that lets you download APK files, which are the installation files for Android apps. APK files are similar to EXE files for Windows computers, but they are not available on Google Play Store. Instead, you have to download them from other sources, such as APK Pure.
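For readers who like to peek under the hood, an APK is simply a ZIP archive, so you can list its contents with a few lines of Python. This is an illustrative sketch only, not part of APK Pure itself; the file name is a placeholder.

```python
import zipfile

# An APK is a ZIP archive; the path below is only a placeholder example.
apk_path = "example-app.apk"

with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    # Every APK ships a binary AndroidManifest.xml plus compiled code and resources.
    for name in names[:10]:
        print(name)
    print("Contains manifest:", "AndroidManifest.xml" in names)
```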

        -

        APK Pure offers many benefits, such as:

        -

        Accessing apps that are not available on Google Play Store

        -

        Some apps may be banned, restricted, or removed from Google Play Store for various reasons, such as violating policies, being incompatible with your device, or being geo-blocked. With APK Pure, you can access these apps easily and legally, as long as they are not infringing any copyrights or trademarks.

        -

        Downloading older or newer versions of apps

        -

        Sometimes, you may want to download an older version of an app because the newer one has bugs, removed features, or compatibility issues. Or you may want to download a newer version of an app because it has new features, improvements, or fixes. With APK Pure, you can choose from different versions of apps and download the one that suits your needs.

        -

        Saving storage space and data usage

        -

        APK Pure has a feature called XAPK Installer, which helps you install large apps and games faster and easier. XAPK files are compressed files that contain both the APK file and the OBB file (which is the data file for some apps and games). By downloading XAPK files instead of separate APK and OBB files, you can save storage space and data usage on your device.
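If you are curious what the XAPK Installer actually does: an XAPK is essentially a ZIP bundle that holds the APK together with its OBB data files. The sketch below unpacks one with Python's standard library; the file names are placeholders and the exact layout can vary from package to package.

```python
import zipfile
from pathlib import Path

# Placeholder path; an .xapk is assumed here to be a ZIP bundle of the .apk plus OBB data.
xapk_path = "example-game.xapk"
out_dir = Path("unpacked_xapk")

with zipfile.ZipFile(xapk_path) as bundle:
    bundle.extractall(out_dir)

# Typically you end up with one or more .apk files and an Android/obb/... folder
# that would normally be copied to the device's shared storage.
for item in sorted(out_dir.rglob("*")):
    if item.suffix in {".apk", ".obb"}:
        print(item.relative_to(out_dir))
```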

        -

        How to install APK Pure on your Android device?

        -

        Step 1: Enable unknown sources in your settings

        -

        Before you can install any APK file on your Android device, you need to enable unknown sources in your settings. This allows you to install apps from sources other than Google Play Store. To do this, follow these steps:

        -
          -
        • Go to your device settings and tap on Security (or Apps in older versions of Android).
        • -
        • Tap on the toggle next to Unknown sources (or Install unknown apps in newer versions of Android).
        • -
        • Select the browser or file manager app that you will use to download APK files.
        • -
        • Turn on the permission to allow from this source.
        • -
        -

        Step 2: Download the APK Pure app from its official website

        -


The easiest way to download the APK Pure app is from its official website, https://apkpure.com. To do this, follow these steps; for the curious, a small script after the list shows the same download done programmatically:

        -
          -
        • Open your browser and go to https://apkpure.com.
        • -
        • Tap on the Download APK button at the top of the page.
        • -
        • Wait for the download to finish and then tap on the file to open it.
        • -
        -
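As promised above, here is what that manual download looks like when scripted. This is only a sketch: the URL is a placeholder, since the real download link on apkpure.com is generated per app and per version, and it assumes the third-party requests library is installed.

```python
import requests

# Placeholder URL; the real link comes from the app's page on apkpure.com.
url = "https://example.com/downloads/apkpure.apk"
target = "apkpure.apk"

# Stream the response so a large APK is not held in memory all at once.
with requests.get(url, stream=True, timeout=60) as response:
    response.raise_for_status()
    with open(target, "wb") as fh:
        for chunk in response.iter_content(chunk_size=1 << 16):
            fh.write(chunk)

print(f"Saved {target}")
```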

        Step 3: Install the APK Pure app and launch it

        -

        Once you have opened the APK Pure file, you will see a prompt asking you to install the app. To do this, follow these steps:

        -
          -
        • Tap on Install and wait for the installation to complete.
        • -
        • Tap on Open to launch the APK Pure app.
        • -
        • Grant the necessary permissions to the app, such as storage, phone, and location.
        • -
        -

        Congratulations, you have successfully installed APK Pure on your Android device. You can now use it to download and update apps.

        -

        -

        How to use APK Pure to download and update apps?

        -

        Step 1: Search for the app you want or browse by categories

        -

        The APK Pure app has a simple and user-friendly interface that allows you to find the app you want easily. You can either use the search bar at the top of the screen or browse by categories, such as games, editors' choice, new releases, popular, and more. You can also filter by genres, ratings, downloads, and updates.

        -

        Step 2: Tap on the app and select the version you want

        -

        When you find the app you want, tap on it to see more details, such as screenshots, description, ratings, reviews, and version history. You can also see if the app is compatible with your device and if it requires any additional files. To download the app, tap on the Download APK button and select the version you want. You can choose from different versions of apps, such as stable, beta, or modded. You can also see the size and date of each version.

        -

        Step 3: Tap on install and wait for the download to finish

        -

        After you have selected the version you want, tap on Install and wait for the download to finish. You will see a progress bar showing you how much time is left. Once the download is complete, you will see a notification asking you to install the app. Tap on Install and follow the instructions on your screen. If the app requires any additional files, such as OBB or XAPK files, APK Pure will automatically install them for you.

        -

        You have successfully downloaded and installed an app using APK Pure. You can now launch the app and enjoy it.


        What are the risks and precautions of using APK Pure?

        -

        Risks include malware, viruses, and data theft

        -

        While APK Pure is a reputable and safe source of APK files, it is not the only one. There are many other websites and apps that offer APK files, but some of them may be malicious or fraudulent. They may contain malware, viruses, or spyware that can harm your device or steal your data. They may also trick you into downloading fake or modified apps that can compromise your security or privacy.

        -

        Precautions include using antivirus software, checking reviews, and avoiding sensitive apps

        -

        To avoid these risks, you should take some precautions when using APK Pure or any other source of APK files. Here are some tips to help you:

        -
          -
        • Use a reliable antivirus software on your device and scan any APK file before installing it.
        • -
        • Check the reviews and ratings of the app and the source before downloading it. Look for any red flags, such as complaints, warnings, or negative feedback.
        • -
        • Avoid downloading sensitive apps, such as banking, payment, or social media apps, from unknown sources. These apps may require your personal or financial information, which can be easily stolen or misused by hackers.
        • -
        -

        By following these precautions, you can reduce the risks of using APK Pure and enjoy its benefits safely.
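One concrete way to act on the advice above is to verify a file's checksum before installing it. The sketch below computes a SHA-256 digest with Python's standard library; the file name and the expected hash are placeholders you would replace with values published by the app's developer, if they publish any.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders for illustration only.
apk_path = "downloaded-app.apk"
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(apk_path)
print("OK" if actual == expected else f"Mismatch: {actual}")
```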

        -

        Conclusion and FAQs

        -

        In conclusion, APK Pure is an alternative app store for Android users that offers many advantages, such as accessing apps that are not available on Google Play Store, downloading older or newer versions of apps, and saving storage space and data usage. However, it also comes with some risks, such as malware, viruses, and data theft. Therefore, you should be careful when using APK Pure and take some precautions, such as using antivirus software, checking reviews, and avoiding sensitive apps.

        -

        If you have any questions about APK Pure, you may find the answers in the following FAQs:

        - - - - - - -
Q: Is APK Pure legal?
A: Yes, APK Pure is legal as long as it does not distribute any pirated or illegal apps. However, some apps may have different terms and conditions that prohibit their distribution outside Google Play Store. You should check the app's license agreement before downloading it.
Q: Is APK Pure safe?
A: Yes, APK Pure is safe as long as you download the app from its official website and follow the precautions mentioned above. However, you should always be cautious when downloading any app from unknown sources and scan it for any malware or viruses.
Q: How do I update the apps I downloaded from APK Pure?
A: You can update the apps you downloaded from APK Pure by using the APK Pure app itself. The app will notify you when there is a new version available for any app you have installed. You can also check for updates manually by tapping on the menu icon at the top left corner of the screen and selecting Updates.
Q: How do I uninstall the apps I downloaded from APK Pure?
A: You can uninstall the apps you downloaded from APK Pure by using the same method you use to uninstall any other app on your device. You can either go to your device settings, tap on Apps (or Applications), and select the app you want to uninstall, or you can long-press on the app icon on your home screen and drag it to the Uninstall option.
Q: How can I contact APK Pure if I have any issues or feedback?
A: You can contact APK Pure by using their online form at https://apkpure.com/contact-us. You can also follow them on their social media accounts, such as Facebook (https://www.facebook.com/apkpure), Twitter (https://twitter.com/apkpure), and Instagram (https://www.instagram.com/apkpure).

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Car Simulator 2 Mod APK The Best Racing Game with Multiplayer Mode and Free Cash.md b/spaces/fatiXbelha/sd/Car Simulator 2 Mod APK The Best Racing Game with Multiplayer Mode and Free Cash.md deleted file mode 100644 index 0e99c7323145488701c49c0b1a2f18a84b3eb862..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Car Simulator 2 Mod APK The Best Racing Game with Multiplayer Mode and Free Cash.md +++ /dev/null @@ -1,105 +0,0 @@ -
        -

        Car Sim 2 Mod Apk: A Driving Simulator Game with Unlimited Possibilities

        -

        Do you love driving cars and exploring different cities? Do you want to experience the thrill of racing and customizing your own vehicles? If yes, then you should try Car Sim 2 Mod Apk, a driving simulator game that lets you control modern cars and play online with friends and players from all over the world. In this article, we will tell you what is Car Sim 2 Mod Apk, what are its features, how to download and install it, and what are its pros and cons.

        -

        car sim 2 mod apk


        Download ⚙⚙⚙ https://urllie.com/2uNFUD



        -

        What is Car Sim 2 Mod Apk?

        -

        Car Sim 2 Mod Apk is a modified version of Car Simulator 2, a popular driving simulator game developed by Oppana Games. The game allows you to travel around the city, upgrade your car, join and win crazy races, and enjoy realistic graphics and physics. You can also interact with other players, chat with them, and join clubs.

        -

        The mod apk version of the game gives you access to unlimited gold coins, which you can use to buy new cars, upgrade them, and customize them. You also get to unlock all the cars in the game, which include sports cars, muscle cars, SUVs, trucks, and more. You can choose from different models, colors, wheels, spoilers, and stickers. You can also create your own garage and show off your collection.

        -

        Features of Car Sim 2 Mod Apk

        -

        Car Sim 2 Mod Apk has many features that make it an exciting and fun game to play. Here are some of them:

        -

        Unlimited Gold Coins

        -

        With unlimited gold coins, you can buy anything you want in the game without worrying about running out of money. You can buy new cars, upgrade them, customize them, and even buy houses and businesses. You can also use gold coins to enter races and tournaments.

        -

        All Cars Unlocked

        -

        You don't have to wait for level ups or complete missions to unlock new cars in the game. With Car Sim 2 Mod Apk, you get access to all the cars in the game from the start. You can choose from over 50 different cars, each with its own characteristics and performance. You can also test drive any car before buying it.

        -

        Realistic Graphics and Physics

        -

        The game has stunning graphics that make you feel like you are driving in a real city. The game also has realistic physics that simulate the behavior of the cars on different terrains and weather conditions. You can see the damage effects on your car when you crash or hit something. You can also adjust the camera angle and view your car from different perspectives.

        -

        Online Multiplayer Mode

        -

        The game has an online multiplayer mode that lets you play with friends and players from all over the world. You can join or create rooms, chat with other players, invite them to races or missions, and join clubs. You can also compete with other players on leaderboards and rankings.

        -

        Customizable Vehicles and Garage

        -

        The game gives you the freedom to customize your vehicles according to your preferences. You can change the color, wheels, spoilers, stickers, lights, exhausts, and more. You can also upgrade your engine, transmission, brakes, suspension, turbo, and nitro. You can create your own garage and display your collection of cars. You can also visit other players' garages and rate them.

        -

        Various Game Modes and Missions

        -

        The game has different game modes and missions that keep you entertained and challenged. You can play in free mode, where you can explore the city and do whatever you want. You can also play in career mode, where you have to complete various tasks and earn money and reputation. You can also play in racing mode, where you have to compete with other players or AI opponents on different tracks and circuits. You can also play in police mode, where you have to chase or escape from the cops.

        -

        -

        How to Download and Install Car Sim 2 Mod Apk?

        -

        If you want to download and install Car Sim 2 Mod Apk on your device, you have to follow these simple steps:

        -

        Step 1: Download the Apk File from a Trusted Source

        -

        You have to download the apk file of Car Sim 2 Mod Apk from a trusted source, such as [this website]. Make sure you download the latest version of the mod apk, which is compatible with your device. You can also scan the apk file with an antivirus program before installing it.

        -

        Step 2: Enable Unknown Sources on Your Device

        -

        You have to enable unknown sources on your device, which allows you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also have to grant some permissions to the app when installing it.

        -

        Step 3: Install the Apk File and Launch the Game

        -

        You have to locate the apk file on your device, usually in the Downloads folder, and tap on it to install it. Wait for the installation process to finish, and then launch the game from your app drawer or home screen. You can now enjoy Car Sim 2 Mod Apk with unlimited gold coins and all cars unlocked.
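If you sideload APK files often, there is also a developer-oriented alternative to tapping the file in a file manager: pushing it over ADB. The snippet below wraps the standard `adb install` command with Python's subprocess module; it assumes USB debugging is enabled and that `adb` is on your PATH, and the file name is a placeholder.

```python
import subprocess

# Placeholder path to the downloaded APK; requires adb on PATH and USB debugging enabled.
apk_path = "car-sim-2-mod.apk"

# "-r" reinstalls the app while keeping its data if it is already installed.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```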

        -

        Pros and Cons of Car Sim 2 Mod Apk

        -

        Car Sim 2 Mod Apk has many advantages, but it also has some drawbacks. Here are some of them:

        -

        Pros

        -
          -
        • You get unlimited gold coins, which you can use to buy and upgrade anything in the game.
        • -
        • You get access to all the cars in the game, which include over 50 different models with different features and performance.
        • -
        • You get to enjoy realistic graphics and physics, which make the game more immersive and realistic.
        • -
        • You get to play online with friends and players from all over the world, chat with them, join clubs, and compete on leaderboards.
        • -
        • You get to customize your vehicles and garage according to your preferences, and show off your collection.
        • -
        • You get to play different game modes and missions, which keep you entertained and challenged.
        • -
        -

        Cons

        -
          -
        • You may face some compatibility issues with some devices or Android versions.
        • -
        • You may encounter some bugs or glitches in the game, which may affect your gameplay or performance.
        • -
        • You may get banned from the game if you use the mod apk in an unfair way or violate the terms of service of the game.
        • -
        • You may miss out on some updates or features that are available in the original version of the game.
        • -
        -

        Conclusion

        -

        Car Sim 2 Mod Apk is a driving simulator game that lets you control modern cars and play online with friends and players from all over the world. The mod apk version of the game gives you unlimited gold coins, which you can use to buy new cars, upgrade them, and customize them. You also get to unlock all the cars in the game, which include sports cars, muscle cars, SUVs, trucks, and more. You can also enjoy realistic graphics and physics, online multiplayer mode, customizable vehicles and garage, various game modes and missions, and more. However, you should also be aware of some drawbacks of using the mod apk, such as compatibility issues, bugs or glitches, ban risk, or missing updates or features. Therefore, you should use the mod apk at your own risk and discretion.

        -

        If you are looking for a fun and exciting driving simulator game with unlimited possibilities, then you should try Car Sim 2 Mod Apk. You can download it from [this website] and follow the steps mentioned above to install it on your device. Have fun driving!

        -

        FAQs

        -
          -
        • Q: Is Car Sim 2 Mod Apk safe to use?
        • -
        • A: Car Sim 2 Mod Apk is safe to use as long as you download it from a trusted source, such as [this website], and scan it with an antivirus program before installing it. You should also avoid using the mod apk in an unfair way or violating the terms of service of the game, as this may result in a ban.
        • -
        • Q: How can I update Car Sim 2 Mod Apk?
        • -
        • A: You can update Car Sim 2 Mod Apk by downloading the latest version of the mod apk from [this website] and installing it over the existing one. You should also backup your game data before updating, as some updates may erase your progress or settings.
        • -
        • Q: Can I play Car Sim 2 Mod Apk offline?
        • -
        • A: You can play Car Sim 2 Mod Apk offline in free mode, where you can explore the city and do whatever you want. However, you need an internet connection to play online multiplayer mode, where you can interact with other players, join races or missions, and join clubs.
        • -
        • Q: What are the best cars in Car Sim 2 Mod Apk?
        • -
        • A: The best cars in Car Sim 2 Mod Apk depend on your personal preference and play style. However, some of the most popular and powerful cars in the game are the Lamborghini Aventador, the Bugatti Chiron, the Ferrari LaFerrari, the McLaren P1, and the Pagani Huayra.
        • -
        • Q: How can I contact the developers of Car Sim 2 Mod Apk?
        • -
        • A: You can contact the developers of Car Sim 2 Mod Apk by visiting their official website, [this website], or by sending them an email at [this email address]. You can also follow them on social media platforms, such as Facebook, Twitter, Instagram, and YouTube.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Experience the Thrill of Both Killer and Survivor in Dead by Daylight Mobile APK IOS.md b/spaces/fatiXbelha/sd/Experience the Thrill of Both Killer and Survivor in Dead by Daylight Mobile APK IOS.md deleted file mode 100644 index a4930e61b198175225f84a7295fe90721ba77b99..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Experience the Thrill of Both Killer and Survivor in Dead by Daylight Mobile APK IOS.md +++ /dev/null @@ -1,24 +0,0 @@ - -

        Dead by Daylight Mobile: A Thrilling Horror Game for iOS Devices

        - If you are a fan of horror and action games, you might have heard of Dead by Daylight™, a multiplayer (4vs1) game that has been popular on console and PC for years. But did you know that you can also play this game on your iOS device for free? That's right, Dead by Daylight Mobile is now available on the App Store, and it offers the same survival horror experience as the original game, but fully optimized for mobile. In this article, we will tell you everything you need to know about Dead by Daylight Mobile, including what it is, how to download and install it, how to play it, and why you should play it.

        What is Dead by Daylight Mobile?

        -

        A multiplayer horror and action game

        - Dead by Daylight Mobile is a multiplayer horror and action game in which one ruthless Killer hunts down four Survivors trying to evade a gruesome death. The game is based on the concept of asymmetrical gameplay, meaning that each side has different abilities, objectives, and strategies. The Killer's goal is to sacrifice the Survivors to The Entity, a mysterious force that feeds on their fear and pain. The Survivors' goal is to escape the Killing Grounds, a randomly generated map with various obstacles, items, and hiding places.

        A game of cat and mouse between one Killer and four Survivors

        - Dead by Daylight Mobile is a game of cat and mouse between one Killer and four Survivors. The Killer can choose from a variety of characters from different horror franchises, each with their own unique power and perks. For example, The Hillbilly can use his Chainsaw to cover large distances in a short time and instantly down Survivors, while The Nurse can blink through walls and obstacles. The Survivors can also choose from a variety of characters, each with their own perks and abilities. For example, Meg Thomas can sprint faster than others, while Jake Park can sabotage hooks and traps. The game also features different modes, such as Ranked Mode, Custom Mode, Training Mode, and Event Mode.

        A game with iconic characters from horror franchises

        - One of the most appealing aspects of Dead by Daylight Mobile is that it features iconic characters from some of your favorite horror franchises. You can play as legends of horror such as Michael Myers from Halloween, Freddy Krueger from A Nightmare on Elm Street, Leatherface from The Texas Chainsaw Massacre, Ghostface from Scream, Pyramid Head from Silent Hill, The Cenobite from Hellraiser, Sadako Yamamura from The Ring, and more. You can also play as original characters created by the developers, such as The Huntress, The Clown, The Spirit, The Legion, The Plague, The Oni, The Trickster, and more. Each character has their own backstory, personality, and appearance that make them unique and memorable.

        How to Download and Install Dead by Daylight Mobile on iOS Devices?

        -

        Requirements and compatibility

        - Dead by Daylight Mobile is a free-to-play game that requires an internet connection and a compatible iOS device to run. The game is compatible with iPhone 6S or newer, iPad Air 2 or newer, iPad Mini 4 or newer, and iPod Touch 7th generation or newer. The game also requires iOS 11 or later and at least 2.5 GB of free space on your device. The game supports English, French, Italian, German, Spanish, Russian, Turkish, Portuguese, Japanese, Korean, Simplified Chinese, and Traditional Chinese languages.

        Steps to download and install the game from the App Store

- To download and install Dead by Daylight Mobile on your iOS device, follow these simple steps:
1. Open the App Store on your device and search for "Dead by Daylight Mobile", or use this link to go directly to the game page.
2. Tap on the "Get" button and then on the "Install" button to start downloading the game. You may need to enter your Apple ID password or use Touch ID or Face ID to confirm the download.
3. Wait for the download to finish and then tap on the "Open" button to launch the game. You may need to accept the terms of service and privacy policy before playing the game.
4. Enjoy the game!

        Tips to optimize the game performance and settings

- To optimize the game performance and settings on your iOS device, you can follow these tips:
• Make sure your device has enough battery life and is not overheating. You can also use a low power mode or turn off background apps to save battery and reduce lag.
• Adjust the graphics quality and frame rate in the game settings according to your device's capabilities and your personal preference. You can also turn off blood effects, shadows, anti-aliasing, and other options to improve the game performance.
• Use headphones or earphones to enjoy the game sound effects and music better. You can also adjust the volume and sound settings in the game options.
• Use a stable Wi-Fi connection or a mobile data plan with enough bandwidth to play the game online. You can also check your ping and network status in the game lobby before joining a match.
• Report any bugs, glitches, or feedback to the developers through the in-game support system or their official website. You can also join their social media channels to stay updated on the latest news, events, and updates.

        How to Play Dead by Daylight Mobile on iOS Devices?

        -

        Choose your role: Killer or Survivor

        - The first thing you need to do when you play Dead by Daylight Mobile is to choose your role: Killer or Survivor. You can switch between roles anytime in the main menu. Each role has its own gameplay mechanics, objectives, challenges, and rewards. As a Killer, you will have access to a first-person perspective and a variety of powers and perks that will help you hunt down and kill the Survivors. As a Survivor, you will have access to a third-person perspective and a variety of items and perks that will help you evade the Killer and escape the map.

        Customize your character and perks

        - Before you enter a match, you can customize your character and perks according to your playstyle and strategy. You can choose from different outfits, accessories, weapons, skins, charms, gestures, and more for your character. You can also select up to four perks for your character that will give you different advantages and disadvantages during the match. For example, as a Killer, you can use Hex: Ruin to slow down the Survivors' progress on repairing generators, or as a Survivor, you can use Self-Care to heal yourself without needing a med-kit. You can unlock more characters, perks, items, and cosmetics by playing the game, completing challenges, leveling up your account, or purchasing them with in-game currency or real money.

        Enter the Killing Grounds and start the match

- Once you are ready, you can enter the Killing Grounds and start the match. The match will consist of one Killer and four Survivors who will be randomly matched together based on their rank and region. The match will take place in one of several maps that are randomly generated each time. Each map will have different features, such as generators, hooks, pallets, windows, chests, lockers, totems, exit gates, hatch, etc., that will affect the gameplay and strategy. The match will last until either all Survivors escape or are sacrificed, or until the Killer disconnects or surrenders.
- As a Killer, your main objective is to find, chase, hit, down, hook, and sacrifice the Survivors before they escape. You can use your power, perks, and the environment to your advantage. You can also damage generators, break pallets, kick totems, and set traps to hinder the Survivors' progress. You can also use a special ability called Bloodlust that will increase your movement speed the longer you chase a Survivor. However, you will also have some disadvantages, such as a limited field of vision, a red stain that shows your direction, a terror radius that alerts the Survivors of your presence, and a cooldown after using your power or missing an attack.
- As a Survivor, your main objective is to repair five generators, open the exit gates, and escape the map alive. You can use your items, perks, and the environment to your advantage. You can also heal yourself and other Survivors, cleanse totems, search chests, hide in lockers, and use pallets and windows to evade the Killer. You can also use a special ability called Decisive Strike that will allow you to escape the Killer's grasp once per match. However, you will also have some disadvantages, such as being slower than the Killer, having limited resources and inventory space, making noise when injured or performing actions, and leaving blood trails and scratch marks when running.

        Why You Should Play Dead by Daylight Mobile on iOS Devices?

        -

        Experience the thrill of horror and action on your mobile device

        - Dead by Daylight Mobile is a game that will keep you on the edge of your seat with its thrilling horror and action gameplay. Whether you play as a Killer or a Survivor, you will never know what to expect in each match. The game will test your skills, strategy, teamwork, and nerve as you face different scenarios and challenges. The game will also immerse you in its dark and atmospheric graphics, sound effects, music, and voice acting that will make you feel like you are in a horror movie.

        Enjoy the game with your friends or other players online

        - Dead by Daylight Mobile is a game that you can enjoy with your friends or other players online. You can invite your friends to join your party and play together as Survivors or against each other as Killers. You can also chat with them using the in-game voice chat or text chat features. You can also play with other players from around the world who share your rank and region. You can communicate with them using the in-game gestures and emotes. You can also add them as friends and send them messages after the match.

        Discover new characters, features, and updates regularly

        - Dead by Daylight Mobile is a game that is constantly updated with new characters, features, and updates regularly. You can discover new Killers and Survivors from different horror franchises or original creations by the developers. You can also explore new maps, modes, events, cosmetics, items, perks, and more. You can also participate in seasonal events that offer exclusive rewards and challenges. You can also follow the game's roadmap that shows the upcoming plans and goals for the game's development.

        Conclusion

- Dead by Daylight Mobile is a thrilling horror game for iOS devices that offers the same survival horror experience as the original game on console and PC, fully optimized for mobile. You can play as one of several Killers or Survivors drawn from famous horror franchises or created by the developers, and customize your character and perks to fit your playstyle and strategy. Different maps, modes, events, cosmetics, items, and perks make each match unique and exciting, and you can play with your friends or other players online and communicate with them using the in-game chat features. The game also updates regularly with new content and improvements that keep it fresh and fun. If you are looking for a horror game that will keep you on the edge of your seat, you should definitely try Dead by Daylight Mobile on your iOS device.

        FAQs

- Here are some frequently asked questions about Dead by Daylight Mobile:
• Q: How do I get more Bloodpoints, Iridescent Shards, Auric Cells, or Rift Fragments?
• A: Bloodpoints are the main currency in the game that you can use to level up your characters and unlock perks, items, and cosmetics. You can earn Bloodpoints by playing matches, completing challenges, and participating in events. Iridescent Shards are another currency that you can use to buy characters, perks, and cosmetics from the Shrine of Secrets or the Store. You can earn Iridescent Shards by leveling up your account or completing Tome challenges. Auric Cells are the premium currency that you can use to buy characters, cosmetics, items, and more from the Store. You can buy Auric Cells with real money or earn them by completing certain Tome challenges or reaching certain tiers in the Rift. Rift Fragments are tokens that you can use to progress through the Rift, a seasonal pass that offers exclusive rewards and cosmetics. You can earn Rift Fragments by playing matches or completing Tome challenges.
• Q: How do I rank up or derank in the game?
• A: Ranking is a system that measures your skill and performance as a Killer or a Survivor. You can rank up or derank by earning or losing Pips at the end of each match. Pips are points that depend on your Emblems, which are indicators of how well you did in four categories: Objective, Survival, Altruism, and Boldness for Survivors; Gatekeeper, Devout, Malicious, and Chaser for Killers. You need to earn a certain number of Pips to reach the next rank or avoid losing Pips to prevent dropping to a lower rank. The higher your rank, the more skilled and experienced players you will face.
• Q: How do I get more characters, perks, items, or cosmetics in the game?
• A: You can get more characters, perks, items, or cosmetics in the game by unlocking them with Bloodpoints, Iridescent Shards, Auric Cells, Rift Fragments, or real money. You can also get them by playing matches, completing challenges, participating in events, leveling up your account or characters, progressing through the Rift, or buying them from the Store or the Shrine of Secrets.
• Q: How do I report a bug, glitch, hacker, cheater, or toxic player in the game?
• A: You can report a bug, glitch, hacker, cheater, or toxic player in the game by using the in-game report system or contacting the support team through their official website. You can also provide feedback and suggestions to the developers through their social media channels or forums.
• Q: How do I join the Dead by Daylight Mobile community and stay updated on the latest news and updates?
• A: You can join the Dead by Daylight Mobile community and stay updated on the latest news and updates by following their official website, Facebook, Twitter, Instagram, YouTube, Discord, Reddit, Twitch, or TikTok pages. You can also join their newsletter to receive exclusive offers and information.

        -

        dead by daylight mobile apk ios


        Download ✸✸✸ https://urllie.com/2uNyb0



        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/FIFA Mobile Para Hilesi 2020 - Kolay ve Gvenli Bir Yntem.md b/spaces/fatiXbelha/sd/FIFA Mobile Para Hilesi 2020 - Kolay ve Gvenli Bir Yntem.md deleted file mode 100644 index c4c99ebfd827480ef0277852b1a82c86d1000672..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/FIFA Mobile Para Hilesi 2020 - Kolay ve Gvenli Bir Yntem.md +++ /dev/null @@ -1,81 +0,0 @@ -
        -

        FIFA Mobile APK Para Hilesi 2020: How to Get Unlimited Coins and Gems in FIFA Mobile

        -

        Introduction

        -

        FIFA Mobile is one of the most popular soccer games on mobile devices, with over 100 million downloads on Google Play. It allows you to build your ultimate team of soccer stars from over 600 clubs and compete in various modes, such as Head-to-Head, VS Attack, Manager Mode, and more. You can also relive the world's greatest soccer tournament with the FIFA World Cup 2022 mode, where you can play with any of the 32 qualified nations.

        -

        fifa mobile apk para hilesi 2020


        Download File --->>> https://urllie.com/2uNyQK



        -

        Coins and gems are the main currencies in FIFA Mobile. You can use them to buy player packs, upgrade your players, unlock new features, and more. However, earning coins and gems can be time-consuming and challenging, especially if you want to get the best players and items in the game.

        -

        Para hilesi is a Turkish term that means money cheat. It refers to any method or tool that can help you get unlimited or free coins and gems in FIFA Mobile. Some players may want to use para hilesi to save time and money, or to have an edge over other players.

        -

        But how can you get FIFA Mobile APK para hilesi 2020? In this article, we will show you two methods that claim to provide you with unlimited coins and gems in FIFA Mobile. We will also discuss the pros and cons of each method, as well as some tips and warnings for using them.

        -

        How to Get FIFA Mobile APK Para Hilesi 2020

        -

        Method 1: Download a Modded APK File

        -

- A modded APK file is a version of the original game file that someone has modified to include extra features or cheats. For example, a modded APK file for FIFA Mobile may have unlimited coins and gems, unlocked players, boosted stats, etc.

        -

        To get a modded APK file for FIFA Mobile, you need to find a reliable source that offers one. You can search online for websites or forums that provide modded APK files for various games. However, you need to be careful about the quality and safety of the files you download, as some of them may contain viruses or malware that can harm your device or steal your personal information.
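- Besides scanning the file with an antivirus program, one extra check you can run yourself is to compare the downloaded file's SHA-256 hash against a checksum published by the download source, if it provides one. The short Python sketch below is only an illustration of that idea, not part of any particular site's instructions; the file name and expected hash are placeholders you would replace.

```python
# Rough sketch: verify a downloaded APK against a published SHA-256 checksum.
# The file name and expected hash are placeholders, not real values.
import hashlib

APK_PATH = "fifa-mobile-mod.apk"                   # placeholder file name
EXPECTED_SHA256 = "paste-the-published-hash-here"  # placeholder checksum

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large APKs do not have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of(APK_PATH)
    if digest == EXPECTED_SHA256.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum does NOT match - do not install this file.")
        print("Got:", digest)
```

If the hashes do not match, the file was corrupted or tampered with somewhere between the source and your device, so it is safer to delete it and download it again.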

        -

        fifa mobile apk para hilesi nasıl yapılır
        -fifa mobile apk para hilesi 2020 indir
        -fifa mobile apk para hilesi 2020 güncel
        -fifa mobile apk para hilesi 2020 link
        -fifa mobile apk para hilesi 2020 türkçe
        -fifa mobile apk para hilesi 2020 oyunoxs
        -fifa mobile apk para hilesi 2020 youtube
        -fifa mobile apk para hilesi 2020 game guardian
        -fifa mobile apk para hilesi 2020 yakup
        -fifa mobile apk para hilesi 2020 coins
        -fifa mobile apk para hilesi 2020 android
        -fifa mobile apk para hilesi 2020 ios
        -fifa mobile apk para hilesi 2020 bedava
        -fifa mobile apk para hilesi 2020 çalışan
        -fifa mobile apk para hilesi 2020 son sürüm
        -fifa mobile apk para hilesi 2020 mod
        -fifa mobile apk para hilesi 2020 hack
        -fifa mobile apk para hilesi 2020 cheat
        -fifa mobile apk para hilesi 2020 download
        -fifa mobile apk para hilesi 2020 yükle
        -fifa mobile apk para hilesi 2020 kurulumu
        -fifa mobile apk para hilesi 2020 video
        -fifa mobile apk para hilesi 2020 anlatımı
        -fifa mobile apk para hilesi 2020 forum
        -fifa mobile apk para hilesi 2020 yorumlar
        -fifa mobile apk para hilesi 2020 gerçek mi
        -fifa mobile apk para hilesi 2020 ban riski
        -fifa mobile apk para hilesi 2020 güvenli mi
        -fifa mobile apk para hilesi 2020 nasıl indirilir
        -fifa mobile apk para hilesi 2020 nasıl kullanılır
        -fifa mobile apk para hilesi 2020 nasıl çalışır
        -fifa mobile apk para hilesi 2020 nasıl yaparım
        -fifa mobile apk para hilesi 2020 nereden indirilir
        -fifa mobile apk para hilesi 2020 nereden bulabilirim
        -fifa mobile apk para hilesi 2020 nimo tv
        -fifa mobile apk para hilesi 2020 facebook
        -fifa mobile apk para hilesi 2020 twitter
        -fifa mobile apk para hilesi 2020 instagram
        -fifa mobile apk para hilesi 2020 reddit
        -fifa mobile apk para hilesi 2020 quora
        -fifa mobile apk para hilesi 2020 medium
        -fifa mobile apk para hilesi 2020 pinterest
        -fifa mobile apk para hilesi 2021 ne zaman çıkacak
        -fifa mobile apk para hilesi beta sürümü
        -fifa mobile apk para kasma taktikleri

        -

- Once you have downloaded a modded APK file for FIFA Mobile, you need to install it on your device. To do this, enable the installation of apps from unknown sources in your device settings, uninstall the original game, and then install the modded APK file and launch the game. You should see the changes or cheats applied in the game. For example, you may see a huge amount of coins and gems in your account, or you may have access to all the players and items in the game.
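- If you would rather drive the installation from a computer, the same steps can be scripted over adb instead of tapped through by hand. This is only a rough sketch under a few assumptions: adb is installed on the computer, USB debugging is enabled on the phone, and the file name and package id shown are placeholders rather than the game's real values.

```python
# Rough sketch: sideload an APK over adb from a computer.
# Assumes adb is on the PATH and USB debugging is enabled on the device.
import subprocess

APK_PATH = "fifa-mobile-mod.apk"      # placeholder file name
PACKAGE = "com.example.fifamobile"    # placeholder package id

def sideload(apk_path: str, package: str) -> None:
    # Remove the original install first; ignore the error if it is not there.
    subprocess.run(["adb", "uninstall", package], check=False)
    # "adb install -r" installs the APK, replacing an existing package if any.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH, PACKAGE)
```

Either way, the warnings below about compatibility and account bans still apply.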

        -

        However, using a modded APK file for FIFA Mobile has some risks and drawbacks. First of all, it may not be compatible with the latest version of the game or your device. You may experience crashes, glitches, or errors while playing the game. Second, it may violate the terms of service of the game and result in your account being banned or suspended. You may lose all your progress and data in the game. Third, it may ruin the fun and challenge of the game, as you may not have any goals or incentives to play the game anymore.

        -

        Method 2: Use a Game Hack Tool

        -

        A game hack tool is an online service or application that can generate coins and gems for you in FIFA Mobile. It usually works by connecting to the game server and modifying your account data. For example, a game hack tool for FIFA Mobile may ask you to enter your username or email, select your device type, choose the amount of coins and gems you want, and click on a generate button. Then, it will process your request and add the coins and gems to your account.

        -

        To use a game hack tool for FIFA Mobile, you need to find a trustworthy one that works for your device and game version. You can search online for websites or videos that review or recommend game hack tools for various games. However, you need to be wary of the legitimacy and security of the game hack tools you use, as some of them may be scams or phishing sites that can steal your personal information or infect your device with viruses or malware.

        -

        Once you have found a game hack tool for FIFA Mobile that you want to use, you need to follow the instructions on how to use it. You may need to verify your identity or complete some surveys or offers before you can access the tool. After that, you can enter your account details and choose the amount of coins and gems you want. Then, you can wait for the tool to generate the coins and gems for you and check your account to see if they have been added.

        -

        However, using a game hack tool for FIFA Mobile also has some advantages and disadvantages. On one hand, it may be easier and faster than downloading a modded APK file, as you do not need to install anything on your device or delete the original game file. You can also use it on any device and any game version, as long as you have an internet connection. On the other hand, it may also violate the terms of service of the game and result in your account being banned or suspended. You may also face some technical issues or errors while using the tool, such as server overload, connection failure, or invalid data.

        -

        Conclusion

        -

- In conclusion, FIFA Mobile APK para hilesi 2020 is a term that refers to any method or tool that can help you get unlimited coins and gems in FIFA Mobile. We have shown you two methods that claim to provide this cheat: downloading a modded APK file or using a game hack tool. Both have their pros and cons, and both may pose some risks and challenges for you.

        -

        Therefore, we advise you to use these methods with caution and at your own risk. We do not endorse or recommend any of these methods, as they may harm your device, your account, or your gaming experience. We also suggest that you play the game fairly and honestly, as that is more fun and rewarding than cheating.

        -

        What do you think about FIFA Mobile APK para hilesi 2020? Have you tried any of these methods? Do you know any other methods that work? Share your thoughts and opinions with us in the comments below!

        -

        FAQs

        -

        Q: Is FIFA Mobile APK para hilesi 2020 legal?

        -

        A: No, it is not legal. It is against the terms of service of the game and may result in your account being banned or suspended.

        -

        Q: Is FIFA Mobile APK para hilesi 2020 safe?

        -

        A: No, it is not safe. It may expose your device or your account to viruses, malware, phishing, or hacking.

        -

        Q: Is FIFA Mobile APK para hilesi 2020 free?

        -

        A: It depends on the method or tool you use. Some modded APK files or game hack tools may be free, while others may require you to pay or complete some tasks before you can use them.

        -

        Q: Is FIFA Mobile APK para hilesi 2020 effective?

        -

        A: It depends on the quality and reliability of the method or tool you use. Some modded APK files or game hack tools may work well, while others may not work at all or cause problems.

        -

        Q: Is FIFA Mobile APK para hilesi 2020 worth it?

        -

        A: It depends on your personal preference and goal. Some players may find it worth it to get unlimited coins and gems in FIFA Mobile, while others may find it not worth it to risk their device, their account, or their gaming enjoyment.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Border of Wild A Free Game that Tests Your Skills and Wits.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Border of Wild A Free Game that Tests Your Skills and Wits.md deleted file mode 100644 index e86f61036888e7631871d6baf427f797ebdc1b6d..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Border of Wild A Free Game that Tests Your Skills and Wits.md +++ /dev/null @@ -1,91 +0,0 @@ -
        -

        Border of Wild: A Survival Game with Endless Challenges

        -

        If you are looking for a thrilling and immersive survival game that will test your skills and creativity, you should check out Border of Wild. This game is developed by Exceptional Hong Kong Limited and is available for Android devices. In this game, you will explore a vast and diverse wilderness, hunt for resources, craft items, build shelters, and fight against enemies and bosses. You will also customize your character and weapons, and unlock special abilities and powers. Border of Wild is a game that will keep you on your toes and challenge you to overcome unknown difficulties.

        -

        Introduction

        -

- Border of Wild is a game that mixes the fun of hunting with the difficulty of strategic planning. You play as a hunter who must search the huge forest for wildlife. You will also encounter hostile creatures, such as Guardians, that will try to kill you. You will need to use your wits and skills to outsmart them and survive, and you will also need to build barriers, lay traps, and use your environment to your advantage.

        -

        border of wild free download


        Download Ziphttps://gohhs.com/2uPnuq



        -

        What is Border of Wild?

        -

        Border of Wild is an action game that was released on June 6, 2023. It has over 1 million downloads on Google Play Store and has received positive reviews from players. The game has in-app purchases that allow you to buy items and currency with real money. However, you can also play the game for free and enjoy its features without spending anything.

        -

        How to download Border of Wild for free?

        -

        To download Border of Wild for free, you will need an Android device that meets the minimum system requirements. These are:

        -
          -
        • Android 5.0 or higher
        • -
        • At least 267 MB of free storage space
        • -
        • A stable internet connection
        • -
        -

        To download the game, follow these steps:

        -
          -
1. Go to Google Play Store on your device.
2. Search for Border of Wild or use this link: [Border of Wild].
3. Tap on Install and wait for the download to finish.
4. Tap on Open and enjoy the game.
        -

        Features of Border of Wild

        -

        Border of Wild is a game that offers many features that make it fun and exciting. Here are some of them:

        -

        Explore a vast and diverse wilderness

        -

        The game features a large map that is filled with different terrains, biomes, animals, plants, and secrets. You can explore the map by walking, running, climbing, swimming, gliding, or riding a horse. You can also use fast travel points to move between locations quickly. The map is dynamic and changes according to the time of day, weather, seasons, and events. You will encounter different challenges and opportunities depending on these factors.

        -

        Hunt, craft, and build to survive

        -

        The game requires you to hunt for resources such as food, water, wood, stone, metal, leather, fur, feathers, bones, etc. You can use these resources to craft items such as weapons, armor, tools, potions, traps, etc. You can also use them to build shelters such as tents, cabins, fences, etc. You will need these items to protect yourself from enemies, heal yourself from injuries or diseases, improve your stats or abilities, or complete quests or objectives.

        -

        Fight against enemies and bosses

        The game also features many enemies and bosses that will challenge your combat skills and strategies. You will face different types of enemies, such as wolves, bears, bandits, soldiers, robots, etc. Each enemy has its own behavior, strength, weakness, and loot. You will also encounter bosses that are much stronger and harder to defeat. You will need to use your weapons, items, abilities, and environment to fight them and win rewards.

        -

        Customize your character and weapons

        -

        The game allows you to customize your character and weapons to suit your style and preferences. You can choose your gender, appearance, clothes, accessories, etc. You can also upgrade your weapons by adding attachments, scopes, silencers, etc. You can also unlock special abilities and powers that will enhance your performance and give you an edge in combat. For example, you can use the Eye of the Wild to scan your surroundings and reveal hidden information, or use the Fury of the Wild to unleash a powerful attack that damages all enemies in range.

        -

        border of wild game free download
        -border of wild apk download for android
        -border of wild pc download windows 10
        -border of wild mod apk unlimited money
        -border of wild online play without download
        -border of wild survival game download
        -border of wild hack version download
        -border of wild latest update download
        -border of wild offline mode download
        -border of wild cheats and tips download
        -border of wild app store download
        -border of wild ios download for iphone
        -border of wild bluestacks download for mac
        -border of wild emulator download for pc
        -border of wild guide and walkthrough download
        -border of wild review and rating download
        -border of wild wallpaper and theme download
        -border of wild soundtrack and music download
        -border of wild trailer and gameplay download
        -border of wild beta version download
        -border of wild full version download free
        -border of wild cracked version download
        -border of wild patch notes and changelog download
        -border of wild best weapons and items download
        -border of wild tips and tricks download free
        -border of wild how to play and win download
        -border of wild system requirements and specs download
        -border of wild graphics settings and optimization download
        -border of wild multiplayer mode download free
        -border of wild co-op mode download free
        -border of wild pvp mode download free
        -border of wild solo mode download free
        -border of wild sandbox mode download free
        -border of wild adventure mode download free
        -border of wild story mode download free
        -border of wild custom mode download free
        -border of wild map and location download free
        -border of wild characters and skills download free
        -border of wild enemies and bosses download free
        -border of wild quests and missions download free
        -border of wild rewards and achievements download free
        -border of wild events and challenges download free
        -border of wild codes and coupons download free
        -border of wild skins and outfits download free
        -border of wild pets and mounts download free
        -border of wild vehicles and transportations download free
        -border of wild weapons and armors download free
        -border of wild resources and materials download free
        -border of wild crafting and building download free

        -

        Tips and tricks for Border of Wild

        -

        Border of Wild is a game that can be challenging and rewarding at the same time. Here are some tips and tricks that will help you enjoy the game more:

        -

        Use your shield to parry lasers from Guardians

        -

        Guardians are one of the most dangerous enemies in the game. They are giant robots that can shoot lasers from their eyes. These lasers can deal massive damage and knock you down. However, you can use your shield to parry these lasers and reflect them back at the Guardians. This will stun them and expose their weak points. You can then attack them with your melee weapon or shoot them with your bow or gun.

        -

        Cut off the legs of Guardians to immobilize them

        -

        Another way to deal with Guardians is to cut off their legs with your sword or axe. This will make them unable to move or chase you. You can then keep a safe distance and shoot them with your ranged weapon or throw bombs at them. Be careful though, as they can still shoot lasers from their eyes or summon smaller robots to assist them.

        -

        Use ancient arrows to kill Guardians in one shot

        -

        If you want to kill Guardians in one shot, you will need ancient arrows. These are special arrows that are made from ancient technology. They can pierce through any armor and destroy any enemy in one hit. However, they are very rare and expensive to obtain. You can find them in ancient shrines or buy them from merchants. You can also craft them by using ancient parts that you can loot from Guardians or other ancient machines.

        -

        Use flurry rush to deal multiple strikes in slow motion

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/README.md b/spaces/fffiloni/Image-to-MusicGen/README.md deleted file mode 100644 index 7794087885d5d11f0c9b1783cfaccf26ed1794e0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/README.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Image to MusicGen -python_version: '3.9' -tags: -- music generation -- language models -- LLMs -app_file: app.py -emoji: 🎵 -colorFrom: white -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -pinned: false -license: cc-by-nc-4.0 -duplicated_from: facebook/MusicGen ---- -# Audiocraft -![docs badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_docs/badge.svg) -![linter badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_linter/badge.svg) -![tests badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_tests/badge.svg) - -Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model. - -## MusicGen - -Audiocraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. MusicGen is a single stage auto-regressive -Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't not require a self-supervised semantic representation, and it generates -all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict -them in parallel, thus having only 50 auto-regressive steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - - - Open In Colab - - - Open in HugginFace - -
        - -## Installation -Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following: - -```shell -# Best to make sure you have torch installed first, in particular before installing xformers. -# Don't run this if you already have PyTorch installed. -pip install 'torch>=2.0' -# Then proceed to one of the following -pip install -U audiocraft # stable release -pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft # bleeding edge -pip install -e . # or if you cloned the repo locally -``` - -## Usage -You can play with MusicGen by running the jupyter notebook at [`demo.ipynb`](./demo.ipynb) locally, or use the provided [colab notebook](https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing). Finally, a demo is also available on the [`facebook/MusiGen` HugginFace Space](https://huggingface.co/spaces/facebook/MusicGen) (huge thanks to all the HF team for their support). - -## API - -We provide a simple API and 4 pre-trained models. The pre trained models are: -- `small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small) -- `medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium) -- `melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody) -- `large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large) - -We observe the best trade-off between quality and compute with the `medium` or `melody` model. -In order to use MusicGen locally **you must have a GPU**. We recommend 16GB of memory, but smaller -GPUs will be able to generate short sequences, or longer sequences with the `small` model. - -**Note**: Please make sure to have [ffmpeg](https://ffmpeg.org/download.html) installed when using newer version of `torchaudio`. -You can install it with: -``` -apt get install ffmpeg -``` - -See after a quick example for using the API. - -```python -import torchaudio -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('melody') -model.set_generation_params(duration=8) # generate 8 seconds. -wav = model.generate_unconditional(4) # generates 4 unconditional audio samples -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav = model.generate(descriptions) # generates 3 samples. - -melody, sr = torchaudio.load('./assets/bach.mp3') -# generates using the melody from the given audio and the provided descriptions. -wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") -``` - - -## Model Card - -See [the model card page](./MODEL_CARD.md). - -## FAQ - -#### Will the training code be released? - -Yes. We will soon release the training code for MusicGen and EnCodec. 
- - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - -## License -* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE). -* The weights in this repository are released under the CC-BY-NC 4.0 license as found in the [LICENSE_weights file](LICENSE_weights). - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.d.ts deleted file mode 100644 index dba27402b596c649c2789b0a4c9248d46a311ff8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.d.ts +++ /dev/null @@ -1,76 +0,0 @@ -/// -import { EventEmitter } from "events"; -import { IncomingMessage } from "http"; -import { Packet } from "engine.io-parser"; -export declare abstract class Transport extends EventEmitter { - sid: string; - writable: boolean; - protocol: number; - protected _readyState: string; - protected discarded: boolean; - protected parser: any; - protected req: IncomingMessage & { - cleanup: Function; - }; - protected supportsBinary: boolean; - get readyState(): string; - set readyState(state: string); - /** - * Transport constructor. - * - * @param {http.IncomingMessage} request - * @api public - */ - constructor(req: any); - /** - * Flags the transport as discarded. - * - * @api private - */ - discard(): void; - /** - * Called with an incoming HTTP request. - * - * @param {http.IncomingMessage} request - * @api protected - */ - protected onRequest(req: any): void; - /** - * Closes the transport. - * - * @api private - */ - close(fn?: any): void; - /** - * Called with a transport error. - * - * @param {String} message error - * @param {Object} error description - * @api protected - */ - protected onError(msg: string, desc?: any): void; - /** - * Called with parsed out a packets from the data stream. - * - * @param {Object} packet - * @api protected - */ - protected onPacket(packet: Packet): void; - /** - * Called with the encoded packet data. - * - * @param {String} data - * @api protected - */ - protected onData(data: any): void; - /** - * Called upon transport close. 
- * - * @api protected - */ - protected onClose(): void; - abstract get supportsFraming(): any; - abstract get name(): any; - abstract send(packets: any): any; - abstract doClose(fn?: any): any; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has/test/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/has/test/index.js deleted file mode 100644 index 43d480b2c2e7638dc5c4a40d5a274da66d050a37..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has/test/index.js +++ /dev/null @@ -1,10 +0,0 @@ -'use strict'; - -var test = require('tape'); -var has = require('../'); - -test('has', function (t) { - t.equal(has({}, 'hasOwnProperty'), false, 'object literal does not have own property "hasOwnProperty"'); - t.equal(has(Object.prototype, 'hasOwnProperty'), true, 'Object.prototype has own property "hasOwnProperty"'); - t.end(); -}); diff --git a/spaces/fhipol/deeplearning/transform.py b/spaces/fhipol/deeplearning/transform.py deleted file mode 100644 index bd186723bec89fed5e56ba89fbb0e53f025a4019..0000000000000000000000000000000000000000 --- a/spaces/fhipol/deeplearning/transform.py +++ /dev/null @@ -1,10 +0,0 @@ -from torchvision import transforms - -image_transform = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.Lambda(lambda x: x.convert('RGB') if x.mode == 'RGBA' else x), - transforms.Grayscale(num_output_channels=3), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), -]) diff --git a/spaces/finding-fossils/metaextractor-data-review-tool/README.md b/spaces/finding-fossils/metaextractor-data-review-tool/README.md deleted file mode 100644 index 2b7da036d9dbd6102d72689c01ba6bc10fb4ad29..0000000000000000000000000000000000000000 --- a/spaces/finding-fossils/metaextractor-data-review-tool/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Metaextractor Data Review Tool -emoji: 📚 -colorFrom: yellow -colorTo: blue -sdk: docker -pinned: false -license: mit ---- - -This is a sample image of the MetaExtractor Data Review Tool which can be used to review research article data extracted using the [finding-fossils/metaextractor](https://huggingface.co/finding-fossils/metaextractor) model. 
- -See the GitHub Repo for more information: [github.com/NeotomaDB/MetaExtractor](https://github.com/NeotomaDB/MetaExtractor) diff --git a/spaces/fiyen/YangyangChatGPT/assets/custom.css b/spaces/fiyen/YangyangChatGPT/assets/custom.css deleted file mode 100644 index 3cf5f946a240f595e19f02259969f01d4b088012..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/assets/custom.css +++ /dev/null @@ -1,239 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* 覆盖gradio的页脚信息QAQ */ -footer { - display: none !important; -} -#footer{ - text-align: center; -} -#footer div{ - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -/* usage_display */ -#usage_display { - position: relative; - margin: 0; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - padding: .5em 1em; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: 0 1em; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill);; - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - color: #000000 !important; - } - [data-testid = "bot"] { - background-color: #FFFFFF !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - color: #FFFFFF !important; - } - [data-testid = "bot"] { - background-color: #2C2C2C !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} - -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - 
background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* 
Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/flax-community/netherformer/style.css b/spaces/flax-community/netherformer/style.css deleted file mode 100644 index f7e8c30ee80a411dcd885f6833b89a1ddc1e3f01..0000000000000000000000000000000000000000 --- a/spaces/flax-community/netherformer/style.css +++ /dev/null @@ -1,38 +0,0 @@ -body { - background-color: #eee; -} -/*.fullScreenFrame > div {*/ -/* display: flex;*/ -/* justify-content: center;*/ -/*}*/ -/*.stButton>button {*/ -/* color: #4F8BF9;*/ -/* border-radius: 50%;*/ -/* height: 3em;*/ -/* width: 3em;*/ -/*}*/ - -.stTextInput>div>div>input { - color: #4F8BF9; -} -.stTextArea>div>div>input { - color: #4F8BF9; - min-height: 500px; -} - - -/*.st-cj {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ -/*.st-ch {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ -/*.st-bb {*/ -/* min-height: 500px;*/ -/* spellcheck="false";*/ -/* color: #4F8BF9;*/ -/*}*/ diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/memory.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/memory.py deleted file mode 100644 index ff9ca86a18619f1eb9bc77a365582925205b9b2e..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/memory.py +++ /dev/null @@ -1,154 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -class MemoryEnv(MiniGridEnv): - """ - This environment is a memory test. The agent starts in a small room - where it sees an object. It then has to go through a narrow hallway - which ends in a split. At each end of the split there is an object, - one of which is the same as the object in the starting room. The - agent has to remember the initial object, and go to the matching - object at split. 
- """ - - def __init__( - self, - seed, - size=8, - random_length=False, - ): - self.random_length = random_length - super().__init__( - seed=seed, - grid_size=size, - max_steps=5*size**2, - # Set this to True for maximum speed - see_through_walls=False, - ) - - def _gen_grid(self, width, height): - self.grid = Grid(width, height) - - # Generate the surrounding walls - self.grid.horz_wall(0, 0) - self.grid.horz_wall(0, height-1) - self.grid.vert_wall(0, 0) - self.grid.vert_wall(width - 1, 0) - - assert height % 2 == 1 - upper_room_wall = height // 2 - 2 - lower_room_wall = height // 2 + 2 - if self.random_length: - hallway_end = self._rand_int(4, width - 2) - else: - hallway_end = width - 3 - - # Start room - for i in range(1, 5): - self.grid.set(i, upper_room_wall, Wall()) - self.grid.set(i, lower_room_wall, Wall()) - self.grid.set(4, upper_room_wall + 1, Wall()) - self.grid.set(4, lower_room_wall - 1, Wall()) - - # Horizontal hallway - for i in range(5, hallway_end): - self.grid.set(i, upper_room_wall + 1, Wall()) - self.grid.set(i, lower_room_wall - 1, Wall()) - - # Vertical hallway - for j in range(0, height): - if j != height // 2: - self.grid.set(hallway_end, j, Wall()) - self.grid.set(hallway_end + 2, j, Wall()) - - # Fix the player's start position and orientation - self.agent_pos = (self._rand_int(1, hallway_end + 1), height // 2) - self.agent_dir = 0 - - # Place objects - start_room_obj = self._rand_elem([Key, Ball]) - self.grid.set(1, height // 2 - 1, start_room_obj('green')) - - other_objs = self._rand_elem([[Ball, Key], [Key, Ball]]) - pos0 = (hallway_end + 1, height // 2 - 2) - pos1 = (hallway_end + 1, height // 2 + 2) - self.grid.set(*pos0, other_objs[0]('green')) - self.grid.set(*pos1, other_objs[1]('green')) - - # Choose the target objects - if start_room_obj == other_objs[0]: - self.success_pos = (pos0[0], pos0[1] + 1) - self.failure_pos = (pos1[0], pos1[1] - 1) - else: - self.success_pos = (pos1[0], pos1[1] - 1) - self.failure_pos = (pos0[0], pos0[1] + 1) - - self.mission = 'go to the matching object at the end of the hallway' - - def step(self, action): - if action == MiniGridEnv.Actions.pickup: - action = MiniGridEnv.Actions.toggle - obs, reward, done, info = MiniGridEnv.step(self, action) - - if tuple(self.agent_pos) == self.success_pos: - reward = self._reward() - done = True - if tuple(self.agent_pos) == self.failure_pos: - reward = 0 - done = True - - return obs, reward, done, info - -class MemoryS17Random(MemoryEnv): - def __init__(self, seed=None): - super().__init__(seed=seed, size=17, random_length=True) - -register( - id='MiniGrid-MemoryS17Random-v0', - entry_point='gym_minigrid.envs:MemoryS17Random', -) - -class MemoryS13Random(MemoryEnv): - def __init__(self, seed=None): - super().__init__(seed=seed, size=13, random_length=True) - -register( - id='MiniGrid-MemoryS13Random-v0', - entry_point='gym_minigrid.envs:MemoryS13Random', -) - -class MemoryS13(MemoryEnv): - def __init__(self, seed=None): - super().__init__(seed=seed, size=13) - -register( - id='MiniGrid-MemoryS13-v0', - entry_point='gym_minigrid.envs:MemoryS13', -) - -class MemoryS11(MemoryEnv): - def __init__(self, seed=None): - super().__init__(seed=seed, size=11) - -register( - id='MiniGrid-MemoryS11-v0', - entry_point='gym_minigrid.envs:MemoryS11', -) - -class MemoryS9(MemoryEnv): - def __init__(self, seed=None): - super().__init__(seed=seed, size=9) - -register( - id='MiniGrid-MemoryS9-v0', - entry_point='gym_minigrid.envs:MemoryS9', -) - -class MemoryS7(MemoryEnv): - def __init__(self, seed=None): 
- super().__init__(seed=seed, size=7) - -register( - id='MiniGrid-MemoryS7-v0', - entry_point='gym_minigrid.envs:MemoryS7', -) diff --git a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py b/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py deleted file mode 100644 index 17d18f617359d79e0d7171d95b63a2be577a27b3..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py +++ /dev/null @@ -1,73 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from __future__ import absolute_import -from __future__ import print_function -from __future__ import division - -import torch -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -try: - import MultiScaleDeformableAttention as MSDA -except ModuleNotFoundError as e: - info_string = ( - "\n\nPlease compile MultiScaleDeformableAttention CUDA op with the following commands:\n" - "\t`cd mask2former/modeling/pixel_decoder/ops`\n" - "\t`sh make.sh`\n" - ) - print(info_string) - #raise ModuleNotFoundError(info_string) - - -class MSDeformAttnFunction(Function): - @staticmethod - def forward(ctx, value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, im2col_step): - ctx.im2col_step = im2col_step - output = MSDA.ms_deform_attn_forward( - value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, ctx.im2col_step) - ctx.save_for_backward(value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = \ - MSDA.ms_deform_attn_backward( - value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, grad_output, ctx.im2col_step) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights): - # for debug and test only, - # need to use cuda version instead - N_, S_, M_, D_ = value.shape - _, Lq_, M_, L_, P_, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for lid_, (H_, W_) in enumerate(value_spatial_shapes): - # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_ - value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, D_, H_, W_) - # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2 - 
sampling_grid_l_ = sampling_grids[:, :, :, lid_].transpose(1, 2).flatten(0, 1) - # N_*M_, D_, Lq_, P_ - sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_, - mode='bilinear', padding_mode='zeros', align_corners=False) - sampling_value_list.append(sampling_value_l_) - # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_, M_, 1, Lq_, L_*P_) - attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_) - output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*D_, Lq_) - return output.transpose(1, 2).contiguous() diff --git a/spaces/gary109/hotdog-not-hotdog/README.md b/spaces/gary109/hotdog-not-hotdog/README.md deleted file mode 100644 index 12dd27f07652a6a81d637199afcbf3df38bd1660..0000000000000000000000000000000000000000 --- a/spaces/gary109/hotdog-not-hotdog/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hotdog Not Hotdog -emoji: 📉 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/arraymisc/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/arraymisc/__init__.py deleted file mode 100644 index 4b4700d6139ae3d604ff6e542468cce4200c020c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/arraymisc/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .quantization import dequantize, quantize - -__all__ = ['quantize', 'dequantize'] diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/__init__.py deleted file mode 100644 index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .io import Cache, VideoReader, frames2video -from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread, - flowwrite, quantize_flow, sparse_flow_from_bytes) -from .processing import concat_video, convert_video, cut_video, resize_video - -__all__ = [ - 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video', - 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow', - 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes' -] diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Como Fazer Um Cabo Serial Rs232 Para Usb Entenda o Funcionamento e a Histria Dessa Tecnologia.md b/spaces/gotiQspiryo/whisper-ui/examples/Como Fazer Um Cabo Serial Rs232 Para Usb Entenda o Funcionamento e a Histria Dessa Tecnologia.md deleted file mode 100644 index 05781a0beb16da9d9ac2eaa13a9dcfcc7a1cc16c..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Como Fazer Um Cabo Serial Rs232 Para Usb Entenda o Funcionamento e a Histria Dessa Tecnologia.md +++ /dev/null @@ -1,6 +0,0 @@ -

        My Autoplay 9.5 Pro crack.rar


        Download https://urlgoal.com/2uyM32



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/gradio/GANsNRoses/model.py b/spaces/gradio/GANsNRoses/model.py deleted file mode 100644 index 0e0fdea92fefc34bdc70f33bd2bfd464338e2365..0000000000000000000000000000000000000000 --- a/spaces/gradio/GANsNRoses/model.py +++ /dev/null @@ -1,757 +0,0 @@ -import torchvision -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d -n_latent = 11 - - -channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256, - 128: 128, - 256: 64, - 512: 32, - 1024: 16, -} - -class LambdaLR(): - def __init__(self, n_epochs, offset, decay_start_epoch): - assert ((n_epochs - decay_start_epoch) > 0), "Decay must start before the training session ends!" - self.n_epochs = n_epochs - self.offset = offset - self.decay_start_epoch = decay_start_epoch - - def step(self, epoch): - return 1.0 - max(0, epoch + self.offset - self.decay_start_epoch)/(self.n_epochs - self.decay_start_epoch) - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, 
lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - bias = self.bias*self.lr_mul if self.bias is not None else None - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - - else: - out = F.linear( - input, self.weight * self.scale, bias=bias - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=True, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - self.use_style = use_style - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - if use_style: - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - else: - self.modulation = nn.Parameter(torch.Tensor(1, 1, in_channel, 1, 1).fill_(1)) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - if self.use_style: - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - else: - weight = self.scale * self.weight.expand(batch,-1,-1,-1,-1) * self.modulation - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif 
self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, style_dim): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, style_dim)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, n_latent) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - self.use_style = use_style - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=use_style, - upsample=upsample, - downsample=downsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - #if use_style: - # self.noise = NoiseInjection() - #else: - # self.noise = None - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style=None, noise=None): - out = self.conv(input, style) - #if self.use_style: - # out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class StyledResBlock(nn.Module): - def __init__(self, in_channel, style_dim, blur_kernel=[1, 3, 3, 1], demodulate=True): - super().__init__() - - self.conv1 = StyledConv(in_channel, in_channel, 3, style_dim, upsample=False, blur_kernel=blur_kernel, demodulate=demodulate) - self.conv2 = StyledConv(in_channel, in_channel, 3, style_dim, upsample=False, blur_kernel=blur_kernel, demodulate=demodulate) - - def forward(self, input, style): - out = self.conv1(input, style) - out = self.conv2(out, style) - out = (out + input) / math.sqrt(2) - - return out - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - num_down, - latent_dim, - n_mlp, - n_res, - channel_multiplier=1, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - self.size = size - - style_dim = 512 - - mapping = [EqualLinear(latent_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu')] - for i in range(n_mlp-1): - mapping.append(EqualLinear(style_dim, style_dim, 
lr_mul=lr_mlp, activation='fused_lrelu')) - - self.mapping = nn.Sequential(*mapping) - - self.encoder = Encoder(size, latent_dim, num_down, n_res, channel_multiplier) - - self.log_size = int(math.log(size, 2)) #7 - in_log_size = self.log_size - num_down #7-2 or 7-3 - in_size = 2 ** in_log_size - - in_channel = channels[in_size] - self.adain_bottleneck = nn.ModuleList() - for i in range(n_res): - self.adain_bottleneck.append(StyledResBlock(in_channel, style_dim)) - - self.conv1 = StyledConv(in_channel, in_channel, 3, style_dim, blur_kernel=blur_kernel) - self.to_rgb1 = ToRGB(in_channel, style_dim, upsample=False) - - self.num_layers = (self.log_size - in_log_size) * 2 + 1 #7 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - #self.noises = nn.Module() - - - #for layer_idx in range(self.num_layers): - # res = (layer_idx + (in_log_size*2+1)) // 2 #2,3,3,5 ... -> 4,5,5,6 ... - # shape = [1, 1, 2 ** res, 2 ** res] - # self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(in_log_size+1, self.log_size + 1): - out_channel = channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - def style_encode(self, input): - return self.encoder(input)[1] - - def encode(self, input): - return self.encoder(input) - - def forward(self, input, z=None): - content, style = self.encode(input) - if z is None: - out = self.decode(content, style) - else: - out = self.decode(content, z) - - return out, content, style - - def decode(self, input, styles, use_mapping=True): - if use_mapping: - styles = self.mapping(styles) - #styles = styles.repeat(1, n_latent).view(styles.size(0), n_latent, -1) - out = input - i = 0 - for conv in self.adain_bottleneck: - out = conv(out, styles) - i += 1 - - out = self.conv1(out, styles, noise=None) - skip = self.to_rgb1(out, styles) - i += 2 - - for conv1, conv2, to_rgb in zip( - self.convs[::2], self.convs[1::2], self.to_rgbs - ): - out = conv1(out, styles, noise=None) - out = conv2(out, styles, noise=None) - skip = to_rgb(out, styles, skip) - - i += 3 - - image = skip - return image - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - -class InResBlock(nn.Module): - def __init__(self, in_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = StyledConv(in_channel, in_channel, 3, None, blur_kernel=blur_kernel, demodulate=True, use_style=False) - self.conv2 = StyledConv(in_channel, in_channel, 3, None, blur_kernel=blur_kernel, demodulate=True, use_style=False) - - def forward(self, 
input): - out = self.conv1(input, None) - out = self.conv2(out, None) - out = (out + input) / math.sqrt(2) - - return out - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], downsample=True): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=downsample) - - if downsample or in_channel != out_channel: - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=downsample, activate=False, bias=False - ) - else: - self.skip = None - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - if self.skip is None: - skip = input - else: - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - self.size = size - l_branch = self.make_net_(32) - l_branch += [ConvLayer(channels[32], 1, 1, activate=False)] - self.l_branch = nn.Sequential(*l_branch) - - - g_branch = self.make_net_(8) - self.g_branch = nn.Sequential(*g_branch) - self.g_adv = ConvLayer(channels[8], 1, 1, activate=False) - - self.g_std = nn.Sequential(ConvLayer(channels[8], channels[4], 3, downsample=True), - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, 128, activation='fused_lrelu'), - ) - self.g_final = EqualLinear(128, 1, activation=False) - - - def make_net_(self, out_size): - size = self.size - convs = [ConvLayer(3, channels[size], 1)] - log_size = int(math.log(size, 2)) - out_log_size = int(math.log(out_size, 2)) - in_channel = channels[size] - - for i in range(log_size, out_log_size, -1): - out_channel = channels[2 ** (i - 1)] - convs.append(ResBlock(in_channel, out_channel)) - in_channel = out_channel - - return convs - - def forward(self, x): - l_adv = self.l_branch(x) - - g_act = self.g_branch(x) - g_adv = self.g_adv(g_act) - - output = self.g_std(g_act) - g_stddev = torch.sqrt(output.var(0, keepdim=True, unbiased=False) + 1e-8).repeat(x.size(0),1) - g_std = self.g_final(g_stddev) - return [l_adv, g_adv, g_std] - - - -class Encoder(nn.Module): - def __init__(self, size, latent_dim, num_down, n_res, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - stem = [ConvLayer(3, channels[size], 1)] - log_size = int(math.log(size, 2)) - in_channel = channels[size] - - for i in range(log_size, log_size-num_down, -1): - out_channel = channels[2 ** (i - 1)] - stem.append(ResBlock(in_channel, out_channel, downsample=True)) - in_channel = out_channel - stem += [ResBlock(in_channel, in_channel, downsample=False) for i in range(n_res)] - self.stem = nn.Sequential(*stem) - - self.content = nn.Sequential( - ConvLayer(in_channel, in_channel, 1), - ConvLayer(in_channel, in_channel, 1) - ) - style = [] - for i in range(log_size-num_down, 2, -1): - out_channel = channels[2 ** (i - 1)] - style.append(ConvLayer(in_channel, out_channel, 3, downsample=True)) - in_channel = out_channel - style += [ - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], latent_dim), - ] - self.style = nn.Sequential(*style) - - - def forward(self, input): - act = self.stem(input) - content = self.content(act) - style = self.style(act) - return content, style - -class StyleEncoder(nn.Module): - def __init__(self, size, style_dim, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - convs = [ConvLayer(3, channels[size], 1)] - - log_size = 
int(math.log(size, 2)) - - in_channel = channels[size] - num_down = 6 - - for i in range(log_size, log_size-num_down, -1): - w = 2 ** (i - 1) - out_channel = channels[w] - convs.append(ConvLayer(in_channel, out_channel, 3, downsample=True)) - in_channel = out_channel - - convs += [ - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), EqualLinear(channels[4], style_dim), - ] - self.convs = nn.Sequential(*convs) - - def forward(self, input): - style = self.convs(input) - return style.view(input.size(0), -1) - -class LatDiscriminator(nn.Module): - def __init__(self, style_dim): - super().__init__() - - fc = [EqualLinear(style_dim, 256, activation='fused_lrelu')] - for i in range(3): - fc += [EqualLinear(256, 256, activation='fused_lrelu')] - fc += [FCMinibatchStd(256, 256)] - fc += [EqualLinear(256, 1)] - self.fc = nn.Sequential(*fc) - - def forward(self, input): - return [self.fc(input), ] - -class FCMinibatchStd(nn.Module): - def __init__(self, in_channel, out_channel): - super().__init__() - self.fc = EqualLinear(in_channel+1, out_channel, activation='fused_lrelu') - - def forward(self, out): - stddev = torch.sqrt(out.var(0, unbiased=False) + 1e-8).mean().view(1,1).repeat(out.size(0), 1) - out = torch.cat([out, stddev], 1) - out = self.fc(out) - return out diff --git a/spaces/gradio/HuBERT/fairseq/data/audio/hubert_dataset.py b/spaces/gradio/HuBERT/fairseq/data/audio/hubert_dataset.py deleted file mode 100644 index f00fe301a64a8740ed3ce07e44f6774edb933926..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/audio/hubert_dataset.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import itertools -import logging -import os -import sys -from typing import Any, List, Optional, Union - -import numpy as np - -import torch -import torch.nn.functional as F -from fairseq.data import data_utils -from fairseq.data.fairseq_dataset import FairseqDataset - -logger = logging.getLogger(__name__) - - -def load_audio(manifest_path, max_keep, min_keep): - n_long, n_short = 0, 0 - names, inds, sizes = [], [], [] - with open(manifest_path) as f: - root = f.readline().strip() - for ind, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_keep is not None and sz < min_keep: - n_short += 1 - elif max_keep is not None and sz > max_keep: - n_long += 1 - else: - names.append(items[0]) - inds.append(ind) - sizes.append(sz) - tot = ind + 1 - logger.info( - ( - f"max_keep={max_keep}, min_keep={min_keep}, " - f"loaded {len(names)}, skipped {n_short} short and {n_long} long, " - f"longest-loaded={max(sizes)}, shortest-loaded={min(sizes)}" - ) - ) - return root, names, inds, tot, sizes - - -def load_label(label_path, inds, tot): - with open(label_path) as f: - labels = [line.rstrip() for line in f] - assert ( - len(labels) == tot - ), f"number of labels does not match ({len(labels)} != {tot})" - labels = [labels[i] for i in inds] - return labels - - -def load_label_offset(label_path, inds, tot): - with open(label_path) as f: - code_lengths = [len(line.encode("utf-8")) for line in f] - assert ( - len(code_lengths) == tot - ), f"number of labels does not match ({len(code_lengths)} != {tot})" - offsets = list(itertools.accumulate([0] + code_lengths)) - offsets = [(offsets[i], offsets[i + 1]) for i in inds] - return offsets - - -def verify_label_lengths( - audio_sizes, - audio_rate, - label_path, - label_rate, - inds, - tot, - tol=0.1, # tolerance in seconds -): - if label_rate < 0: - logger.info(f"{label_path} is sequence label. skipped") - return - - with open(label_path) as f: - lengths = [len(line.rstrip().split()) for line in f] - assert len(lengths) == tot - lengths = [lengths[i] for i in inds] - num_invalid = 0 - for i, ind in enumerate(inds): - dur_from_audio = audio_sizes[i] / audio_rate - dur_from_label = lengths[i] / label_rate - if abs(dur_from_audio - dur_from_label) > tol: - logger.warning( - ( - f"audio and label duration differ too much " - f"(|{dur_from_audio} - {dur_from_label}| > {tol}) " - f"in line {ind+1} of {label_path}. Check if `label_rate` " - f"is correctly set (currently {label_rate}). " - f"num. 
of samples = {audio_sizes[i]}; " - f"label length = {lengths[i]}" - ) - ) - num_invalid += 1 - if num_invalid > 0: - logger.warning( - f"total {num_invalid} (audio, label) pairs with mismatched lengths" - ) - - -class HubertDataset(FairseqDataset): - def __init__( - self, - manifest_path: str, - sample_rate: float, - label_paths: List[str], - label_rates: Union[List[float], float], # -1 for sequence labels - pad_list: List[str], - eos_list: List[str], - label_processors: Optional[List[Any]] = None, - max_keep_sample_size: Optional[int] = None, - min_keep_sample_size: Optional[int] = None, - max_sample_size: Optional[int] = None, - shuffle: bool = True, - pad_audio: bool = False, - normalize: bool = False, - store_labels: bool = True, - random_crop: bool = False, - single_target: bool = False, - ): - self.audio_root, self.audio_names, inds, tot, self.sizes = load_audio( - manifest_path, max_keep_sample_size, min_keep_sample_size - ) - self.sample_rate = sample_rate - self.shuffle = shuffle - self.random_crop = random_crop - - self.num_labels = len(label_paths) - self.pad_list = pad_list - self.eos_list = eos_list - self.label_processors = label_processors - self.single_target = single_target - self.label_rates = ( - [label_rates for _ in range(len(label_paths))] - if isinstance(label_rates, int) - else label_rates - ) - self.store_labels = store_labels - if store_labels: - self.label_list = [load_label(p, inds, tot) for p in label_paths] - else: - self.label_paths = label_paths - self.label_offsets_list = [ - load_label_offset(p, inds, tot) for p in label_paths - ] - assert ( - label_processors is None - or len(label_processors) == self.num_labels - ) - for label_path, label_rate in zip(label_paths, self.label_rates): - verify_label_lengths( - self.sizes, sample_rate, label_path, label_rate, inds, tot - ) - - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.pad_audio = pad_audio - self.normalize = normalize - logger.info( - f"pad_audio={pad_audio}, random_crop={random_crop}, " - f"normalize={normalize}, max_sample_size={self.max_sample_size}" - ) - - def get_audio(self, index): - import soundfile as sf - - wav_path = os.path.join(self.audio_root, self.audio_names[index]) - wav, cur_sample_rate = sf.read(wav_path) - wav = torch.from_numpy(wav).float() - wav = self.postprocess(wav, cur_sample_rate) - return wav - - def get_label(self, index, label_idx): - if self.store_labels: - label = self.label_list[label_idx][index] - else: - with open(self.label_paths[label_idx]) as f: - offset_s, offset_e = self.label_offsets_list[label_idx][index] - f.seek(offset_s) - label = f.read(offset_e - offset_s) - - if self.label_processors is not None: - label = self.label_processors[label_idx](label) - return label - - def get_labels(self, index): - return [self.get_label(index, i) for i in range(self.num_labels)] - - def __getitem__(self, index): - wav = self.get_audio(index) - labels = self.get_labels(index) - return {"id": index, "source": wav, "label_list": labels} - - def __len__(self): - return len(self.sizes) - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav, 0 - - start, end = 0, target_size - if self.random_crop: - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end], start - - def collater(self, samples): - # target = max(sizes) -> random_crop not used - # target = max_sample_size -> random_crop used for long - samples = [s for s in 
samples if s["source"] is not None] - if len(samples) == 0: - return {} - - audios = [s["source"] for s in samples] - audio_sizes = [len(s) for s in audios] - if self.pad_audio: - audio_size = min(max(audio_sizes), self.max_sample_size) - else: - audio_size = min(min(audio_sizes), self.max_sample_size) - collated_audios, padding_mask, audio_starts = self.collater_audio( - audios, audio_size - ) - - targets_by_label = [ - [s["label_list"][i] for s in samples] - for i in range(self.num_labels) - ] - targets_list, lengths_list, ntokens_list = self.collater_label( - targets_by_label, audio_size, audio_starts - ) - - net_input = {"source": collated_audios, "padding_mask": padding_mask} - batch = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": net_input, - } - - if self.single_target: - batch["target_lengths"] = lengths_list[0] - batch["ntokens"] = ntokens_list[0] - batch["target"] = targets_list[0] - else: - batch["target_lengths_list"] = lengths_list - batch["ntokens_list"] = ntokens_list - batch["target_list"] = targets_list - return batch - - def collater_audio(self, audios, audio_size): - collated_audios = audios[0].new_zeros(len(audios), audio_size) - padding_mask = ( - torch.BoolTensor(collated_audios.shape).fill_(False) - # if self.pad_audio else None - ) - audio_starts = [0 for _ in audios] - for i, audio in enumerate(audios): - diff = len(audio) - audio_size - if diff == 0: - collated_audios[i] = audio - elif diff < 0: - assert self.pad_audio - collated_audios[i] = torch.cat( - [audio, audio.new_full((-diff,), 0.0)] - ) - padding_mask[i, diff:] = True - else: - collated_audios[i], audio_starts[i] = self.crop_to_max_size( - audio, audio_size - ) - return collated_audios, padding_mask, audio_starts - - def collater_frm_label( - self, targets, audio_size, audio_starts, label_rate, pad - ): - assert label_rate > 0 - s2f = label_rate / self.sample_rate - frm_starts = [int(round(s * s2f)) for s in audio_starts] - frm_size = int(round(audio_size * s2f)) - if not self.pad_audio: - rem_size = [len(t) - s for t, s in zip(targets, frm_starts)] - frm_size = min(frm_size, *rem_size) - targets = [t[s: s + frm_size] for t, s in zip(targets, frm_starts)] - logger.debug(f"audio_starts={audio_starts}") - logger.debug(f"frame_starts={frm_starts}") - logger.debug(f"frame_size={frm_size}") - - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens( - targets, pad_idx=pad, left_pad=False - ) - return targets, lengths, ntokens - - def collater_seq_label(self, targets, pad): - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens( - targets, pad_idx=pad, left_pad=False - ) - return targets, lengths, ntokens - - def collater_label(self, targets_by_label, audio_size, audio_starts): - targets_list, lengths_list, ntokens_list = [], [], [] - itr = zip(targets_by_label, self.label_rates, self.pad_list) - for targets, label_rate, pad in itr: - if label_rate == -1: - targets, lengths, ntokens = self.collater_seq_label( - targets, pad - ) - else: - targets, lengths, ntokens = self.collater_frm_label( - targets, audio_size, audio_starts, label_rate, pad - ) - targets_list.append(targets) - lengths_list.append(lengths) - ntokens_list.append(ntokens) - return targets_list, lengths_list, ntokens_list - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - if self.pad_audio: - return self.sizes[index] - return 
min(self.sizes[index], self.max_sample_size) - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - order.append(self.sizes) - return np.lexsort(order)[::-1] - - def postprocess(self, wav, cur_sample_rate): - if wav.dim() == 2: - wav = wav.mean(-1) - assert wav.dim() == 1, wav.dim() - - if cur_sample_rate != self.sample_rate: - raise Exception(f"sr {cur_sample_rate} != {self.sample_rate}") - - if self.normalize: - with torch.no_grad(): - wav = F.layer_norm(wav, wav.shape) - return wav diff --git a/spaces/gradio/longformer/scripts/triviaqa_utils/convert_to_squad_format.py b/spaces/gradio/longformer/scripts/triviaqa_utils/convert_to_squad_format.py deleted file mode 100644 index be400e3e437d4e4b5e52da4e96511a5e204a77f2..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/scripts/triviaqa_utils/convert_to_squad_format.py +++ /dev/null @@ -1,129 +0,0 @@ -from . import file_utils -from . import dataset_utils -import os -from tqdm import tqdm -import random -import nltk -import argparse - - -def get_text(qad, domain): - local_file = os.path.join(args.web_dir, qad['Filename']) if domain == 'SearchResults' else os.path.join(args.wikipedia_dir, qad['Filename']) - return file_utils.get_file_contents(local_file, encoding='utf-8') - - -def select_relevant_portion(text): - paras = text.split('\n') - selected = [] - done = False - for para in paras: - # nltk is slow, but we have to use its word tokenizer for the distant supervision matching to work - # TODO: try both see which one works better - # words = para.split() - # extra_words = args.max_num_tokens - len(selected) - # selected.extend(words[:extra_words]) - # if len(selected) >= args.max_num_tokens: - # break - sents = sent_tokenize.tokenize(para) - for sent in sents: - words = nltk.word_tokenize(sent) - for word in words: - selected.append(word) - if len(selected) >= args.max_num_tokens: - done = True - break - if done: - break - if done: - break - selected.append('\n') - st = ' '.join(selected).strip() - return st - - -def add_triple_data(datum, page, domain): - qad = {'Source': domain} - for key in ['QuestionId', 'Question', 'Answer']: - if key == 'Answer' and key not in datum: - qad[key] = {'NormalizedAliases': []} - qid = datum['QuestionId'] - print(f'qid: {qid} does not have an answer.') - else: - qad[key] = datum[key] - for key in page: - qad[key] = page[key] - return qad - - -def get_qad_triples(data): - qad_triples = [] - for datum in data['Data']: - for key in ['EntityPages', 'SearchResults']: - for page in datum.get(key, []): - qad = add_triple_data(datum, page, key) - qad_triples.append(qad) - return qad_triples - - -def convert_to_squad_format(qa_json_file, squad_file): - qa_json = dataset_utils.read_triviaqa_data(qa_json_file) - qad_triples = get_qad_triples(qa_json) - random.seed(args.seed) - random.shuffle(qad_triples) - - data = [] - for qad in tqdm(qad_triples): - qid = qad['QuestionId'] - - text = get_text(qad, qad['Source']) - selected_text = select_relevant_portion(text) - - question = qad['Question'] - para = {'context': selected_text, 'qas': [{'question': question, 'answers': []}]} - data.append({'paragraphs': [para]}) - qa = para['qas'][0] - qa['id'] = dataset_utils.get_question_doc_string(qid, qad['Filename']) - qa['qid'] = qid - - answers_in_doc = dataset_utils.answer_index_in_document(qad['Answer'], selected_text) - qa['answers'] = answers_in_doc - # We want all answers in the document, not just the first answer - # 
if index == -1: - # if qa_json['Split'] == 'train': - # continue - # else: - # qa['answers'].append({'text': ans_string, 'answer_start': index}) - - # This doesn't fit the squad format, but we need it for evaluation - qa['aliases'] = qad['Answer']['NormalizedAliases'] - - if qa_json['Split'] == 'train' and len(data) >= args.sample_size and qa_json['Domain'] == 'Web': - break - - if len(data) >= args.sample_size: - break - - squad = {'data': data, 'version': qa_json['Version']} - file_utils.write_json_to_file(squad, squad_file) - print('Added', len(data)) - - -def get_args(): - parser = argparse.ArgumentParser() - parser.add_argument('--triviaqa_file', help='Triviaqa file') - parser.add_argument('--squad_file', help='Squad file') - parser.add_argument('--wikipedia_dir', help='Wikipedia doc dir') - parser.add_argument('--web_dir', help='Web doc dir') - - parser.add_argument('--seed', default=10, type=int, help='Random seed') - parser.add_argument('--max_num_tokens', default=800, type=int, help='Maximum number of tokens from a document') - parser.add_argument('--sample_size', default=8000000000000, type=int, help='Random seed') - parser.add_argument('--tokenizer', default='tokenizers/punkt/english.pickle', help='Sentence tokenizer') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = get_args() - sent_tokenize = nltk.data.load(args.tokenizer) - convert_to_squad_format(args.triviaqa_file, args.squad_file) diff --git a/spaces/gradio/monochrome/app.py b/spaces/gradio/monochrome/app.py deleted file mode 100644 index a4eb75c308a66e98fccee7877a97cb3ba70a26c2..0000000000000000000000000000000000000000 --- a/spaces/gradio/monochrome/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='gradio/monochrome') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Monochrome` - To use this theme, set `theme='gradio/monochrome'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/gradio/question-answering/run.py b/spaces/gradio/question-answering/run.py deleted file mode 100644 index 7ccc419cdd06862adf329afd31d33400be8e4bfe..0000000000000000000000000000000000000000 --- a/spaces/gradio/question-answering/run.py +++ /dev/null @@ -1,25 +0,0 @@ -import gradio as gr - -from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline - -model_name = "deepset/roberta-base-squad2" - -nlp = pipeline("question-answering", model=model_name, tokenizer=model_name) - -context = "The Amazon rainforest, also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. 
The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." -question = "Which continent is the Amazon rainforest in?" - - -def predict(context, question): - res = nlp({"question": question, "context": context}) - return res["answer"], res["score"] - - -gr.Interface( - predict, - inputs=[ - gr.Textbox(lines=7, value=context, label="Context Paragraph"), - gr.Textbox(lines=2, value=question, label="Question"), - ], - outputs=[gr.Textbox(label="Answer"), gr.Textbox(label="Score")], -).launch() diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/register_pascal_59.py b/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/register_pascal_59.py deleted file mode 100644 index ff49702fc898ecf38420985d143c70f71169b91a..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/data/datasets/register_pascal_59.py +++ /dev/null @@ -1,81 +0,0 @@ -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg -import copy - - -stuff_colors = [[0, 192, 64], [0, 192, 64], [0, 64, 96], [128, 192, 192], - [0, 64, 64], [0, 192, 224], [0, 192, 192], [128, 192, 64], - [0, 192, 96], [128, 192, 64], [128, 32, 192], [0, 0, 224], - [64, 128, 32], [0, 160, 0], [0, 0, 0], [192, 128, 160], - [0, 32, 0], [0, 128, 128], [64, 128, 160], [128, 160, 0], - [0, 128, 0], [192, 128, 32], [128, 96, 128], [0, 0, 128], - [64, 0, 32], [0, 224, 128], [128, 0, 0], [192, 0, 160], - [0, 96, 128], [128, 128, 128], [64, 0, 160], [128, 224, 128], - [128, 128, 64], [192, 0, 32], [128, 96, 0], [128, 0, 192], - [0, 128, 32], [64, 224, 0], [0, 0, 64], [128, 128, 160], - [0, 0, 64], [0, 160, 192], [128, 0, 96], [128, 0, 192], - [0, 32, 192], [128, 128, 224], [0, 0, 192], [128, 160, 192], - [128, 128, 0], [128, 0, 32], [128, 32, 0], [128, 0, 128], - [64, 96, 0], [0, 128, 192], [0, 128, 160], [192, 224, 0], - [0, 128, 64], [128, 128, 32], [192, 32, 128], [0, 64, 192], - [0, 0, 32], [64, 160, 128], [128, 64, 64], [128, 0, 160], - [128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100], - [0, 0, 230], [119, 11, 32], - [64, 128, 64], [128, 192, 32], [192, 32, 192], [64, 64, 192], - [0, 64, 32], [64, 160, 192], [192, 64, 64], [128, 64, 160], - [64, 32, 192], [192, 192, 192], [0, 64, 160], [192, 160, 192], - [192, 192, 0], [128, 64, 96], [192, 32, 64], [192, 64, 128], - [64, 192, 96], [64, 160, 64], [64, 64, 0]] - -def _get_pascal_context_59_meta(): - #context_classes = ["aeroplane", "bag", "bed", "bedclothes", "bench", "bicycle", "bird", "boat", "book", "bottle", "building", "bus", "cabinet", "car", "cat", "ceiling", "chair", "cloth", "computer", "cow", "cup", "curtain", "dog", "door", "fence", "floor", "flower", "food", "grass", "ground", "horse", "keyboard", "light", "motorbike", "mountain", "mouse", "person", "plate", "platform", "pottedplant", "road", "rock", "sheep", "shelves", "sidewalk", "sign", "sky", "snow", "sofa", "diningtable", "track", "train", "tree", "truck", 
"tvmonitor", "wall", "water", "window", "wood"]#, "background"] - context_classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor", "bag", "bed", "bench", "book", "building", "cabinet", "ceiling", "cloth", "computer", "cup", "door", "fence", "floor", "flower", "food", "grass", "ground", "keyboard", "light", "mountain", "mouse", "curtain", "platform", "sign", "plate", "road", "rock", "shelves", "sidewalk", "sky", "snow", "bedclothes", "track", "tree", "truck", "wall", "water", "window", "wood"] - context_colors = [stuff_colors[i % len(stuff_colors)] for i in range(len(context_classes))] - ret = { - "stuff_colors" : context_colors, - "stuff_classes" : context_classes, - } - return ret - -def register_pascal_context_59(root): - root = os.path.join(root, "VOCdevkit", "VOC2010") - meta = _get_pascal_context_59_meta() - for name, image_dirname, sem_seg_dirname in [ - ("test", "JPEGImages", "annotations_detectron2/pc59_val"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = os.path.join(root, sem_seg_dirname) - name = f"context_59_{name}_sem_seg" - DatasetCatalog.register(name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext='png', image_ext='jpg')) - MetadataCatalog.get(name).set(image_root=image_dir, seg_seg_root=gt_dir, evaluator_type="sem_seg", ignore_label=255, **meta,) - -def _get_pascal_context_459_meta(): - context_459_classes = ["accordion", "aeroplane", "airconditioner", "antenna", "artillery", "ashtray", "atrium", "babycarriage", "bag", "ball", "balloon", "bambooweaving", "barrel", "baseballbat", "basket", "basketballbackboard", "bathtub", "bed", "bedclothes", "beer", "bell", "bench", "bicycle", "binoculars", "bird", "birdcage", "birdfeeder", "birdnest", "blackboard", "board", "boat", "bone", "book", "bottle", "bottleopener", "bowl", "box", "bracelet", "brick", "bridge", "broom", "brush", "bucket", "building", "bus", "cabinet", "cabinetdoor", "cage", "cake", "calculator", "calendar", "camel", "camera", "cameralens", "can", "candle", "candleholder", "cap", "car", "card", "cart", "case", "casetterecorder", "cashregister", "cat", "cd", "cdplayer", "ceiling", "cellphone", "cello", "chain", "chair", "chessboard", "chicken", "chopstick", "clip", "clippers", "clock", "closet", "cloth", "clothestree", "coffee", "coffeemachine", "comb", "computer", "concrete", "cone", "container", "controlbooth", "controller", "cooker", "copyingmachine", "coral", "cork", "corkscrew", "counter", "court", "cow", "crabstick", "crane", "crate", "cross", "crutch", "cup", "curtain", "cushion", "cuttingboard", "dais", "disc", "disccase", "dishwasher", "dock", "dog", "dolphin", "door", "drainer", "dray", "drinkdispenser", "drinkingmachine", "drop", "drug", "drum", "drumkit", "duck", "dumbbell", "earphone", "earrings", "egg", "electricfan", "electriciron", "electricpot", "electricsaw", "electronickeyboard", "engine", "envelope", "equipment", "escalator", "exhibitionbooth", "extinguisher", "eyeglass", "fan", "faucet", "faxmachine", "fence", "ferriswheel", "fireextinguisher", "firehydrant", "fireplace", "fish", "fishtank", "fishbowl", "fishingnet", "fishingpole", "flag", "flagstaff", "flame", "flashlight", "floor", "flower", "fly", "foam", "food", "footbridge", "forceps", "fork", "forklift", "fountain", "fox", "frame", "fridge", "frog", "fruit", "funnel", "furnace", "gamecontroller", "gamemachine", "gascylinder", "gashood", "gasstove", "giftbox", 
"glass", "glassmarble", "globe", "glove", "goal", "grandstand", "grass", "gravestone", "ground", "guardrail", "guitar", "gun", "hammer", "handcart", "handle", "handrail", "hanger", "harddiskdrive", "hat", "hay", "headphone", "heater", "helicopter", "helmet", "holder", "hook", "horse", "horse-drawncarriage", "hot-airballoon", "hydrovalve", "ice", "inflatorpump", "ipod", "iron", "ironingboard", "jar", "kart", "kettle", "key", "keyboard", "kitchenrange", "kite", "knife", "knifeblock", "ladder", "laddertruck", "ladle", "laptop", "leaves", "lid", "lifebuoy", "light", "lightbulb", "lighter", "line", "lion", "lobster", "lock", "machine", "mailbox", "mannequin", "map", "mask", "mat", "matchbook", "mattress", "menu", "metal", "meterbox", "microphone", "microwave", "mirror", "missile", "model", "money", "monkey", "mop", "motorbike", "mountain", "mouse", "mousepad", "musicalinstrument", "napkin", "net", "newspaper", "oar", "ornament", "outlet", "oven", "oxygenbottle", "pack", "pan", "paper", "paperbox", "papercutter", "parachute", "parasol", "parterre", "patio", "pelage", "pen", "pencontainer", "pencil", "person", "photo", "piano", "picture", "pig", "pillar", "pillow", "pipe", "pitcher", "plant", "plastic", "plate", "platform", "player", "playground", "pliers", "plume", "poker", "pokerchip", "pole", "pooltable", "postcard", "poster", "pot", "pottedplant", "printer", "projector", "pumpkin", "rabbit", "racket", "radiator", "radio", "rail", "rake", "ramp", "rangehood", "receiver", "recorder", "recreationalmachines", "remotecontrol", "road", "robot", "rock", "rocket", "rockinghorse", "rope", "rug", "ruler", "runway", "saddle", "sand", "saw", "scale", "scanner", "scissors", "scoop", "screen", "screwdriver", "sculpture", "scythe", "sewer", "sewingmachine", "shed", "sheep", "shell", "shelves", "shoe", "shoppingcart", "shovel", "sidecar", "sidewalk", "sign", "signallight", "sink", "skateboard", "ski", "sky", "sled", "slippers", "smoke", "snail", "snake", "snow", "snowmobiles", "sofa", "spanner", "spatula", "speaker", "speedbump", "spicecontainer", "spoon", "sprayer", "squirrel", "stage", "stair", "stapler", "stick", "stickynote", "stone", "stool", "stove", "straw", "stretcher", "sun", "sunglass", "sunshade", "surveillancecamera", "swan", "sweeper", "swimring", "swimmingpool", "swing", "switch", "table", "tableware", "tank", "tap", "tape", "tarp", "telephone", "telephonebooth", "tent", "tire", "toaster", "toilet", "tong", "tool", "toothbrush", "towel", "toy", "toycar", "track", "train", "trampoline", "trashbin", "tray", "tree", "tricycle", "tripod", "trophy", "truck", "tube", "turtle", "tvmonitor", "tweezers", "typewriter", "umbrella", "unknown", "vacuumcleaner", "vendingmachine", "videocamera", "videogameconsole", "videoplayer", "videotape", "violin", "wakeboard", "wall", "wallet", "wardrobe", "washingmachine", "watch", "water", "waterdispenser", "waterpipe", "waterskateboard", "watermelon", "whale", "wharf", "wheel", "wheelchair", "window", "windowblinds", "wineglass", "wire", "wood", "wool"] - context_colors = [stuff_colors[i % len(stuff_colors)] for i in range(len(context_459_classes))] - ret = { - "stuff_colors" : context_colors, - "stuff_classes" : context_459_classes, - } - return ret - -def register_pascal_context_459(root): - root = os.path.join(root, "VOCdevkit", "VOC2010") - meta = _get_pascal_context_459_meta() - for name, image_dirname, sem_seg_dirname in [ - ("test", "JPEGImages", "annotations_detectron2/pc459_val"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = 
os.path.join(root, sem_seg_dirname) - name = f"context_459_{name}_sem_seg" - DatasetCatalog.register(name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext='tif', image_ext='jpg')) - MetadataCatalog.get(name).set(image_root=image_dir, seg_seg_root=gt_dir, evaluator_type="sem_seg", ignore_label=459, **meta,) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_pascal_context_59(_root) -register_pascal_context_459(_root) \ No newline at end of file diff --git a/spaces/hands012/gpt-academic/docs/waifu_plugin/autoload.js b/spaces/hands012/gpt-academic/docs/waifu_plugin/autoload.js deleted file mode 100644 index 3464a5cd44b0d4e1b0f2528bd01fc1793275b964..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/docs/waifu_plugin/autoload.js +++ /dev/null @@ -1,30 +0,0 @@ -try { - $("").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head'); - $('body').append('
        '); - $.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() { - $.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() { - /* 可直接修改部分参数 */ - live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API - live2d_settings['modelId'] = 5; // 默认模型 ID - live2d_settings['modelTexturesId'] = 1; // 默认材质 ID - live2d_settings['modelStorage'] = false; // 不储存模型 ID - live2d_settings['waifuSize'] = '210x187'; - live2d_settings['waifuTipsSize'] = '187x52'; - live2d_settings['canSwitchModel'] = true; - live2d_settings['canSwitchTextures'] = true; - live2d_settings['canSwitchHitokoto'] = false; - live2d_settings['canTakeScreenshot'] = false; - live2d_settings['canTurnToHomePage'] = false; - live2d_settings['canTurnToAboutPage'] = false; - live2d_settings['showHitokoto'] = false; // 显示一言 - live2d_settings['showF12Status'] = false; // 显示加载状态 - live2d_settings['showF12Message'] = false; // 显示看板娘消息 - live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示 - live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示 - live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词 - - /* 在 initModel 前添加 */ - initModel("file=docs/waifu_plugin/waifu-tips.json"); - }}); - }}); -} catch(err) { console.log("[Error] JQuery is not defined.") } diff --git a/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include -#include -#include -#include // std::pair, std::move, std::forward -#include -#include // aligned_storage_t -#include -#include -#include -#include - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic; - -template -struct msg_t; - -template -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast(&data_) = - *static_cast(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - 
auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - return static_cast(acc_h.get()); -} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)) + size)); -} - -struct chunk_t { - std::atomic &conns() noexcept { - return *reinterpret_cast *>(this); - } - - void *data() noexcept { - return reinterpret_cast(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast(chunks_mem() + (chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ std::shared_lock guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - 
ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool sub_rc(ipc::wr, - std::atomic &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template -bool sub_rc(ipc::wr, - std::atomic &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool clear_message(void* p) { - auto msg = static_cast(p); - if (msg->storage_) { - std::int32_t r_size = static_cast(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast(&msg->data_), - static_cast(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 
0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map tls; - return tls; - } -}; - -template -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template -struct queue_generator { - - using queue_t = ipc::queue, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator::queue_t; -using conn_info_t = typename queue_generator::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - 
ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast(size) - - static_cast(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast(size) - offset - static_cast(ipc::data_length), - static_cast(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast(ipc::data_length), - static_cast(data) + offset, - static_cast(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size); - if (!que->force_push( - clear_message, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. 
- return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl - -template -using policy_t = ipc::policy::choose; - -} // internal-linkage - -namespace ipc { - -template -ipc::handle_t chan_impl::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template -bool chan_impl::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl>::connect(ph, name, mode & receiver); -} - -template -bool chan_impl::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl>::reconnect(ph, mode & receiver); -} - -template -void chan_impl::disconnect(ipc::handle_t h) { - detail_impl>::disconnect(h); -} - -template -void chan_impl::destroy(ipc::handle_t h) { - detail_impl>::destroy(h); -} - -template -char const * chan_impl::name(ipc::handle_t h) { - auto info = detail_impl>::info_of(h); - return (info == nullptr) ? 
nullptr : info->name_.c_str(); -} - -template -std::size_t chan_impl::recv_count(ipc::handle_t h) { - return detail_impl>::recv_count(h); -} - -template -bool chan_impl::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl>::wait_for_recv(h, r_count, tm); -} - -template -bool chan_impl::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::send(h, data, size, tm); -} - -template -buff_t chan_impl::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl>::recv(h, tm); -} - -template -bool chan_impl::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::try_send(h, data, size, tm); -} - -template -buff_t chan_impl::try_recv(ipc::handle_t h) { - return detail_impl>::try_recv(h); -} - -template struct chan_impl>; -// template struct chan_impl>; // TBD -// template struct chan_impl>; // TBD -template struct chan_impl>; -template struct chan_impl>; - -} // namespace ipc diff --git a/spaces/hbestm/gpt-academic-play/docs/waifu_plugin/waifu.css b/spaces/hbestm/gpt-academic-play/docs/waifu_plugin/waifu.css deleted file mode 100644 index 42639df0794e46fc58f66e2c772e2bf9ba605eed..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/docs/waifu_plugin/waifu.css +++ /dev/null @@ -1,290 +0,0 @@ -.waifu { - position: fixed; - bottom: 0; - z-index: 1; - font-size: 0; - -webkit-transform: translateY(3px); - transform: translateY(3px); -} -.waifu:hover { - -webkit-transform: translateY(0); - transform: translateY(0); -} -.waifu-tips { - opacity: 0; - margin: -20px 20px; - padding: 5px 10px; - border: 1px solid rgba(224, 186, 140, 0.62); - border-radius: 12px; - background-color: rgba(236, 217, 188, 0.5); - box-shadow: 0 3px 15px 2px rgba(191, 158, 118, 0.2); - text-overflow: ellipsis; - overflow: hidden; - position: absolute; - animation-delay: 5s; - animation-duration: 50s; - animation-iteration-count: infinite; - animation-name: shake; - animation-timing-function: ease-in-out; -} -.waifu-tool { - display: none; - color: #aaa; - top: 50px; - right: 10px; - position: absolute; -} -.waifu:hover .waifu-tool { - display: block; -} -.waifu-tool span { - display: block; - cursor: pointer; - color: #5b6c7d; - transition: 0.2s; -} -.waifu-tool span:hover { - color: #34495e; -} -.waifu #live2d{ - position: relative; -} - -@keyframes shake { - 2% { - transform: translate(0.5px, -1.5px) rotate(-0.5deg); - } - - 4% { - transform: translate(0.5px, 1.5px) rotate(1.5deg); - } - - 6% { - transform: translate(1.5px, 1.5px) rotate(1.5deg); - } - - 8% { - transform: translate(2.5px, 1.5px) rotate(0.5deg); - } - - 10% { - transform: translate(0.5px, 2.5px) rotate(0.5deg); - } - - 12% { - transform: translate(1.5px, 1.5px) rotate(0.5deg); - } - - 14% { - transform: translate(0.5px, 0.5px) rotate(0.5deg); - } - - 16% { - transform: translate(-1.5px, -0.5px) rotate(1.5deg); - } - - 18% { - transform: translate(0.5px, 0.5px) rotate(1.5deg); - } - - 20% { - transform: translate(2.5px, 2.5px) rotate(1.5deg); - } - - 22% { - transform: translate(0.5px, -1.5px) rotate(1.5deg); - } - - 24% { - transform: translate(-1.5px, 1.5px) rotate(-0.5deg); - } - - 26% { - transform: translate(1.5px, 0.5px) rotate(1.5deg); - } - - 28% { - transform: translate(-0.5px, -0.5px) rotate(-0.5deg); - } - - 30% { - transform: translate(1.5px, -0.5px) rotate(-0.5deg); - } - - 32% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 34% { - transform: translate(2.5px, 2.5px) 
rotate(-0.5deg); - } - - 36% { - transform: translate(0.5px, -1.5px) rotate(0.5deg); - } - - 38% { - transform: translate(2.5px, -0.5px) rotate(-0.5deg); - } - - 40% { - transform: translate(-0.5px, 2.5px) rotate(0.5deg); - } - - 42% { - transform: translate(-1.5px, 2.5px) rotate(0.5deg); - } - - 44% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 46% { - transform: translate(1.5px, -0.5px) rotate(-0.5deg); - } - - 48% { - transform: translate(2.5px, -0.5px) rotate(0.5deg); - } - - 50% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 52% { - transform: translate(-0.5px, 1.5px) rotate(0.5deg); - } - - 54% { - transform: translate(-1.5px, 1.5px) rotate(0.5deg); - } - - 56% { - transform: translate(0.5px, 2.5px) rotate(1.5deg); - } - - 58% { - transform: translate(2.5px, 2.5px) rotate(0.5deg); - } - - 60% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 62% { - transform: translate(-1.5px, 0.5px) rotate(1.5deg); - } - - 64% { - transform: translate(-1.5px, 1.5px) rotate(1.5deg); - } - - 66% { - transform: translate(0.5px, 2.5px) rotate(1.5deg); - } - - 68% { - transform: translate(2.5px, -1.5px) rotate(1.5deg); - } - - 70% { - transform: translate(2.5px, 2.5px) rotate(0.5deg); - } - - 72% { - transform: translate(-0.5px, -1.5px) rotate(1.5deg); - } - - 74% { - transform: translate(-1.5px, 2.5px) rotate(1.5deg); - } - - 76% { - transform: translate(-1.5px, 2.5px) rotate(1.5deg); - } - - 78% { - transform: translate(-1.5px, 2.5px) rotate(0.5deg); - } - - 80% { - transform: translate(-1.5px, 0.5px) rotate(-0.5deg); - } - - 82% { - transform: translate(-1.5px, 0.5px) rotate(-0.5deg); - } - - 84% { - transform: translate(-0.5px, 0.5px) rotate(1.5deg); - } - - 86% { - transform: translate(2.5px, 1.5px) rotate(0.5deg); - } - - 88% { - transform: translate(-1.5px, 0.5px) rotate(1.5deg); - } - - 90% { - transform: translate(-1.5px, -0.5px) rotate(-0.5deg); - } - - 92% { - transform: translate(-1.5px, -1.5px) rotate(1.5deg); - } - - 94% { - transform: translate(0.5px, 0.5px) rotate(-0.5deg); - } - - 96% { - transform: translate(2.5px, -0.5px) rotate(-0.5deg); - } - - 98% { - transform: translate(-1.5px, -1.5px) rotate(-0.5deg); - } - - 0%, 100% { - transform: translate(0, 0) rotate(0); - } -} -@font-face { - font-family: 'Flat-UI-Icons'; - src: url('flat-ui-icons-regular.eot'); - src: url('flat-ui-icons-regular.eot?#iefix') format('embedded-opentype'), url('flat-ui-icons-regular.woff') format('woff'), url('flat-ui-icons-regular.ttf') format('truetype'), url('flat-ui-icons-regular.svg#flat-ui-icons-regular') format('svg'); -} -[class^="fui-"], -[class*="fui-"] { - font-family: 'Flat-UI-Icons'; - speak: none; - font-style: normal; - font-weight: normal; - font-variant: normal; - text-transform: none; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} -.fui-cross:before { - content: "\e609"; -} -.fui-info-circle:before { - content: "\e60f"; -} -.fui-photo:before { - content: "\e62a"; -} -.fui-eye:before { - content: "\e62c"; -} -.fui-chat:before { - content: "\e62d"; -} -.fui-home:before { - content: "\e62e"; -} -.fui-user:before { - content: "\e631"; -} \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_NoNormalization_lr1en3.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_NoNormalization_lr1en3.py deleted file mode 
100644 index 83173aee22652d4def3f130557a1714bb5071858..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_NoNormalization_lr1en3.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from nnunet.training.network_training.nnUNet_variants.architectural_variants.nnUNetTrainerV2_NoNormalization import \ - nnUNetTrainerV2_NoNormalization - - -class nnUNetTrainerV2_NoNormalization_lr1en3(nnUNetTrainerV2_NoNormalization): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.initial_lr = 1e-3 diff --git a/spaces/huggingface-projects/diffusers-gallery-bot/app.py b/spaces/huggingface-projects/diffusers-gallery-bot/app.py deleted file mode 100644 index fcb36ad2a4082f6e22245e5618f265232cc7d1ac..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/diffusers-gallery-bot/app.py +++ /dev/null @@ -1,425 +0,0 @@ -from enum import Enum -import os -import re -import aiohttp -import requests -import json -import subprocess -import asyncio -from io import BytesIO -import uuid -import yaml - -from math import ceil -from tqdm import tqdm -from pathlib import Path -from huggingface_hub import Repository -from PIL import Image, ImageOps -from fastapi import FastAPI, BackgroundTasks -from fastapi.responses import HTMLResponse - -from fastapi_utils.tasks import repeat_every -from fastapi.middleware.cors import CORSMiddleware -import boto3 -from datetime import datetime -from db import Database - -AWS_ACCESS_KEY_ID = os.getenv("MY_AWS_ACCESS_KEY_ID") -AWS_SECRET_KEY = os.getenv("MY_AWS_SECRET_KEY") -AWS_S3_BUCKET_NAME = os.getenv("MY_AWS_S3_BUCKET_NAME") - - -HF_TOKEN = os.environ.get("HF_TOKEN") - -S3_DATA_FOLDER = Path("sd-multiplayer-data") - -DB_FOLDER = Path("diffusers-gallery-data") - -CLASSIFIER_URL = ( - "https://radames-aesthetic-style-nsfw-classifier.hf.space/run/inference" -) -ASSETS_URL = "https://d26smi9133w0oo.cloudfront.net/diffusers-gallery/" - -BLOCKED_MODELS_REGEX = re.compile(r"(CyberHarem)", re.IGNORECASE) - -s3 = boto3.client( - service_name="s3", - aws_access_key_id=AWS_ACCESS_KEY_ID, - aws_secret_access_key=AWS_SECRET_KEY, -) - - -repo = Repository( - local_dir=DB_FOLDER, - repo_type="dataset", - clone_from="huggingface-projects/diffusers-gallery-data", - use_auth_token=True, -) -repo.git_pull() - -database = Database(DB_FOLDER) - - -async def upload_resize_image_url(session, image_url): - print(f"Uploading image {image_url}") - try: - async with session.get(image_url) as response: - if response.status == 200 and ( - 
response.headers["content-type"].startswith("image") - or response.headers["content-type"].startswith("application") - ): - image = Image.open(BytesIO(await response.read())).convert("RGB") - # resize image proportional - image = ImageOps.fit(image, (400, 400), Image.LANCZOS) - image_bytes = BytesIO() - image.save(image_bytes, format="JPEG") - image_bytes.seek(0) - fname = f"{uuid.uuid4()}.jpg" - s3.upload_fileobj( - Fileobj=image_bytes, - Bucket=AWS_S3_BUCKET_NAME, - Key="diffusers-gallery/" + fname, - ExtraArgs={ - "ContentType": "image/jpeg", - "CacheControl": "max-age=31536000", - }, - ) - return fname - except Exception as e: - print(f"Error uploading image {image_url}: {e}") - return None - - -def fetch_models(page=0): - response = requests.get( - f"https://huggingface.co/models-json?pipeline_tag=text-to-image&p={page}" - ) - data = response.json() - return { - "models": [model for model in data["models"] if not model["private"]], - "numItemsPerPage": data["numItemsPerPage"], - "numTotalItems": data["numTotalItems"], - "pageIndex": data["pageIndex"], - } - - -def fetch_model_card(model_id): - response = requests.get(f"https://huggingface.co/{model_id}/raw/main/README.md") - return response.text - - -REGEX = re.compile(r'---(.*?)---', re.DOTALL) - -def get_yaml_data(text_content): - matches = REGEX.findall(text_content) - yaml_block = matches[0].strip() if matches else None - if yaml_block: - try: - data_dict = yaml.safe_load(yaml_block) - return data_dict - except yaml.YAMLError as exc: - print(exc) - return {} - -async def find_image_in_model_card(text, model_id): - base_url = f"https://huggingface.co/{model_id}/resolve/main/" - image_regex = re.compile(r"!\[.*\]\((.*?\.(png|jpg|jpeg|gif|bmp|webp))\)|src=\"(.*?\.(png|jpg|jpeg|gif|bmp|webp))\">", re.IGNORECASE) - matches = image_regex.findall(text) - urls = [] - for match in matches: - for url in match: - if url: - if not url.startswith("http") and not url.startswith("https"): - url = base_url + url - urls.append(url) - - if len(urls) == 0: - return [] - - print(urls) - async with aiohttp.ClientSession() as session: - tasks = [ - asyncio.ensure_future(upload_resize_image_url(session, image_url)) - for image_url in urls[0:3] - ] - return await asyncio.gather(*tasks) - - -def run_classifier(images): - images = [i for i in images if i is not None] - if len(images) > 0: - # classifying only the first image - images_urls = [ASSETS_URL + images[0]] - response = requests.post( - CLASSIFIER_URL, - json={ - "data": [ - {"urls": images_urls}, # json urls: list of images urls - False, # enable/disable gallery image output - None, # single image input - None, # files input - ] - }, - ).json() - - # data response is array data:[[{img0}, {img1}, {img2}...], Label, Gallery], - class_data = response["data"][0][0] - class_data_parsed = {row["label"]: round(row["score"], 3) for row in class_data} - - # update row data with classificator data - return class_data_parsed - else: - return {} - - -async def get_all_new_models(): - initial = fetch_models(0) - num_pages = ceil(initial["numTotalItems"] / initial["numItemsPerPage"]) - - print( - f"Total items: {initial['numTotalItems']} - Items per page: {initial['numItemsPerPage']}" - ) - print(f"Found {num_pages} pages") - - # fetch all models - new_models = [] - for page in tqdm(range(0, num_pages)): - print(f"Fetching page {page} of {num_pages}") - page_models = fetch_models(page) - new_models += page_models["models"] - return new_models - - -async def sync_data(): - print("Fetching models") - 
repo.git_pull() - all_models = await get_all_new_models() - print(f"Found {len(all_models)} models") - # save list of all models for ids - with open(DB_FOLDER / "models.json", "w") as f: - json.dump(all_models, f) - # with open(DB_FOLDER / "models.json", "r") as f: - # all_models = json.load(f) - - new_models_ids = [model["id"] for model in all_models] - new_models_ids = [model_id for model_id in new_models_ids if not re.match(BLOCKED_MODELS_REGEX, model_id)] - - # get existing models - with database.get_db() as db: - cursor = db.cursor() - cursor.execute("SELECT id FROM models") - existing_models = [row["id"] for row in cursor.fetchall()] - models_ids_to_add = list(set(new_models_ids) - set(existing_models)) - # find all models id to add from new_models - models = [model for model in all_models if model["id"] in models_ids_to_add] - - print(f"Found {len(models)} new models") - for model in tqdm(models): - model_id = model["id"] - print(f"\n\nFetching model {model_id}") - likes = model["likes"] - downloads = model["downloads"] - print("Fetching model card") - model_card = fetch_model_card(model_id) - print("Parsing model card") - model_card_data = get_yaml_data(model_card) - print("Finding images in model card") - images = await find_image_in_model_card(model_card, model_id) - - classifier = run_classifier(images) - print(images, classifier) - # update model row with image and classifier data - with database.get_db() as db: - cursor = db.cursor() - cursor.execute( - "INSERT INTO models(id, data, likes, downloads) VALUES (?, ?, ?, ?)", - [ - model_id, - json.dumps( - { - **model, - "meta": model_card_data, - "images": images, - "class": classifier, - } - ), - likes, - downloads, - ], - ) - db.commit() - print("\n\n\n\nTry to update images again\n\n\n") - with database.get_db() as db: - cursor = db.cursor() - cursor.execute("SELECT * from models") - to_all_models = list(cursor.fetchall()) - models_no_images = [] - for model in to_all_models: - model_data = json.loads(model["data"]) - images = model_data["images"] - filtered_images = [x for x in images if x is not None] - if len(filtered_images) == 0: - models_no_images.append(model) - - for model in tqdm(models_no_images): - model_id = model["id"] - model_data = json.loads(model["data"]) - print(f"\n\nFetching model {model_id}") - model_card = fetch_model_card(model_id) - print("Parsing model card") - model_card_data = get_yaml_data(model_card) - print("Finding images in model card") - images = await find_image_in_model_card(model_card, model_id) - classifier = run_classifier(images) - model_data["images"] = images - model_data["class"] = classifier - model_data["meta"] = model_card_data - # update model row with image and classifier data - with database.get_db() as db: - cursor = db.cursor() - cursor.execute( - "UPDATE models SET data = ? WHERE id = ?", - [json.dumps(model_data), model_id], - ) - db.commit() - - print("Update likes and downloads") - for model in tqdm(all_models): - model_id = model["id"] - likes = model["likes"] - downloads = model["downloads"] - with database.get_db() as db: - cursor = db.cursor() - cursor.execute( - "UPDATE models SET likes = ?, downloads = ? WHERE id = ?", - [likes, downloads, model_id], - ) - db.commit() - - print("Updating DB repository") - time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") - cmd = f"git add . 
&& git commit --amend -m 'update at {time}' && git push --force" - print(cmd) - subprocess.Popen(cmd, cwd=DB_FOLDER, shell=True) - - -app = FastAPI() -app.add_middleware( - CORSMiddleware, - allow_origins=["*"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - - -# @ app.get("/sync") -# async def sync(background_tasks: BackgroundTasks): -# await sync_data() -# return "Synced data to huggingface datasets" - - -MAX_PAGE_SIZE = 30 - - -class Sort(str, Enum): - trending = "trending" - recent = "recent" - likes = "likes" - - -class Style(str, Enum): - all = "all" - anime = "anime" - s3D = "3d" - realistic = "realistic" - nsfw = "nsfw" - lora = "lora" - - -@app.get("/api/models") -def get_page( - page: int = 1, sort: Sort = Sort.trending, style: Style = Style.all, tag: str = None -): - page = page if page > 0 else 1 - if sort == Sort.trending: - sort_query = "likes / MYPOWER((JULIANDAY('now') - JULIANDAY(datetime(json_extract(data, '$.lastModified')))) + 2, 2) DESC" - elif sort == Sort.recent: - sort_query = "datetime(json_extract(data, '$.lastModified')) DESC" - elif sort == Sort.likes: - sort_query = "likes DESC" - - if style == Style.all: - style_query = "isNFSW = false" - elif style == Style.anime: - style_query = "json_extract(data, '$.class.anime') > 0.1 AND isNFSW = false" - elif style == Style.s3D: - style_query = "json_extract(data, '$.class.3d') > 0.1 AND isNFSW = false" - elif style == Style.realistic: - style_query = "json_extract(data, '$.class.real_life') > 0.1 AND isNFSW = false" - elif style == Style.lora: - style_query = "json_extract(data, '$.meta.tags') LIKE '%lora%' AND isNFSW = false" - elif style == Style.nsfw: - style_query = "isNFSW = true" - - - - with database.get_db() as db: - cursor = db.cursor() - cursor.execute( - f""" - SELECT *, - COUNT(*) OVER() AS total, - isNFSW - FROM ( - SELECT *, - json_extract(data, '$.class.explicit') > 0.3 OR json_extract(data, '$.class.suggestive') > 0.3 AS isNFSW - FROM models - ) AS subquery - WHERE (? IS NULL AND likes > 1 OR ? IS NOT NULL) - AND {style_query} - AND (? IS NULL OR EXISTS ( - SELECT 1 - FROM json_each(json_extract(data, '$.meta.tags')) - WHERE json_each.value = ? - )) - ORDER BY {sort_query} - LIMIT {MAX_PAGE_SIZE} OFFSET {(page - 1) * MAX_PAGE_SIZE}; - """, - (tag, tag, tag, tag), - ) - results = cursor.fetchall() - total = results[0]["total"] if results else 0 - total_pages = (total + MAX_PAGE_SIZE - 1) // MAX_PAGE_SIZE - models_data = [] - for result in results: - data = json.loads(result["data"]) - images = data["images"] - filtered_images = [x for x in images if x is not None] - # clean nulls - data["images"] = filtered_images - # update downloads and likes from db table - data["downloads"] = result["downloads"] - data["likes"] = result["likes"] - data["isNFSW"] = bool(result["isNFSW"]) - models_data.append(data) - - return {"models": models_data, "totalPages": total_pages} - - -@app.get("/") -def read_root(): - # return html page from string - return HTMLResponse( - """ -

        Just a bot to sync data from diffusers gallery please go to - https://huggingface.co/spaces/huggingface-projects/diffusers-gallery -

        """ - ) - - -@app.on_event("startup") -@repeat_every(seconds=60 * 60 * 6, wait_first=False) -async def repeat_sync(): - await sync_data() - return "Synced data to huggingface datasets" diff --git a/spaces/hylee/photo2cartoon/p2c/train.py b/spaces/hylee/photo2cartoon/p2c/train.py deleted file mode 100644 index d276387963418adf6d1aaddd9d1da968c0faf7be..0000000000000000000000000000000000000000 --- a/spaces/hylee/photo2cartoon/p2c/train.py +++ /dev/null @@ -1,84 +0,0 @@ -from models import UgatitSadalinHourglass -import argparse -import shutil -from utils import * - - -def parse_args(): - """parsing and configuration""" - desc = "photo2cartoon" - parser = argparse.ArgumentParser(description=desc) - parser.add_argument('--phase', type=str, default='train', help='[train / test]') - parser.add_argument('--light', type=str2bool, default=True, help='[U-GAT-IT full version / U-GAT-IT light version]') - parser.add_argument('--dataset', type=str, default='photo2cartoon', help='dataset name') - - parser.add_argument('--iteration', type=int, default=1000000, help='The number of training iterations') - parser.add_argument('--batch_size', type=int, default=1, help='The size of batch size') - parser.add_argument('--print_freq', type=int, default=1000, help='The number of image print freq') - parser.add_argument('--save_freq', type=int, default=1000, help='The number of model save freq') - parser.add_argument('--decay_flag', type=str2bool, default=True, help='The decay_flag') - - parser.add_argument('--lr', type=float, default=0.0001, help='The learning rate') - parser.add_argument('--adv_weight', type=int, default=1, help='Weight for GAN') - parser.add_argument('--cycle_weight', type=int, default=50, help='Weight for Cycle') - parser.add_argument('--identity_weight', type=int, default=10, help='Weight for Identity') - parser.add_argument('--cam_weight', type=int, default=1000, help='Weight for CAM') - parser.add_argument('--faceid_weight', type=int, default=1, help='Weight for Face ID') - - parser.add_argument('--ch', type=int, default=32, help='base channel number per layer') - parser.add_argument('--n_dis', type=int, default=6, help='The number of discriminator layer') - - parser.add_argument('--img_size', type=int, default=256, help='The size of image') - parser.add_argument('--img_ch', type=int, default=3, help='The size of image channel') - - # parser.add_argument('--device', type=str, default='cuda:0', help='Set gpu mode: [cpu, cuda]') - parser.add_argument('--gpu_ids', type=int, default=[0], nargs='+', help='Set [0, 1, 2, 3] for multi-gpu training') - parser.add_argument('--benchmark_flag', type=str2bool, default=False) - parser.add_argument('--resume', type=str2bool, default=False) - parser.add_argument('--rho_clipper', type=float, default=1.0) - parser.add_argument('--w_clipper', type=float, default=1.0) - parser.add_argument('--pretrained_weights', type=str, default='', help='pretrained weight path') - - args = parser.parse_args() - args.result_dir = './experiment/{}-size{}-ch{}-{}-lr{}-adv{}-cyc{}-id{}-identity{}-cam{}'.format( - os.path.basename(__file__)[:-3], - args.img_size, - args.ch, - args.light, - args.lr, - args.adv_weight, - args.cycle_weight, - args.faceid_weight, - args.identity_weight, - args.cam_weight) - - return check_args(args) - - -def check_args(args): - check_folder(os.path.join(args.result_dir, args.dataset, 'model')) - check_folder(os.path.join(args.result_dir, args.dataset, 'img')) - check_folder(os.path.join(args.result_dir, args.dataset, 'test')) - 
shutil.copy(__file__, args.result_dir) - return args - - -def main(): - args = parse_args() - if args is None: - exit() - - gan = UgatitSadalinHourglass(args) - gan.build_model() - - if args.phase == 'train': - gan.train() - print(" [*] Training finished!") - - if args.phase == 'test': - gan.test() - print(" [*] Test finished!") - - -if __name__ == '__main__': - main() diff --git a/spaces/hyoo/imagine/app.py b/spaces/hyoo/imagine/app.py deleted file mode 100644 index 94501336141613557acc27a9c9f87aa91e1a5316..0000000000000000000000000000000000000000 --- a/spaces/hyoo/imagine/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr - -# import os -# import sys -# import threading -# import time - -# def restart_script_periodically(): -# while True: -# time.sleep(600) # 10 minutes -# try: -# os.execl(sys.executable, sys.executable, *sys.argv) -# except: -# pass -# threading.Thread(target=restart_script_periodically, daemon=True).start() - -imagine = gr.Interface.load( "models/dreamlike-art/dreamlike-photoreal-2.0" ) - -with gr.Blocks( analytics_enabled=False ) as app: - - Prompt = gr.Textbox( label="Prompt" ) - Imagine = gr.Button( "Imagine" ) - Image = gr.Image() - Info = gr.Markdown( "# [$hyoo_artist](https://artist.hyoo.ru/)" ) - - Imagine.click( imagine, inputs=[ Prompt ], outputs=[ Image ], api_name="imagine" ) - - app.launch( inline=True ) - block.queue( concurrency_count=1 ) diff --git a/spaces/hysts-duplicates/comparing-captioning-models/style.css b/spaces/hysts-duplicates/comparing-captioning-models/style.css deleted file mode 100644 index c031280ed2fae5d64d3024157cbdbc57508db86b..0000000000000000000000000000000000000000 --- a/spaces/hysts-duplicates/comparing-captioning-models/style.css +++ /dev/null @@ -1,10 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: #fff; - background: #1565c0; - border-radius: 100vh; -} diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/dataset.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/dataset.py deleted file mode 100644 index 595eda79c56400a3243b2bd0d13a0dce9b8afd1d..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/dataset.py +++ /dev/null @@ -1,268 +0,0 @@ -import numbers -import os -import queue as Queue -import threading -from functools import partial -from typing import Iterable - -import mxnet as mx -import numpy as np -import torch -from torch import distributed -from torch.utils.data import DataLoader -from torch.utils.data import Dataset -from torchvision import transforms -from torchvision.datasets import ImageFolder -from utils.utils_distributed_sampler import DistributedSampler -from utils.utils_distributed_sampler import get_dist_info -from utils.utils_distributed_sampler import worker_init_fn - - -def get_dataloader( - root_dir, - local_rank, - batch_size, - dali=False, - seed=2048, - num_workers=2, -) -> Iterable: - - rec = os.path.join(root_dir, "train.rec") - idx = os.path.join(root_dir, "train.idx") - train_set = None - - # Synthetic - if root_dir == "synthetic": - train_set = SyntheticDataset() - dali = False - - # Mxnet RecordIO - elif os.path.exists(rec) and os.path.exists(idx): - train_set = MXFaceDataset(root_dir=root_dir, local_rank=local_rank) - - # Image Folder - else: - transform = transforms.Compose( - [ - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), - ] - ) - train_set = ImageFolder(root_dir, transform) - - # DALI - if dali: - return 
dali_data_iter(batch_size=batch_size, rec_file=rec, idx_file=idx, num_threads=2, local_rank=local_rank) - - rank, world_size = get_dist_info() - train_sampler = DistributedSampler(train_set, num_replicas=world_size, rank=rank, shuffle=True, seed=seed) - - if seed is None: - init_fn = None - else: - init_fn = partial(worker_init_fn, num_workers=num_workers, rank=rank, seed=seed) - - train_loader = DataLoaderX( - local_rank=local_rank, - dataset=train_set, - batch_size=batch_size, - sampler=train_sampler, - num_workers=num_workers, - pin_memory=True, - drop_last=True, - worker_init_fn=init_fn, - ) - - return train_loader - - -class BackgroundGenerator(threading.Thread): - def __init__(self, generator, local_rank, max_prefetch=6): - super(BackgroundGenerator, self).__init__() - self.queue = Queue.Queue(max_prefetch) - self.generator = generator - self.local_rank = local_rank - self.daemon = True - self.start() - - def run(self): - torch.cuda.set_device(self.local_rank) - for item in self.generator: - self.queue.put(item) - self.queue.put(None) - - def next(self): - next_item = self.queue.get() - if next_item is None: - raise StopIteration - return next_item - - def __next__(self): - return self.next() - - def __iter__(self): - return self - - -class DataLoaderX(DataLoader): - def __init__(self, local_rank, **kwargs): - super(DataLoaderX, self).__init__(**kwargs) - self.stream = torch.cuda.Stream(local_rank) - self.local_rank = local_rank - - def __iter__(self): - self.iter = super(DataLoaderX, self).__iter__() - self.iter = BackgroundGenerator(self.iter, self.local_rank) - self.preload() - return self - - def preload(self): - self.batch = next(self.iter, None) - if self.batch is None: - return None - with torch.cuda.stream(self.stream): - for k in range(len(self.batch)): - self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True) - - def __next__(self): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - if batch is None: - raise StopIteration - self.preload() - return batch - - -class MXFaceDataset(Dataset): - def __init__(self, root_dir, local_rank): - super(MXFaceDataset, self).__init__() - self.transform = transforms.Compose( - [ - transforms.ToPILImage(), - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), - ] - ) - self.root_dir = root_dir - self.local_rank = local_rank - path_imgrec = os.path.join(root_dir, "train.rec") - path_imgidx = os.path.join(root_dir, "train.idx") - self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, "r") - s = self.imgrec.read_idx(0) - header, _ = mx.recordio.unpack(s) - if header.flag > 0: - self.header0 = (int(header.label[0]), int(header.label[1])) - self.imgidx = np.array(range(1, int(header.label[0]))) - else: - self.imgidx = np.array(list(self.imgrec.keys)) - - def __getitem__(self, index): - idx = self.imgidx[index] - s = self.imgrec.read_idx(idx) - header, img = mx.recordio.unpack(s) - label = header.label - if not isinstance(label, numbers.Number): - label = label[0] - label = torch.tensor(label, dtype=torch.long) - sample = mx.image.imdecode(img).asnumpy() - if self.transform is not None: - sample = self.transform(sample) - return sample, label - - def __len__(self): - return len(self.imgidx) - - -class SyntheticDataset(Dataset): - def __init__(self): - super(SyntheticDataset, self).__init__() - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = np.transpose(img, (2, 0, 1)) - img = 
torch.from_numpy(img).squeeze(0).float() - img = ((img / 255) - 0.5) / 0.5 - self.img = img - self.label = 1 - - def __getitem__(self, index): - return self.img, self.label - - def __len__(self): - return 1000000 - - -def dali_data_iter( - batch_size: int, - rec_file: str, - idx_file: str, - num_threads: int, - initial_fill=32768, - random_shuffle=True, - prefetch_queue_depth=1, - local_rank=0, - name="reader", - mean=(127.5, 127.5, 127.5), - std=(127.5, 127.5, 127.5), -): - """ - Parameters: - ---------- - initial_fill: int - Size of the buffer that is used for shuffling. If random_shuffle is False, this parameter is ignored. - - """ - rank: int = distributed.get_rank() - world_size: int = distributed.get_world_size() - import nvidia.dali.fn as fn - import nvidia.dali.types as types - from nvidia.dali.pipeline import Pipeline - from nvidia.dali.plugin.pytorch import DALIClassificationIterator - - pipe = Pipeline( - batch_size=batch_size, - num_threads=num_threads, - device_id=local_rank, - prefetch_queue_depth=prefetch_queue_depth, - ) - condition_flip = fn.random.coin_flip(probability=0.5) - with pipe: - jpegs, labels = fn.readers.mxnet( - path=rec_file, - index_path=idx_file, - initial_fill=initial_fill, - num_shards=world_size, - shard_id=rank, - random_shuffle=random_shuffle, - pad_last_batch=False, - name=name, - ) - images = fn.decoders.image(jpegs, device="mixed", output_type=types.RGB) - images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, mean=mean, std=std, mirror=condition_flip) - pipe.set_outputs(images, labels) - pipe.build() - return DALIWarper( - DALIClassificationIterator( - pipelines=[pipe], - reader_name=name, - ) - ) - - -@torch.no_grad() -class DALIWarper(object): - def __init__(self, dali_iter): - self.iter = dali_iter - - def __next__(self): - data_dict = self.iter.__next__()[0] - tensor_data = data_dict["data"].cuda() - tensor_label: torch.Tensor = data_dict["label"].cuda().long() - tensor_label.squeeze_() - return tensor_data, tensor_label - - def __iter__(self): - return self - - def reset(self): - self.iter.reset() diff --git a/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/Dockerfile b/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/Dockerfile deleted file mode 100644 index 858c5baf4e0826a63e309fe9d29920cb4f2b2df5..0000000000000000000000000000000000000000 --- a/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/Dockerfile +++ /dev/null @@ -1,62 +0,0 @@ -FROM python:3.8 - - -RUN apt-get update && apt-get install --no-install-recommends -y \ - build-essential \ - # python3.8 \ - # python3-pip \ - # python3-setuptools \ - git \ - wget \ - && apt-get clean && rm -rf /var/lib/apt/lists/* - -RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y - -WORKDIR /code - -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH \ - PYTHONPATH=$HOME/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces - -# RUN conda install python=3.8 - -RUN pip install setuptools-rust -RUN pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 -RUN pip install gradio scikit-image pillow openmim -RUN pip install --upgrade setuptools - -WORKDIR /home/user - -RUN 
--mount=type=secret,id=git_token,mode=0444,required=true \ - git clone --branch mmseg-only https://$(cat /run/secrets/git_token)@github.com/NASA-IMPACT/hls-foundation-os.git - - -WORKDIR hls-foundation-os - -RUN git checkout 9968269915db8402bf4a6d0549df9df57d489e5a - -RUN pip install -e . - -RUN mim install mmcv-full==1.6.2 -f https://download.openmmlab.com/mmcv/dist/11.3/1.11.0/index.html - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/code/miniconda/lib" - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user - -COPY --chown=user . $HOME/app - -CMD ["python3", "app.py"] \ No newline at end of file diff --git a/spaces/iccv23-diffusers-demo/LoraTheExplorer/share_btn.py b/spaces/iccv23-diffusers-demo/LoraTheExplorer/share_btn.py deleted file mode 100644 index cb4b3c67c3ef4018379592a837140b41f48cacbd..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/LoraTheExplorer/share_btn.py +++ /dev/null @@ -1,76 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const selectedLoRA = gradioEl.querySelector('#selected_lora').innerHTML; - const inputPrompt = gradioEl.querySelector('#prompt input').value; - const outputImgEl = gradioEl.querySelector('#result-image img'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getInputImgFile(outputImgEl); - const urlInputImg = await uploadFile(inputFile); - - const descriptionMd = ` - -${selectedLoRA} - -### Prompt -${inputPrompt} - -#### Generated Image: - -`; - const params = new URLSearchParams({ - title: inputPrompt, - description: descriptionMd, - preview: true - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/multimodalart/LoraTheExplorer/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/innovatorved/whisper.api/app/tests/utils/constant.py b/spaces/innovatorved/whisper.api/app/tests/utils/constant.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/inplisQlawa/anything-midjourney-v4-1/Baixar Filme Uma Carta De Amor Dublado.epub VERIFIED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Baixar Filme Uma Carta De Amor Dublado.epub VERIFIED.md deleted file mode 100644 index 649ef6ab4e7b64e1051be08f4471c1f91122f661..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Baixar Filme Uma Carta De Amor Dublado.epub VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Baixar Filme Uma Carta De Amor Dublado.epub


        Download File: https://urlin.us/2uEwM0



        - - 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Childish Gambino Camp Deluxe Edition Free Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Childish Gambino Camp Deluxe Edition Free Download.md deleted file mode 100644 index e9fd7f5cfd26463a010a1324931c1255b36265c5..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Childish Gambino Camp Deluxe Edition Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        childish gambino camp deluxe edition free download


        Download: https://urlin.us/2uExV3



        - - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (((HOT)) Freemake Video Converter Gold 5 1 11).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (((HOT)) Freemake Video Converter Gold 5 1 11).md deleted file mode 100644 index aa0a2b65593ee1fd742adc14ed0bd239f0a01699..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (((HOT)) Freemake Video Converter Gold 5 1 11).md +++ /dev/null @@ -1,7 +0,0 @@ -
        -

        Whether you are watching a video online or downloading it from online sources, you need a video converter to change its format. VLC (a media player) can play nearly all popular video formats: you can open your downloaded videos in VLC and convert them to the format best suited to your devices. It also plays streamed and downloaded media at high quality and bitrate, and with its user-friendly interface VLC can be used by experts and beginners alike, with no previous experience.
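        A minimal sketch of the kind of conversion described above, assuming the ffmpeg binary is installed and on PATH; the file names and codec choices (H.264 video, AAC audio) are only illustrative:

import subprocess

def convert_to_mp4(src: str, dst: str) -> None:
    """Re-encode a downloaded video into an MP4 that most devices can play."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,          # input file, e.g. a downloaded .mkv
            "-c:v", "libx264",  # widely supported video codec
            "-c:a", "aac",      # widely supported audio codec
            "-y",               # overwrite the output if it already exists
            dst,
        ],
        check=True,
    )

# Example with hypothetical file names:
# convert_to_mp4("downloaded_clip.mkv", "downloaded_clip.mp4")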

        -

        HD Online Player (Freemake Video Converter Gold 5 1 11)


        Download ->>->>->> https://urlin.us/2uEyyK



        -

        How long a conversion takes depends on the converter you are using, on the size of the video, and on the quality of the output format you are converting to. As a rough guide, a 300 MB video takes about 10 minutes to convert, while a 500 MB file usually takes around 40 minutes. The quality of the original video and audio also matters: lower-resolution videos convert faster, and a source encoded at a higher bitrate will take longer to finish.
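        As a rough illustration of those figures, the sketch below simply interpolates between the two quoted data points (300 MB ≈ 10 minutes, 500 MB ≈ 40 minutes); real timings depend on codec, resolution, bitrate and hardware:

def estimate_conversion_minutes(size_mb: float) -> float:
    """Rough estimate interpolated from the figures quoted above
    (300 MB ~ 10 min, 500 MB ~ 40 min); illustration only."""
    x1, y1 = 300.0, 10.0            # first quoted point
    x2, y2 = 500.0, 40.0            # second quoted point
    slope = (y2 - y1) / (x2 - x1)   # extra minutes per extra MB
    return max(0.0, y1 + slope * (size_mb - x1))

# estimate_conversion_minutes(400)  -> 25.0, halfway between the quoted figures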

        -

        Media Converter Express is a free tool that helps you manage your audio and video files, convert them to different formats, and perform other common tasks in just a couple of clicks. It is a complete, user-friendly and feature-rich program that makes multimedia management easier and less time-consuming. For example, besides the built-in help file, you can add subtitles to a video, combine several files into one, burn CDs, convert to MP3, AAC, OGG, WAV or WMA (among other formats), and even add cover art to MP3 or OGG files.
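        Before picking a target format it can help to inspect what a file actually contains; here is a minimal sketch using ffprobe (which ships with ffmpeg and is assumed to be installed), with a hypothetical file name:

import json
import subprocess

def media_info(path: str) -> dict:
    """Return container and stream metadata for a media file via ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

# Example with a hypothetical file name:
# info = media_info("song.wav")
# print(info["format"]["format_name"], len(info["streams"]))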

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Arya 2 Tamil Dubbed Movie Download HOT!.md b/spaces/inreVtussa/clothingai/Examples/Arya 2 Tamil Dubbed Movie Download HOT!.md deleted file mode 100644 index 57272e9eee21c9c7e132e61de615cf85eb3e0517..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Arya 2 Tamil Dubbed Movie Download HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Arya 2 Tamil Dubbed Movie Download


        Download Zip ✓✓✓ https://tiurll.com/2uCkW8



        - -Anna Nagar Western Extn. Anna Road H.o; Anna Salai; Annai Anjugam Nagar; Annai Indira Nagar; Annamalai Nagar 1fdad05405
        -
        -
        -

        diff --git a/spaces/ismot/1702t1/dataset/communal/__init__.py b/spaces/ismot/1702t1/dataset/communal/__init__.py deleted file mode 100644 index 8ea6021ad3c5c3d080e03089095aec34106e5541..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/dataset/communal/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -""" -@Date: 2021/09/22 -@description: -""" diff --git a/spaces/ivy-1911/vits-uma-genshin-honkai/text/cleaners.py b/spaces/ivy-1911/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/ivy-1911/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i -#include -#include // [[since C++14]]: std::exchange -#include -#include -#include -#include -#include -#include -#include // assert - -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/rw_lock.h" - -#include "libipc/utility/log.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" - -namespace ipc { -namespace detail { - -class queue_conn { -protected: - circ::cc_t connected_ = 0; - shm::handle elems_h_; - - template - Elems* open(char const * name) { - if (name == nullptr || name[0] == '\0') { - ipc::error("fail open waiter: name is empty!\n"); - return nullptr; - } - if (!elems_h_.acquire(name, sizeof(Elems))) { - return nullptr; - } - auto elems = static_cast(elems_h_.get()); - if (elems == nullptr) { - ipc::error("fail acquire elems: %s\n", name); - return nullptr; - } - elems->init(); - return elems; - } - - void close() { - elems_h_.release(); - } - -public: - queue_conn() = default; - queue_conn(const queue_conn&) = delete; - queue_conn& operator=(const queue_conn&) = delete; - - bool connected() const noexcept { - return connected_ != 0; - } - - circ::cc_t connected_id() const noexcept { - return connected_; - } - - template - auto connect(Elems* elems) noexcept - /*needs 'optional' here*/ - -> std::tuple().cursor())> { - if (elems == nullptr) return {}; - // if it's already connected, just return - if (connected()) return {connected(), false, 0}; - connected_ = elems->connect_receiver(); - return {connected(), true, elems->cursor()}; - } - - template - bool disconnect(Elems* elems) noexcept { - if (elems == nullptr) return false; - // if it's already disconnected, just return false - if (!connected()) return false; - elems->disconnect_receiver(std::exchange(connected_, 0)); - return true; - } -}; - -template -class queue_base : public queue_conn { - using base_t = queue_conn; - -public: - using elems_t = Elems; - using policy_t = typename elems_t::policy_t; - -protected: - elems_t * elems_ = nullptr; - decltype(std::declval().cursor()) cursor_ = 0; - bool sender_flag_ = false; - -public: - using base_t::base_t; - - queue_base() = default; - - explicit queue_base(char const * name) - : queue_base{} { - elems_ = open(name); - } - - explicit queue_base(elems_t * elems) noexcept - : queue_base{} { - assert(elems != nullptr); - elems_ = elems; - } - - /* not virtual */ ~queue_base() { - base_t::close(); - } - - elems_t * elems() noexcept { return elems_; } - elems_t const * elems() const noexcept { return elems_; } - - bool ready_sending() noexcept { - if (elems_ == nullptr) return false; - return sender_flag_ || (sender_flag_ 
= elems_->connect_sender()); - } - - void shut_sending() noexcept { - if (elems_ == nullptr) return; - if (!sender_flag_) return; - elems_->disconnect_sender(); - } - - bool connect() noexcept { - auto tp = base_t::connect(elems_); - if (std::get<0>(tp) && std::get<1>(tp)) { - cursor_ = std::get<2>(tp); - return true; - } - return std::get<0>(tp); - } - - bool disconnect() noexcept { - return base_t::disconnect(elems_); - } - - std::size_t conn_count() const noexcept { - return (elems_ == nullptr) ? static_cast(invalid_value) : elems_->conn_count(); - } - - bool valid() const noexcept { - return elems_ != nullptr; - } - - bool empty() const noexcept { - return !valid() || (cursor_ == elems_->cursor()); - } - - template - bool push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

        <P>(params)...); - }); - } - - template <typename T, typename F, typename... P> - bool force_push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->force_push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward<P>

        (params)...); - }); - } - - template - bool pop(T& item, F&& out) { - if (elems_ == nullptr) { - return false; - } - return elems_->pop(this, &(this->cursor_), [&item](void* p) { - ::new (&item) T(std::move(*static_cast(p))); - }, std::forward(out)); - } -}; - -} // namespace detail - -template -class queue final : public detail::queue_base> { - using base_t = detail::queue_base>; - -public: - using value_t = T; - - using base_t::base_t; - - template - bool push(P&&... params) { - return base_t::template push(std::forward

        <P>(params)...); - } - - template <typename... P> - bool force_push(P&&... params) { - return base_t::template force_push<T>(std::forward<P>

        (params)...); - } - - bool pop(T& item) { - return base_t::pop(item, [](bool) {}); - } - - template - bool pop(T& item, F&& out) { - return base_t::pop(item, std::forward(out)); - } -}; - -} // namespace ipc diff --git a/spaces/jasonwu92/image-search-playground/utils.py b/spaces/jasonwu92/image-search-playground/utils.py deleted file mode 100644 index a4d6eb6cb9d1d7fd562ffb1dade732d8121863be..0000000000000000000000000000000000000000 --- a/spaces/jasonwu92/image-search-playground/utils.py +++ /dev/null @@ -1,170 +0,0 @@ -from sentence_transformers import SentenceTransformer, util as st_util -from transformers import CLIPModel, CLIPProcessor - -from PIL import Image -import requests -import os -import torch -torch.set_printoptions(precision=10) -from tqdm import tqdm -import s3fs -from io import BytesIO -import vector_db - -"sentence-transformer-clip-ViT-L-14" -"openai-clip" -model_names = ["fashion"] - -model_name_to_ids = { - "sentence-transformer-clip-ViT-L-14": "clip-ViT-L-14", - "fashion": "patrickjohncyh/fashion-clip", - "openai-clip": "openai/clip-vit-base-patch32", -} - -AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"] -AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"] - -# Define your bucket and dataset name. -S3_BUCKET = "s3://disco-io" - -fs = s3fs.S3FileSystem( - key=AWS_ACCESS_KEY_ID, - secret=AWS_SECRET_ACCESS_KEY, -) - -ROOT_DATA_PATH = os.path.join(S3_BUCKET, 'data') - -def get_data_path(): - return os.path.join(ROOT_DATA_PATH, cur_dataset) - -def get_image_path(): - return os.path.join(get_data_path(), 'images') - -def get_metadata_path(): - return os.path.join(get_data_path(), 'metadata') - -def get_embeddings_path(): - return os.path.join(get_metadata_path(), cur_dataset + '_embeddings.pq') - -model_dict = dict() - - -def download_to_s3(url, s3_path): - # Download the file from the URL - response = requests.get(url, stream=True) - response.raise_for_status() - - # Upload the file to the S3 path - with fs.open(s3_path, "wb") as s3_file: - for chunk in response.iter_content(chunk_size=8192): - s3_file.write(chunk) - - -def remove_all_files_from_s3_directory(s3_directory): - # List all objects in the S3 directory - objects = fs.ls(s3_directory) - - # Remove each object - for obj in objects: - try: - fs.rm(obj) - except: - print('Error removing file: ' + obj) - -def download_images(df, img_folder): - remove_all_files_from_s3_directory(img_folder) - for index, row in df.iterrows(): - try: - download_to_s3(row['IMG_URL'], os.path.join(img_folder, - row['title'].replace('/', '_').replace('\n', '') + '.jpg')) - except: - print('Error downloading image: ' + str(index) + row['title']) - - -def load_models(): - for model_name in model_name_to_ids: - if model_name not in model_dict: - model_dict[model_name] = dict() - if model_name.startswith('sentence-transformer'): - model_dict[model_name]['model'] = SentenceTransformer(model_name_to_ids[model_name]) - else: - model_dict[model_name]['hf_dir'] = model_name_to_ids[model_name] - model_dict[model_name]['model'] = CLIPModel.from_pretrained(model_name_to_ids[model_name]) - model_dict[model_name]['processor'] = CLIPProcessor.from_pretrained(model_name_to_ids[model_name]) - - -if len(model_dict) == 0: - print('Loading models...') - load_models() - - -def get_image_embedding(model_name, image): - """ - Takes an image as input and returns an embedding vector. 
- """ - model = model_dict[model_name]['model'] - if model_name.startswith('sentence-transformer'): - return model.encode(image) - else: - inputs = model_dict[model_name]['processor'](images=image, return_tensors="pt") - image_features = model.get_image_features(**inputs).detach().numpy()[0] - return image_features - -def s3_path_to_image(fs, s3_path): - """ - Takes an S3 path as input and returns a PIL Image object. - - Args: - s3_path (str): The path to the image in the S3 bucket, including the bucket name (e.g., "bucket_name/path/to/image.jpg"). - - Returns: - Image: A PIL Image object. - """ - with fs.open(s3_path, "rb") as f: - image_data = BytesIO(f.read()) - img = Image.open(image_data) - return img - -def generate_and_save_embeddings(): - # Get image embeddings - with torch.no_grad(): - for fp in tqdm(fs.ls(get_image_path()), desc="Generate embeddings for Images"): - if fp.endswith('.jpg'): - name = fp.split('/')[-1] - for model_name in model_name_to_ids.keys(): - s3_path = 's3://' + fp - vector_db.add_image_embedding_to_db( - embedding=get_image_embedding(model_name, s3_path_to_image(fs, s3_path)), - model_name=model_name, - dataset_name=cur_dataset, - path_to_image=s3_path, - image_name=name, - ) - - -def get_immediate_subdirectories(s3_path): - return [obj.split('/')[-1] for obj in fs.glob(f"{s3_path}/*") if fs.isdir(obj)] - -all_datasets = get_immediate_subdirectories(ROOT_DATA_PATH) -cur_dataset = all_datasets[0] - -def set_cur_dataset(dataset): - refresh_all_datasets() - print(f"Setting current dataset to {dataset}") - global cur_dataset - cur_dataset = dataset - -def refresh_all_datasets(): - global all_datasets - all_datasets = get_immediate_subdirectories(ROOT_DATA_PATH) - print(f"Refreshing all datasets: {all_datasets}") - -def url_to_image(url): - try: - response = requests.get(url) - response.raise_for_status() - img = Image.open(BytesIO(response.content)) - return img - except requests.exceptions.RequestException as e: - print(f"Error fetching image from URL: {url}") - return None \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/label.tsx b/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/label.tsx deleted file mode 100644 index 534182176bf87f9308355514adc884d2b69750a5..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/label.tsx +++ /dev/null @@ -1,26 +0,0 @@ -"use client" - -import * as React from "react" -import * as LabelPrimitive from "@radix-ui/react-label" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const labelVariants = cva( - "text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70" -) - -const Label = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & - VariantProps ->(({ className, ...props }, ref) => ( - -)) -Label.displayName = LabelPrimitive.Root.displayName - -export { Label } diff --git a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/dataset/dataset.py b/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/dataset/dataset.py deleted file mode 100644 index 0e957cbe9d8f3bae57a2f711881793c4fc54a2f9..0000000000000000000000000000000000000000 --- a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/dataset/dataset.py +++ /dev/null @@ -1,42 +0,0 @@ -import numpy as np -import torch.utils.data as data - - -class DijkprofileDataset(data.Dataset): - """Pytorch custom dataset class to use with the pytorch dataloader.""" - - def __init__(self, 
profile_dict, partition, custom_scaler_path=None): - """Dijkprofile Dataset, provides profiles and labels to pytorch model. - - Args: - profile_dict (dict): dict containing the profiles and labels - partition (list): list used to split the dataset into train and test - sets. list contains ids to use for this dataset, format is - as returned by sklearn.model_selection.train_test_split - """ - self.data_dict = profile_dict - self.list_IDs = partition - - print("scaler in dataset class is depracated and moved to preprocessing") - # load scaler - # if custom_scaler_path: - # self.scaler = joblib.load(custom_scaler_path) - # else: - # self.scaler = joblib.load(os.path.join(dir, config.SCALER_PATH)) - # # rescale all profiles profiles - # for key in profile_dict.keys(): - # profile_dict[key]['profile'] = self.scaler.transform( - # profile_dict[key]['profile'].reshape(-1, 1)).reshape(-1) - # profile_dict[key]['profile'] = profile_dict[key]['profile'] / 10 - - def __len__(self): - return len(self.list_IDs) - - def __getitem__(self, index): - id = self.list_IDs[index] - X = self.data_dict[id]['profile'].reshape(1,-1).astype(np.float32) - y = self.data_dict[id]['label'].reshape(1,-1) - return X, y - - def __str__(self): - return "".format(len(self.list_IDs)) diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference 
https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/jinmao/2/modules/shared.py b/spaces/jinmao/2/modules/shared.py deleted file mode 100644 index 4046900a39b2fc7bdd8005844a92dc7d4eb669b6..0000000000000000000000000000000000000000 --- a/spaces/jinmao/2/modules/shared.py +++ /dev/null @@ -1,24 +0,0 @@ -from modules.presets import API_URL - -class State: - interrupted = False - api_url = API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def 
set_api_url(self, api_url): - self.api_url = api_url - - def reset_api_url(self): - self.api_url = API_URL - return self.api_url - - def reset_all(self): - self.interrupted = False - self.api_url = API_URL - -state = State() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/deprecation.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/deprecation.py deleted file mode 100644 index f0ed26ae98f9a71512f85ca589fd4e160033f97b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/deprecation.py +++ /dev/null @@ -1,71 +0,0 @@ -import warnings -import functools - - -class AltairDeprecationWarning(UserWarning): - pass - - -def deprecated(message=None): - """Decorator to deprecate a function or class. - - Parameters - ---------- - message : string (optional) - The deprecation message - """ - - def wrapper(obj): - return _deprecate(obj, message=message) - - return wrapper - - -def _deprecate(obj, name=None, message=None): - """Return a version of a class or function that raises a deprecation warning. - - Parameters - ---------- - obj : class or function - The object to create a deprecated version of. - name : string (optional) - The name of the deprecated object - message : string (optional) - The deprecation message - - Returns - ------- - deprecated_obj : - The deprecated version of obj - - Examples - -------- - >>> class Foo: pass - >>> OldFoo = _deprecate(Foo, "OldFoo") - >>> f = OldFoo() # doctest: +SKIP - AltairDeprecationWarning: alt.OldFoo is deprecated. Use alt.Foo instead. - """ - if message is None: - message = "alt.{} is deprecated. Use alt.{} instead." "".format( - name, obj.__name__ - ) - if isinstance(obj, type): - return type( - name, - (obj,), - { - "__doc__": obj.__doc__, - "__init__": _deprecate(obj.__init__, "__init__", message), - }, - ) - elif callable(obj): - - @functools.wraps(obj) - def new_obj(*args, **kwargs): - warnings.warn(message, AltairDeprecationWarning, stacklevel=1) - return obj(*args, **kwargs) - - new_obj._deprecated = True - return new_obj - else: - raise ValueError("Cannot deprecate object of type {}".format(type(obj))) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_lxml.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_lxml.py deleted file mode 100644 index 5065b6fc835e019fcb7cd4e60a3a36d267f34235..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_lxml.py +++ /dev/null @@ -1,203 +0,0 @@ -"""Tests to ensure that the lxml tree builder generates good trees.""" - -import pickle -import pytest -import re -import warnings -from . import LXML_PRESENT, LXML_VERSION - -if LXML_PRESENT: - from bs4.builder import LXMLTreeBuilder, LXMLTreeBuilderForXML - -from bs4 import ( - BeautifulSoup, - BeautifulStoneSoup, - ) -from bs4.element import Comment, Doctype, SoupStrainer -from . import ( - HTMLTreeBuilderSmokeTest, - XMLTreeBuilderSmokeTest, - SOUP_SIEVE_PRESENT, - SoupTest, -) - -@pytest.mark.skipif( - not LXML_PRESENT, - reason="lxml seems not to be present, not testing its tree builder." -) -class TestLXMLTreeBuilder(SoupTest, HTMLTreeBuilderSmokeTest): - """See ``HTMLTreeBuilderSmokeTest``.""" - - @property - def default_builder(self): - return LXMLTreeBuilder - - def test_out_of_range_entity(self): - self.assert_soup( - "

        foo�bar

        ", "

        foobar

        ") - self.assert_soup( - "

        foo�bar

        ", "

        foobar

        ") - self.assert_soup( - "

        foo�bar

        ", "

        foobar

        ") - - def test_entities_in_foreign_document_encoding(self): - # We can't implement this case correctly because by the time we - # hear about markup like "“", it's been (incorrectly) converted into - # a string like u'\x93' - pass - - # In lxml < 2.3.5, an empty doctype causes a segfault. Skip this - # test if an old version of lxml is installed. - - @pytest.mark.skipif( - not LXML_PRESENT or LXML_VERSION < (2,3,5,0), - reason="Skipping doctype test for old version of lxml to avoid segfault." - ) - def test_empty_doctype(self): - soup = self.soup("") - doctype = soup.contents[0] - assert "" == doctype.strip() - - def test_beautifulstonesoup_is_xml_parser(self): - # Make sure that the deprecated BSS class uses an xml builder - # if one is installed. - with warnings.catch_warnings(record=True) as w: - soup = BeautifulStoneSoup("") - assert "" == str(soup.b) - [warning] = w - assert warning.filename == __file__ - assert "BeautifulStoneSoup class is deprecated" in str(warning.message) - - def test_tracking_line_numbers(self): - # The lxml TreeBuilder cannot keep track of line numbers from - # the original markup. Even if you ask for line numbers, we - # don't have 'em. - # - # This means that if you have a tag like or - # , attribute access will find it rather than - # giving you a numeric answer. - soup = self.soup( - "\n

        \n\n\ntext

        ", - store_line_numbers=True - ) - assert "sourceline" == soup.p.sourceline.name - assert "sourcepos" == soup.p.sourcepos.name - -@pytest.mark.skipif( - not LXML_PRESENT, - reason="lxml seems not to be present, not testing its XML tree builder." -) -class TestLXMLXMLTreeBuilder(SoupTest, XMLTreeBuilderSmokeTest): - """See ``HTMLTreeBuilderSmokeTest``.""" - - @property - def default_builder(self): - return LXMLTreeBuilderForXML - - def test_namespace_indexing(self): - soup = self.soup( - '\n' - '' - 'content' - 'content' - '' - '' - '' - '' - '' - ) - - # The BeautifulSoup object includes every namespace prefix - # defined in the entire document. This is the default set of - # namespaces used by soupsieve. - # - # Un-prefixed namespaces are not included, and if a given - # prefix is defined twice, only the first prefix encountered - # in the document shows up here. - assert soup._namespaces == { - 'xml': 'http://www.w3.org/XML/1998/namespace', - 'prefix': 'http://prefixed-namespace.com', - 'prefix2': 'http://another-namespace.com' - } - - # A Tag object includes only the namespace prefixes - # that were in scope when it was parsed. - - # We do not track un-prefixed namespaces as we can only hold - # one (the first one), and it will be recognized as the - # default namespace by soupsieve, even when operating from a - # tag with a different un-prefixed namespace. - assert soup.tag._namespaces == { - 'xml': 'http://www.w3.org/XML/1998/namespace', - } - - assert soup.tag2._namespaces == { - 'prefix': 'http://prefixed-namespace.com', - 'xml': 'http://www.w3.org/XML/1998/namespace', - } - - assert soup.subtag._namespaces == { - 'prefix2': 'http://another-namespace.com', - 'xml': 'http://www.w3.org/XML/1998/namespace', - } - - assert soup.subsubtag._namespaces == { - 'prefix2': 'http://another-namespace.com', - 'xml': 'http://www.w3.org/XML/1998/namespace', - } - - - @pytest.mark.skipif( - not SOUP_SIEVE_PRESENT, reason="Soup Sieve not installed" - ) - def test_namespace_interaction_with_select_and_find(self): - # Demonstrate how namespaces interact with select* and - # find* methods. - - soup = self.soup( - '\n' - '' - 'content' - 'content' - '' - '' - '' - '' - ) - - # soupselect uses namespace URIs. - assert soup.select_one('tag').name == 'tag' - assert soup.select_one('prefix|tag2').name == 'tag2' - - # If a prefix is declared more than once, only the first usage - # is registered with the BeautifulSoup object. - assert soup.select_one('prefix|tag3') is None - - # But you can always explicitly specify a namespace dictionary. - assert soup.select_one( - 'prefix|tag3', namespaces=soup.subtag._namespaces - ).name == 'tag3' - - # And a Tag (as opposed to the BeautifulSoup object) will - # have a set of default namespaces scoped to that Tag. - assert soup.subtag.select_one('prefix|tag3').name=='tag3' - - # the find() methods aren't fully namespace-aware; they just - # look at prefixes. - assert soup.find('tag').name == 'tag' - assert soup.find('prefix:tag2').name == 'tag2' - assert soup.find('prefix:tag3').name == 'tag3' - assert soup.subtag.find('prefix:tag3').name == 'tag3' - - def test_pickle_restores_builder(self): - # The lxml TreeBuilder is not picklable, so when unpickling - # a document created with it, a new TreeBuilder of the - # appropriate class is created. 
- soup = self.soup("some markup") - assert isinstance(soup.builder, self.default_builder) - pickled = pickle.dumps(soup) - unpickled = pickle.loads(pickled) - - assert "some markup" == unpickled.a.string - assert unpickled.builder != soup.builder - assert isinstance(unpickled.builder, self.default_builder) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/dnskeybase.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/dnskeybase.py deleted file mode 100644 index 3bfcf860d44fd4bb8a3d1bf866e021d100ec8f72..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/dnskeybase.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -import base64 -import enum -import struct - -import dns.dnssectypes -import dns.exception -import dns.immutable -import dns.rdata - -# wildcard import -__all__ = ["SEP", "REVOKE", "ZONE"] # noqa: F822 - - -class Flag(enum.IntFlag): - SEP = 0x0001 - REVOKE = 0x0080 - ZONE = 0x0100 - - -@dns.immutable.immutable -class DNSKEYBase(dns.rdata.Rdata): - - """Base class for rdata that is like a DNSKEY record""" - - __slots__ = ["flags", "protocol", "algorithm", "key"] - - def __init__(self, rdclass, rdtype, flags, protocol, algorithm, key): - super().__init__(rdclass, rdtype) - self.flags = Flag(self._as_uint16(flags)) - self.protocol = self._as_uint8(protocol) - self.algorithm = dns.dnssectypes.Algorithm.make(algorithm) - self.key = self._as_bytes(key) - - def to_text(self, origin=None, relativize=True, **kw): - return "%d %d %d %s" % ( - self.flags, - self.protocol, - self.algorithm, - dns.rdata._base64ify(self.key, **kw), - ) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - flags = tok.get_uint16() - protocol = tok.get_uint8() - algorithm = tok.get_string() - b64 = tok.concatenate_remaining_identifiers().encode() - key = base64.b64decode(b64) - return cls(rdclass, rdtype, flags, protocol, algorithm, key) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - header = struct.pack("!HBB", self.flags, self.protocol, self.algorithm) - file.write(header) - file.write(self.key) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - header = parser.get_struct("!HBB") - key = parser.get_remaining() - return cls(rdclass, rdtype, header[0], header[1], header[2], key) - - -### BEGIN generated Flag constants - -SEP = Flag.SEP -REVOKE = Flag.REVOKE -ZONE = Flag.ZONE - -### END generated Flag constants diff --git 
a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cliTools.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cliTools.py deleted file mode 100644 index 8322ea9ebb7cd1dd907829a985b9833058bc54c1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cliTools.py +++ /dev/null @@ -1,52 +0,0 @@ -"""Collection of utilities for command-line interfaces and console scripts.""" -import os -import re - - -numberAddedRE = re.compile(r"#\d+$") - - -def makeOutputFileName( - input, outputDir=None, extension=None, overWrite=False, suffix="" -): - """Generates a suitable file name for writing output. - - Often tools will want to take a file, do some kind of transformation to it, - and write it out again. This function determines an appropriate name for the - output file, through one or more of the following steps: - - - changing the output directory - - appending suffix before file extension - - replacing the file extension - - suffixing the filename with a number (``#1``, ``#2``, etc.) to avoid - overwriting an existing file. - - Args: - input: Name of input file. - outputDir: Optionally, a new directory to write the file into. - suffix: Optionally, a string suffix is appended to file name before - the extension. - extension: Optionally, a replacement for the current file extension. - overWrite: Overwriting an existing file is permitted if true; if false - and the proposed filename exists, a new name will be generated by - adding an appropriate number suffix. - - Returns: - str: Suitable output filename - """ - dirName, fileName = os.path.split(input) - fileName, ext = os.path.splitext(fileName) - if outputDir: - dirName = outputDir - fileName = numberAddedRE.split(fileName)[0] - if extension is None: - extension = os.path.splitext(input)[1] - output = os.path.join(dirName, fileName + suffix + extension) - n = 1 - if not overWrite: - while os.path.exists(output): - output = os.path.join( - dirName, fileName + suffix + "#" + repr(n) + extension - ) - n += 1 - return output diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cython.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cython.py deleted file mode 100644 index 2a42d94a3591e0e8e47f184b303e4aec0a6337ef..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/cython.py +++ /dev/null @@ -1,27 +0,0 @@ -""" Exports a no-op 'cython' namespace similar to -https://github.com/cython/cython/blob/master/Cython/Shadow.py - -This allows to optionally compile @cython decorated functions -(when cython is available at built time), or run the same code -as pure-python, without runtime dependency on cython module. - -We only define the symbols that we use. E.g. 
see fontTools.cu2qu -""" - -from types import SimpleNamespace - - -def _empty_decorator(x): - return x - - -compiled = False - -for name in ("double", "complex", "int"): - globals()[name] = None - -for name in ("cfunc", "inline"): - globals()[name] = _empty_decorator - -locals = lambda **_: _empty_decorator -returns = lambda _: _empty_decorator diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/data/audio.py b/spaces/jordonpeter01/MusicGen/audiocraft/data/audio.py deleted file mode 100644 index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/audiocraft/data/audio.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. -""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. 
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. - wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. - wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. 
RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'. - when strategy is 'loudness'log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. - """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. 
- path.unlink() - raise - return path diff --git a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/__init__.py b/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/__init__.py deleted file mode 100644 index 4a9dfd7a60f3c7be63a2109e525e04672a70d069..0000000000000000000000000000000000000000 --- a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/multitask_transformer/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .dataloader import * -from .model import * -from .learner import * \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/autogpt/memory/base.py b/spaces/kcagle/AutoGPT/autogpt/memory/base.py deleted file mode 100644 index 691e2299c4caa5c2e9af5b2436727834f3cc6c67..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/memory/base.py +++ /dev/null @@ -1,43 +0,0 @@ -"""Base class for memory providers.""" -import abc - -import openai - -from autogpt.config import AbstractSingleton, Config - -cfg = Config() - - -def get_ada_embedding(text): - text = text.replace("\n", " ") - if cfg.use_azure: - return openai.Embedding.create( - input=[text], - engine=cfg.get_azure_deployment_id_for_model("text-embedding-ada-002"), - )["data"][0]["embedding"] - else: - return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[ - "data" - ][0]["embedding"] - - -class MemoryProviderSingleton(AbstractSingleton): - @abc.abstractmethod - def add(self, data): - pass - - @abc.abstractmethod - def get(self, data): - pass - - @abc.abstractmethod - def clear(self): - pass - - @abc.abstractmethod - def get_relevant(self, data, num_relevant=5): - pass - - @abc.abstractmethod - def get_stats(self): - pass diff --git a/spaces/kcagle/AutoGPT/scripts/check_requirements.py b/spaces/kcagle/AutoGPT/scripts/check_requirements.py deleted file mode 100644 index e4eab024a6280c0d54110c69b2e03de639325fa6..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/scripts/check_requirements.py +++ /dev/null @@ -1,32 +0,0 @@ -import sys - -import pkg_resources - - -def main(): - requirements_file = sys.argv[1] - with open(requirements_file, "r") as f: - required_packages = [ - line.strip().split("#")[0].strip() for line in f.readlines() - ] - - installed_packages = [package.key for package in pkg_resources.working_set] - - missing_packages = [] - for package in required_packages: - if not package: # Skip empty lines - continue - package_name = package.strip().split("==")[0] - if package_name.lower() not in installed_packages: - missing_packages.append(package_name) - - if missing_packages: - print("Missing packages:") - print(", ".join(missing_packages)) - sys.exit(1) - else: - print("All packages are installed.") - - -if __name__ == "__main__": - main() diff --git a/spaces/kcagle/AutoGPT/tests/local_cache_test.py b/spaces/kcagle/AutoGPT/tests/local_cache_test.py deleted file mode 100644 index bb10862656bb500f319ac231ff5bd5438d6fe7e2..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/tests/local_cache_test.py +++ /dev/null @@ -1,67 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for LocalCache class""" -import os -import sys -import unittest - -import pytest - -from autogpt.memory.local import LocalCache - - -def mock_config() -> dict: - """Mock the Config class""" - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "memory_index": "auto-gpt", - }, - ) - - 
-@pytest.mark.integration_test -class TestLocalCache(unittest.TestCase): - """Tests for LocalCache class""" - - def setUp(self) -> None: - """Set up the test environment""" - self.cfg = mock_config() - self.cache = LocalCache(self.cfg) - - def test_add(self) -> None: - """Test adding a text to the cache""" - text = "Sample text" - self.cache.add(text) - self.assertIn(text, self.cache.data.texts) - - def test_clear(self) -> None: - """Test clearing the cache""" - self.cache.clear() - self.assertEqual(self.cache.data.texts, []) - - def test_get(self) -> None: - """Test getting a text from the cache""" - text = "Sample text" - self.cache.add(text) - result = self.cache.get(text) - self.assertEqual(result, [text]) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache""" - text1 = "Sample text 1" - text2 = "Sample text 2" - self.cache.add(text1) - self.cache.add(text2) - result = self.cache.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self) -> None: - """Test getting the cache stats""" - text = "Sample text" - self.cache.add(text) - stats = self.cache.get_stats() - self.assertEqual(stats, (4, self.cache.data.embeddings.shape)) diff --git "a/spaces/kenttate937/pelisplusss/avatar 2PEL\303\215CULA COMPLETA +ver en espa\303\261ol y latino MEGA.md" "b/spaces/kenttate937/pelisplusss/avatar 2PEL\303\215CULA COMPLETA +ver en espa\303\261ol y latino MEGA.md" deleted file mode 100644 index 24edec05bddfd3a26c3726c2d5af7b11c355b439..0000000000000000000000000000000000000000 --- "a/spaces/kenttate937/pelisplusss/avatar 2PEL\303\215CULA COMPLETA +ver en espa\303\261ol y latino MEGA.md" +++ /dev/null @@ -1,145 +0,0 @@ -Cuevana3 Ver Película avatar 2 (2023) Online Gratis | Disfruta de la Película Completa de avatar 2 en HD con Audio Español y Latino Subtitulado.avatar 2 (2023) película completa: ¿dónde ver la película en español?Ahora si, después de una breve reseña sobre avatar 2 la película, te voy dejar algunas opciones para verla de manera online.¿Dónde se puede ver avatar 2 en español online? - -Ver la película! películas y series de tv online gratis” - -●● Disponible para descargar, (Fast Flick 2023) 720p, 1080p, BrRip, DvdRip, Youtube, Reddit, multilingüe y de alta calidad ●● - -Ver Ahora ►► https://123mopie.xtzy.men/es/movie/76600 - -Clic aqui ►► https://123mopie.xtzy.men/es/movie/76600 - -Ver avatar 2 (2023) online y sur gratis en HD, es fácil en gracias a sus servidores, rapidos y sin ads. ¿Cómo ver avatar 2 (2023) película completa en - -Ver Películas avatar 2 (2023) Online Gratis en Español, Latino, Castellano y Subtitulado sin registrarse. Ver estrenos de películas y también las mejores películas en HD - -Ver avatar 2 (2023) película completa GRATIS en Español o con subtítulos en tu idioma, en HD y hasta en calidad 2023 HD con Audio Español Latino y Subtitulado. - -avatar 2 (2023) ; fecha de estreno, tráiler y dónde ver en España y Latinoamérica - -Te contamos todo sobre avatar 2 (2023) , cuándo se estrena en cines, tráiler y dónde podrás verla en España y Latinoamérica. - -avatar 2 (2023) de esta Película Desde 2023 se estrenó oficialmente en Hispanoamérica y España, esta Película es muy interesante y puede acompañarte a relajarte un poco en casa,voy a flashback de esta Película. - -En éste artículo vamos a saber dónde ver Películas online gratis en HD sin cortes, y asídisfrutar en familia, solos o con amigos de las mejores Películas de la actualidad como también de la historia del cine. 
Conoce las mejores páginas para ver Películas gratis. - -Ver avatar 2 (2023) la Película en español línea Online y Gratis - -Ver películas avatar 2 (2023) completas gratis en español es posible y de forma legal. Hay opciones alternativas a Netflix y Amazon que no requieren ningún tipo de pago ni suscripción y cuyo contenido es totalmente gratuito. Estas plataformas para ver cine avatar 2(2023) gratis en casa pueden ofrecer contenido sin costo gracias a cortes comerciales o bien porque tienen películas de dominio público. - -Ver avatar 2 (2023) Película completa en español Latino Subtitulado - -En nuestro sitio proporcionamos subtítulos y dabbing en latín, no tenga miedo por México,Chile, Perú, Bolivia, Uruguay, Paraguay, España, Argentina, Colombia y todas las regiones de habla latina, hemos proporcionado idiomas para sus Halloween Killsivas regiones. .Para disfrutar de todas estas funciones, puede registrarse y seguir en su cuenta premium. - -imaxstream es su recurso de entretenimiento número uno, Únete a cientos de miles demiembros satisfechos y disfruta de las mejores películas Está AQUÍ y es GRATIS. - -¿Dónde una serie de películas de acción, terror, aventuras, telenovelas mexicanas y turcas, drama, anime y muchas más, como las últimas noticias: Narcos: México, The Sinner 2 y Lareina del flow. Incluso le diremos qué películas se exhiben en los cines de Perú, México,España, Estados Unidos, Colombia, Argentina, Chile y otros países del mundo. - -Sinopsis de la película completa “avatar 2” - -¿Dónde ver avatar 2 online? - -A continuación te detallamos todo lo que debes saber para ver la mejore de la película ‘avatar 2’ cuando quieras, donde quieras y con quien quieras. - -Incluso aprenderás a ver películas gratis online de forma absolutamente legal y segura, este sin necesidad de pagar mensualmente una suscripción a servicios premium de streaming la película avatar 2 como Netflix, HBO Max, Amazon Prime Video, Hulu, Fox Premium, Movistar Play, Disney+, Crackle o Blim, o de bajar apps de Google Play o App Store que no te ayudarán mucho a satisfacer esa sed cinéfila. ¿No te es suficiente? ¿Quieres más trucos? También te enseñaremos a usar los sitios premium por avatar 2 película completa, sin pagar absolutamente nada. Incluso te contaremos qué películas están en la cartelera de los cines del Chile, Perú, México, España, Estados Unidos, Colombia, Argentina, Ecuador y demás países del mundo. - -avatar 2 (2023) Online: ¿Cómo Ver la Película completa en Español Gratis? - -Esto es cómo ver avatar 2 online, la película completa en español y subtítulos. - -Conoce cómo y dónde ver avatar 2 por Internet, la película completa en español latino y castellano o subtitulado, ver películas online y gratis en alta calidad. - -Existen dos grandes problemas a la hora de ver película avatar 2 gratis en internet: Los continuos parones en la reproducción de la película y la calidad en la que se reproduce. - -Seguramnte en más de una ocasión has buscado en Google “¿cómo puedo ver avatar 2 la película completa en español?” o “¿dónde puedo ver avatar 2 la película completa?”. - -No lo niegues. No eres el único. Todos los días, millones de personas intentan ver Película online desde sus computadoras, laptops, smartphones, tablets o cual sea el dispositivo móvil de su preferencia. 
- -Sin embargo, la navegación muchas veces termina en páginas web que no cumplen lo prometido, que aseguran tener los últimos estrenos, pero que solo te derivan de un site a otro, que te obligan a dar clic tras clic mientras te llenan la pantalla de publicidad, para finalmente dirigirte hasta un enlace que no funciona o que demora mucho en cargar. - -Esto hace que sea imposible disfrutar de verdad de una tarde/noche de películas. Además existe una ley no escrita y es que este tipo de cosas suelen ocurrir los mejores momentos de la película y acaba frustrando. - -Que esto ocurra se debe a muchos factores como: la conexión a Internet, la página desde la que estés viendo la película gratis o la calidad de reproducción elegida. - -Todos estos problemas se pueden solucionar, salvo la velocidad de tu internet, por ello en este aqui encontrarás solo páginas para ver películas en Internet gratis en castellano y sin cortes de gran calidad dónde estás problemas no existen o son muy poco comunes. - -Por supuesto esta página están libres de virus y el listado se actualiza conforme a las nuevas páginas que van apareciendo y aquellas que se van cerrando. - -De las páginas más conocidas, cabe duda de que cumple su objetivo a la perfección ¡Ver película avatar 2 online sin registrase en español y otros idiomas! - -Se trata de una página muy bien distribuida en la que puedes encontrar casi cualquier películas completas online, sin publicidad y en calidad Full HD y 4K. - -Algunas de las cosas más interesantes de esta página son: - -Las películas están ordenadas por género y por año lo que hace que sea muy fácil de usar. - -Puedes ver la película avatar 2 en formatos de calidad como Full HD. y sin publicidad. - -Posibilidad de ver la película avatar 2 online en español latino y castellano u otros idiomas. Esto depende de los idiomas disponibles y el gusto del espectador. - -¿Cómo puedes ver las películas de Batman en YouTube? - -Puedes suscribirte al servicio de paga de YouTube para acceder a contenido exclusivo que jamás has imaginado. Los tres primeros meses son gratis. - -YouTube es una de las páginas de curaduría de clásicos más populares en la red. El sitio está dedicado por completo a la distribución de películas de libre acceso, liberadas de derechos de autor. - -Por ejemplo, su catálogo de cine mudo es excepcional. ¿Lo mejor de todo? Puedes ver las películas 'Batman' desde YouTube, por lo que navegar es sencillísimo. - -Páginas Para Ver la Película Completa de avatar 2 Online en Español y Latino de Forma Legal y Gratis - -¿Páginas para ver película avatar 2 gratis? ¿Ver película avatar 2 online gratis en HD sin cortes? ¿Ver película avatar 2 online gratis en español? - -¡VER AQUI! - -Si eres de las personas a las que les encanta pasar los domingos de películas y manta, este artículo te interesa y mucho. - -Aquí podrás encontrar un definitivo con las mejores páginas para ver películas online gratis en español y latino. - -¡No pierdas más tiempo y conoce cómo ver cine online desde casa! - -Es una página para ver la película “avatar 2” gratis, pero este tipo de páginas abren y cierran continuamente debido a los derechos de autor. Por este motivo, cada vez es más difícil ver películas gratis en Internet. - -¡No te desesperes! con este aqui podrás encontrar las mejores páginas para ver película “avatar 2” online en castellano sin cortes y en buena calidad. - -Si quieres ver películas gratis y series online en español y latino solo debes de páginas web como Cuevana, ponerte al día. 
Y no necesitas una cuenta en de Netflix, HBO Max, Amazon Prime, Disney+, y otros para ver películas. - -Ver la película avatar 2 online gratis en español y latino | Gracias a Internet es posible ver pelis avatar 2 gratis online en español y también subtitulos latino sin necesidad de pagar una cuenta de premium como Netflix, HBO Max, Amazon Prime Video o Hulu. - -Si eres de las personas que busca en Google términos como "páginas para ver pelis online", "estrenos español online", "películas online en español", "películas gratis online", "ver pelis online", entre otros keywords, seguramente has sido llevado a páginas web de dudosa procedencia o que te obligan a registrarte con alguna cuenta en redes sociales. - -Si te hartaste de eso, a continuación podrás ver las mejores películas gratis online para disfrutar sin problemas, sin interrupciones y sin publicidad para convertir tu casa en un cine. - -Esta páginas para ver avatar 2 online sin publicidad y sin cortes, así que presta atención y apunta, que la buena experiencia cinéfila -o seriéfila- está plenamente garantizada en estos websites. - -Si no tienes los códigos de Netflix a la mano o tu conexión no te permite descargar películas gratis en Mega HD, conoce cómo ver películas de acción, terror, comedias, clásicos y hasta teen movies de la forma más fácil con solo unos clics. Hasta pelis de estreno puedes encontrar en español. - -Páginas web para ver película avatar 2 gratis son de fácil acceso. eso sí, solo necesitas crear una cuenta para ver y descargar de películas, la mayoría de estas páginas web para ver películas gratis son de fácil acceso y no es necesario el registro. Eso sí, algunas incluyen publicidad antes de la reproducción del título elegido, aunque esta es casi imperceptible. - -“Cuevana” es una plataforma donde puedes ver películas de manera gratuita sin publicidad y legal con un amplio catálogo de películas, donde el usuario puede filtrar los filmes por el género, es decir, Romance, Acción, Comedia, Drama, Horror, Aventura, Animación, Animes, Superhéroes. Cómic. DC Comics, Marvel, Disney, entre otros. - -Todas las películas son de alta calidad, incluye una sólida colección de programas de televisión, Para acceder a ellas gratis solo necesitas crear una cuenta. Esta página es gratuita y libre de anuncios. Además, ofrece artículos sobre estrenos independientes y comerciales. - -Somos una distribuidora que se destaca por sus innovadoras campañas de marketing y un eficiente portafolio de adquisiciones, esto nos ha permitido convertirnos en el distribuidor independiente número 1 de nuestros territorios. Actualmente estamos presentes en Chile, Mexico, Guatemala, Honduras, El Salvador, Nicaragua, Costa Rica, Panama, Colombia, Venezuela, Ecuador, Peru, Bolivia, Brazil, Paraguay, Argentina, Uruguay, Cuba, Haiti, the Dominican Republic, Puerto Rico. - -Espero que te haya servido éste artículo y puedas disfrutar de linda películas cómo avatar 2 completas. 
- -Puedes buscarlo en google: - -Dónde ver avatar 2 - -Cómo ver avatar 2 - -Cómo ver avatar 2 en español - -Dónde ver avatar 2 en español - -avatar 2 película completa - -avatar 2 película completa gratis - -avatar 2 película completa online - -avatar 2 película completa online gratis - -avatar 2 pelicula completa en español - -avatar 2 pelicula completa en español latino \ No newline at end of file diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Easychat.py b/spaces/kepl/gpt/g4f/Provider/Providers/Easychat.py deleted file mode 100644 index eb740da991eb8f740489f6bc76a1ad55f006663b..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/Easychat.py +++ /dev/null @@ -1,55 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://free.easychat.work' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'authority': 'free.easychat.work', - 'accept': 'text/event-stream', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'content-type': 'application/json', - 'endpoint': '', - 'origin': 'https://free.easychat.work', - 'plugins': '0', - 'referer': 'https://free.easychat.work/', - 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'usesearch': 'false', - 'x-requested-with': 'XMLHttpRequest', - } - - json_data = { - 'messages': messages, - 'stream': True, - 'model': model, - 'temperature': 0.5, - 'presence_penalty': 0, - 'frequency_penalty': 0, - 'top_p': 1, - } - - response = requests.post('https://free.easychat.work/api/openai/v1/chat/completions', - headers=headers, json=json_data) - - for chunk in response.iter_lines(): - if b'content' in chunk: - data = json.loads(chunk.decode().split('data: ')[1]) - yield (data['choices'][0]['delta']['content']) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/kevinwang676/ControlNet-with-GPT-4/app_shuffle.py b/spaces/kevinwang676/ControlNet-with-GPT-4/app_shuffle.py deleted file mode 100644 index 950d80832b76949412a22742a75e7b063f058d3a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ControlNet-with-GPT-4/app_shuffle.py +++ /dev/null @@ -1,91 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from settings import ( - DEFAULT_IMAGE_RESOLUTION, - DEFAULT_NUM_IMAGES, - MAX_IMAGE_RESOLUTION, - MAX_NUM_IMAGES, - MAX_SEED, -) -from utils import randomize_seed_fn - - -def create_demo(process): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button("Run") - with gr.Accordion("Advanced options", open=False): - preprocessor_name = gr.Radio( - label="Preprocessor", choices=["ContentShuffle", "None"], type="value", value="ContentShuffle" - ) - num_samples = gr.Slider( - 
label="Number of images", minimum=1, maximum=MAX_NUM_IMAGES, value=DEFAULT_NUM_IMAGES, step=1 - ) - image_resolution = gr.Slider( - label="Image resolution", - minimum=256, - maximum=MAX_IMAGE_RESOLUTION, - value=DEFAULT_IMAGE_RESOLUTION, - step=256, - ) - num_steps = gr.Slider(label="Number of steps", minimum=1, maximum=100, value=20, step=1) - guidance_scale = gr.Slider(label="Guidance scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - a_prompt = gr.Textbox(label="Additional prompt", value="best quality, extremely detailed") - n_prompt = gr.Textbox( - label="Negative prompt", - value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - ) - with gr.Column(): - result = gr.Gallery(label="Output", show_label=False, columns=2, object_fit="scale-down") - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name="content-shuffle", - ) - return demo - - -if __name__ == "__main__": - from model import Model - - model = Model(task_name="shuffle") - demo = create_demo(model.process_shuffle) - demo.queue().launch() diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/install.md b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/install.md deleted file mode 100644 index 6314a40441285e9236438e468caf8b71a407531a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/install.md +++ /dev/null @@ -1,51 +0,0 @@ -## v1.8.0 -### Linux and Windows -```shell -# CUDA 11.0 -pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 - -# CPU only -pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -``` - - -## v1.7.1 -### Linux and Windows -```shell -# CUDA 11.0 -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 - -# CUDA 10.1 -pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -``` - - -## v1.6.0 - -### Linux and Windows -```shell -# CUDA 10.2 -pip install torch==1.6.0 torchvision==0.7.0 - -# CUDA 10.1 -pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html - 
-# CUDA 9.2 -pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -``` \ No newline at end of file diff --git a/spaces/kevinwang676/VITS2-Mandarin/modules.py b/spaces/kevinwang676/VITS2-Mandarin/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py deleted file mode 100644 index b8a00d6305eeda5a94788017afc1cda0d4a4cd2a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/ms1mv3_mbf.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 2e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 20, 25] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/kevinwang676/vits-fast-finetuning-pcr/utils.py b/spaces/kevinwang676/vits-fast-finetuning-pcr/utils.py deleted file mode 100644 index a91f9eb2df9f2b097431432753212eb440f93020..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/vits-fast-finetuning-pcr/utils.py +++ /dev/null @@ -1,399 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch -import regex as re - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - - -zh_pattern = re.compile(r'[\u4e00-\u9fa5]') -en_pattern = re.compile(r'[a-zA-Z]') -jp_pattern = re.compile(r'[\u3040-\u30ff\u31f0-\u31ff]') -kr_pattern = re.compile(r'[\uac00-\ud7af\u1100-\u11ff\u3130-\u318f\ua960-\ua97f]') -num_pattern=re.compile(r'[0-9]') -comma=r"(?<=[.。!!??;;,,、::'\"‘“”’()()《》「」~——])" #向前匹配但固定长度 -tags={'ZH':'[ZH]','EN':'[EN]','JP':'[JA]','KR':'[KR]'} - -def tag_cjke(text): - '''为中英日韩加tag,中日正则分不开,故先分句分离中日再识别,以应对大部分情况''' - sentences = re.split(r"([.。!!??;;,,、::'\"‘“”’()()【】《》「」~——]+ *(?![0-9]))", text) #分句,排除小数点 - sentences.append("") - sentences = ["".join(i) for i in zip(sentences[0::2],sentences[1::2])] - # print(sentences) - prev_lang=None - tagged_text = "" - for s in sentences: - #全为符号跳过 - nu = re.sub(r'[\s\p{P}]+', '', s, flags=re.U).strip() - if len(nu)==0: - continue - s = re.sub(r'[()()《》「」【】‘“”’]+', '', s) - jp=re.findall(jp_pattern, s) - #本句含日语字符判断为日语 - if len(jp)>0: - prev_lang,tagged_jke=tag_jke(s,prev_lang) - tagged_text +=tagged_jke - else: - prev_lang,tagged_cke=tag_cke(s,prev_lang) - tagged_text +=tagged_cke - return tagged_text - -def tag_jke(text,prev_sentence=None): - '''为英日韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - tagged=0 - # 遍历文本 - for char in 
text: - # 判断当前字符属于哪种语言 - if jp_pattern.match(char): - lang = "JP" - elif zh_pattern.match(char): - lang = "JP" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - lang = None - tagged_text += char - continue - # 如果当前语言与上一个语言不同,就添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - if not tagged: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - - return prev_lang,tagged_text - -def tag_cke(text,prev_sentence=None): - '''为中英韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - # 是否全略过未标签 - tagged=0 - - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if zh_pattern.match(char): - lang = "ZH" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - # 略过 - lang = None - tagged_text += char - continue - - # 如果当前语言与上一个语言不同,添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - # 未标签则继承上一句标签 - if tagged==0: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - return prev_lang,tagged_text - - -def load_checkpoint(checkpoint_path, model, optimizer=None, drop_speaker_emb=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - if k == 'emb_g.weight': - if drop_speaker_emb: - new_state_dict[k] = v - continue - v[:saved_state_dict[k].shape[0], :] = saved_state_dict[k] - new_state_dict[k] = v - else: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict() if optimizer is not None else None, - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, 
global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/modified_finetune_speaker.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="pretrained_models", - help='Model name') - parser.add_argument('-n', '--max_epochs', type=int, default=50, - help='finetune epochs') - parser.add_argument('--drop_speaker_embed', type=bool, default=False, help='whether to drop existing characters') - - args = parser.parse_args() - model_dir = os.path.join("./", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.max_epochs = args.max_epochs - hparams.drop_speaker_embed = args.drop_speaker_embed - return hparams - - -def get_hparams_from_dir(model_dir): - 
config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/kingli999/riffusion-riffusion-model-v12/app.py b/spaces/kingli999/riffusion-riffusion-model-v12/app.py deleted file mode 100644 index 5316590a67da22f64b3eb27b22d30c601ac3e704..0000000000000000000000000000000000000000 --- a/spaces/kingli999/riffusion-riffusion-model-v12/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/riffusion/riffusion-model-v1").launch() \ No newline at end of file diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/hifigan/inference.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/hifigan/inference.py deleted file mode 100644 index 8caf3485226d259cb2179780d09fbf71fc2d356f..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/hifigan/inference.py +++ /dev/null @@ -1,74 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals - -import os -import json -import torch -from utils.util import AttrDict -from vocoder.hifigan.models import Generator - -generator = None # type: Generator -output_sample_rate = None -_device = None - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def load_model(weights_fpath, config_fpath=None, 
verbose=True): - global generator, _device, output_sample_rate - - if verbose: - print("Building hifigan") - - if config_fpath == None: - model_config_fpaths = list(weights_fpath.parent.rglob("*.json")) - if len(model_config_fpaths) > 0: - config_fpath = model_config_fpaths[0] - else: - config_fpath = "./vocoder/hifigan/config_16k_.json" - with open(config_fpath) as f: - data = f.read() - json_config = json.loads(data) - h = AttrDict(json_config) - output_sample_rate = h.sampling_rate - torch.manual_seed(h.seed) - - if torch.cuda.is_available(): - # _model = _model.cuda() - _device = torch.device('cuda') - else: - _device = torch.device('cpu') - - generator = Generator(h).to(_device) - state_dict_g = load_checkpoint( - weights_fpath, _device - ) - generator.load_state_dict(state_dict_g['generator']) - generator.eval() - generator.remove_weight_norm() - - -def is_loaded(): - return generator is not None - - -def infer_waveform(mel, progress_callback=None): - - if generator is None: - raise Exception("Please load hifi-gan in memory before using it") - - mel = torch.FloatTensor(mel).to(_device) - mel = mel.unsqueeze(0) - - with torch.no_grad(): - y_g_hat = generator(mel) - audio = y_g_hat.squeeze() - audio = audio.cpu().numpy() - - return audio, output_sample_rate - diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/exp/upernet_global_small/test_config_h32.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index a31e3874f76f9f7b089ac8834d85df2441af9b0e..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/utils/functions.py b/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/utils/functions.py deleted file mode 100644 index 590a6c11cea222ac9096b19f0e3dfe1b71b6c10b..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/simultaneous_translation/utils/functions.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - - -def prob_check(tensor, eps=1e-10): - assert not torch.isnan(tensor).any(), ( - "Nan in a probability tensor." - ) - # Add the eps here to prevent errors introduced by precision - assert tensor.le(1.0 + eps).all() and tensor.ge(0.0 - eps).all(), ( - "Incorrect values in a probability tensor" - ", 0.0 <= tensor <= 1.0" - ) - - -def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - Implementing exclusive cumprod. - There is cumprod in pytorch, however there is no exclusive mode. - cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - exclusive means - cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - """ - tensor_size = list(tensor.size()) - tensor_size[dim] = 1 - return_tensor = safe_cumprod( - torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim), - dim=dim, - eps=eps, - ) - - if dim == 0: - return return_tensor[:-1] - elif dim == 1: - return return_tensor[:, :-1] - elif dim == 2: - return return_tensor[:, :, :-1] - else: - raise RuntimeError( - "Cumprod on dimension 3 and more is not implemented" - ) - - -def safe_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - An implementation of cumprod to prevent precision issue. - cumprod(x) - = [x1, x1x2, x1x2x3, ....] - = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...] - = exp(cumsum(log(x))) - """ - - if (tensor + eps < 0).any().item(): - raise RuntimeError( - "Safe cumprod can only take non-negative tensors as input." - "Consider use torch.cumprod if you want to calculate negative values." - ) - - log_tensor = torch.log(tensor + eps) - cumsum_log_tensor = torch.cumsum(log_tensor, dim) - exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor) - return exp_cumsum_log_tensor - - -def moving_sum(x, start_idx: int, end_idx: int): - """ - From MONOTONIC CHUNKWISE ATTENTION - https://arxiv.org/pdf/1712.05382.pdf - Equation (18) - - x = [x_1, x_2, ..., x_N] - MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m - for n in {1, 2, 3, ..., N} - - x : src_len, batch_size - start_idx : start idx - end_idx : end idx - - Example - src_len = 5 - batch_size = 3 - x = - [[ 0, 5, 10], - [ 1, 6, 11], - [ 2, 7, 12], - [ 3, 8, 13], - [ 4, 9, 14]] - - MovingSum(x, 3, 1) = - [[ 0, 5, 10], - [ 1, 11, 21], - [ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39]] - - MovingSum(x, 1, 3) = - [[ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39], - [ 7, 17, 27], - [ 4, 9, 14]] - """ - # TODO: Make dimension configurable - assert start_idx > 0 and end_idx > 0 - batch_size, tgt_len, src_len = x.size() - x = x.view(-1, src_len).unsqueeze(1) - # batch_size, 1, src_len - moving_sum_weight = torch.ones([1, 1, end_idx + start_idx - 1]).type_as(x) - - moving_sum = torch.nn.functional.conv1d( - x, moving_sum_weight, padding=start_idx + end_idx - 1 - ).squeeze(1) - - moving_sum = moving_sum[:, end_idx:-start_idx] - - assert src_len == moving_sum.size(1) - assert batch_size * tgt_len == moving_sum.size(0) - - moving_sum = moving_sum.view(batch_size, tgt_len, src_len) - - return moving_sum diff --git a/spaces/kurianbenoy/Pallakku/app.py b/spaces/kurianbenoy/Pallakku/app.py deleted file mode 100644 index a98cadcd1cf6ea9c64d7f0a7c63a993705ec4dcf..0000000000000000000000000000000000000000 --- a/spaces/kurianbenoy/Pallakku/app.py +++ /dev/null @@ -1,64 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb. 
- -# %% auto 0 -__all__ = ['mf_transcribe', 'transcribe_malayalam_speech', 'gr_transcribe_malayalam_speech'] - -# %% app.ipynb 4 -import gradio as gr -from faster_whisper import WhisperModel - -# %% app.ipynb 8 -def transcribe_malayalam_speech(audio_file, compute_type="int8", device="cpu", folder="vegam-whisper-medium-ml-fp16"): - - model = WhisperModel(folder, device=device, compute_type=compute_type) - segments, info = model.transcribe(audio_file, beam_size=5) - - lst = [] - for segment in segments: - # print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) - lst.append(segment.text) - - return(" ".join(lst)) - -# %% app.ipynb 9 -def gr_transcribe_malayalam_speech(microphone, file_upload, compute_type="int8", device="cpu", folder="vegam-whisper-medium-ml-fp16"): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - audio_file = microphone if microphone is not None else file_upload - - model = WhisperModel(folder, device=device, compute_type=compute_type) - segments, info = model.transcribe(audio_file, beam_size=5) - - lst = [] - for segment in segments: - # print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) - lst.append(segment.text) - - return(" ".join(lst)) - -# %% app.ipynb 16 -mf_transcribe = gr.Interface( - fn=gr_transcribe_malayalam_speech, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", - title="PALLAKKU (പല്ലക്ക്)", - description=( - "Pallakku is a Malayalam speech to text demo leveraging the model-weights of [vegam-whisper-medium-ml](https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16)." - ), - article="Please note that this demo now uses CPU only and in my testing for a 5 seconds audio file it can take upto 15 seconds for results to come. 
If you are interested to use a GPU based API instead, feel free to contact the author @ kurian.bkk@gmail.com", - allow_flagging="never", -) - -# %% app.ipynb 17 -mf_transcribe.launch(share=False) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css deleted file mode 100644 index a528c508c9856f09311ecdc208c5d65121782769..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-1sc8eck{display:flex;flex-direction:column;flex-flow:column;margin:0;padding:0;height:100%}.codemirror-wrapper.svelte-1sc8eck{height:100%;overflow:auto}.cm-editor{height:100%}.cm-selectionBackground{background-color:#b9d2ff30!important}.cm-focused{outline:none!important}button.svelte-qi7jcw{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.check.svelte-qi7jcw{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}a.svelte-14d303a{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.copied.svelte-14d303a{color:var(--color-green-500)}.check.svelte-14d303a{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}div.svelte-1yin446{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;z-index:var(--layer-2);transition:.15s;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wxcairo.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wxcairo.py deleted file mode 100644 index 0416a187d0915ac6f35d1dcd9e4a89b820cee147..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_wxcairo.py +++ /dev/null @@ -1,40 +0,0 @@ -import wx.lib.wxcairo as wxcairo - -from .. import _api -from .backend_cairo import cairo, FigureCanvasCairo -from .backend_wx import _BackendWx, _FigureCanvasWxBase, FigureFrameWx -from .backend_wx import ( # noqa: F401 # pylint: disable=W0611 - NavigationToolbar2Wx as NavigationToolbar2WxCairo) - - -@_api.deprecated( - "3.6", alternative="FigureFrameWx(..., canvas_class=FigureCanvasWxCairo)") -class FigureFrameWxCairo(FigureFrameWx): - def get_canvas(self, fig): - return FigureCanvasWxCairo(self, -1, fig) - - -class FigureCanvasWxCairo(FigureCanvasCairo, _FigureCanvasWxBase): - """ - The FigureCanvas contains the figure and does event handling. - - In the wxPython backend, it is derived from wxPanel, and (usually) lives - inside a frame instantiated by a FigureManagerWx. The parent window - probably implements a wxSizer to control the displayed control size - but - we give a hint as to our preferred minimum size. 
- """ - - def draw(self, drawDC=None): - size = self.figure.bbox.size.astype(int) - surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, *size) - self._renderer.set_context(cairo.Context(surface)) - self._renderer.dpi = self.figure.dpi - self.figure.draw(self._renderer) - self.bitmap = wxcairo.BitmapFromImageSurface(surface) - self._isDrawn = True - self.gui_repaint(drawDC=drawDC) - - -@_BackendWx.export -class _BackendWxCairo(_BackendWx): - FigureCanvas = FigureCanvasWxCairo diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/script.py b/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/script.py deleted file mode 100644 index 0870ab4c3b6fd1e5abc274d799289a663eebcd54..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/script.py +++ /dev/null @@ -1,355 +0,0 @@ -""" -This file is responsible for the UI and how the application interracts with the rest of the system. -""" -import os -from pathlib import Path - -# Point to where nltk will find the required data. -os.environ['NLTK_DATA'] = str(Path("extensions/superboogav2/nltk_data").resolve()) - -import textwrap -import codecs -import gradio as gr - -import extensions.superboogav2.parameters as parameters - -from modules.logging_colors import logger -from modules import shared - -from .utils import create_metadata_source -from .chromadb import make_collector -from .download_urls import feed_url_into_collector -from .data_processor import process_and_add_to_collector -from .benchmark import benchmark -from .optimize import optimize -from .notebook_handler import input_modifier_internal -from .chat_handler import custom_generate_chat_prompt_internal -from .api import APIManager - -collector = None -api_manager = None - -def setup(): - global collector - global api_manager - collector = make_collector() - api_manager = APIManager(collector) - - if parameters.get_api_on(): - api_manager.start_server(parameters.get_api_port()) - -def _feed_data_into_collector(corpus): - yield '### Processing data...' - process_and_add_to_collector(corpus, collector, False, create_metadata_source('direct-text')) - yield '### Done.' - - -def _feed_file_into_collector(file): - yield '### Reading and processing the input dataset...' - text = file.decode('utf-8') - process_and_add_to_collector(text, collector, False, create_metadata_source('file')) - yield '### Done.' - - -def _feed_url_into_collector(urls): - for i in feed_url_into_collector(urls, collector): - yield i - yield '### Done.' - - -def _begin_benchmark(): - score, max_score = benchmark(Path("extensions/superboogav2/benchmark_texts/questions.json"), collector) - return f'**Score**: {score}/{max_score}' - - -def _begin_optimization(progress=gr.Progress()): - return optimize(collector, progress), *_get_optimizable_settings() - - -def _clear_data(): - collector.clear() - return "### Data Cleared!" 
- - -def _get_optimizable_settings() -> list: - preprocess_pipeline = [] - if parameters.should_to_lower(): - preprocess_pipeline.append('Lower Cases') - if parameters.should_remove_punctuation(): - preprocess_pipeline.append('Remove Punctuation') - if parameters.should_remove_specific_pos(): - preprocess_pipeline.append('Remove Adverbs') - if parameters.should_remove_stopwords(): - preprocess_pipeline.append('Remove Stop Words') - if parameters.should_lemmatize(): - preprocess_pipeline.append('Lemmatize') - if parameters.should_merge_spaces(): - preprocess_pipeline.append('Merge Spaces') - if parameters.should_strip(): - preprocess_pipeline.append('Strip Edges') - - return [ - parameters.get_time_power(), - parameters.get_time_steepness(), - parameters.get_significant_level(), - parameters.get_min_num_sentences(), - parameters.get_new_dist_strategy(), - parameters.get_delta_start(), - parameters.get_min_num_length(), - parameters.get_num_conversion_strategy(), - preprocess_pipeline, - parameters.get_chunk_count(), - parameters.get_context_len(), - parameters.get_chunk_len() - ] - - -def _apply_settings(optimization_steps, time_power, time_steepness, significant_level, min_sentences, new_dist_strat, delta_start, min_number_length, num_conversion, - preprocess_pipeline, api_port, api_on, injection_strategy, add_chat_to_data, manual, postfix, data_separator, prefix, max_token_count, - chunk_count, chunk_sep, context_len, chunk_regex, chunk_len, threads, strong_cleanup): - logger.debug('Applying settings.') - - try: - parameters.set_optimization_steps(optimization_steps) - parameters.set_significant_level(significant_level) - parameters.set_min_num_sentences(min_sentences) - parameters.set_new_dist_strategy(new_dist_strat) - parameters.set_delta_start(delta_start) - parameters.set_min_num_length(min_number_length) - parameters.set_num_conversion_strategy(num_conversion) - parameters.set_api_port(api_port) - parameters.set_api_on(api_on) - parameters.set_injection_strategy(injection_strategy) - parameters.set_add_chat_to_data(add_chat_to_data) - parameters.set_manual(manual) - parameters.set_postfix(codecs.decode(postfix, 'unicode_escape')) - parameters.set_data_separator(codecs.decode(data_separator, 'unicode_escape')) - parameters.set_prefix(codecs.decode(prefix, 'unicode_escape')) - parameters.set_max_token_count(max_token_count) - parameters.set_time_power(time_power) - parameters.set_time_steepness(time_steepness) - parameters.set_chunk_count(chunk_count) - parameters.set_chunk_separator(codecs.decode(chunk_sep, 'unicode_escape')) - parameters.set_context_len(context_len) - parameters.set_chunk_regex(chunk_regex) - parameters.set_chunk_len(chunk_len) - parameters.set_num_threads(threads) - parameters.set_strong_cleanup(strong_cleanup) - - preprocess_choices = ['Lower Cases', 'Remove Punctuation', 'Remove Adverbs', 'Remove Stop Words', 'Lemmatize', 'Merge Spaces', 'Strip Edges'] - for preprocess_method in preprocess_choices: - if preprocess_method == 'Lower Cases': - parameters.set_to_lower(preprocess_method in preprocess_pipeline) - elif preprocess_method == 'Remove Punctuation': - parameters.set_remove_punctuation(preprocess_method in preprocess_pipeline) - elif preprocess_method == 'Remove Adverbs': - parameters.set_remove_specific_pos(preprocess_method in preprocess_pipeline) - elif preprocess_method == 'Remove Stop Words': - parameters.set_remove_stopwords(preprocess_method in preprocess_pipeline) - elif preprocess_method == 'Lemmatize': - parameters.set_lemmatize(preprocess_method in 
preprocess_pipeline) - elif preprocess_method == 'Merge Spaces': - parameters.set_merge_spaces(preprocess_method in preprocess_pipeline) - elif preprocess_method == 'Strip Edges': - parameters.set_strip(preprocess_method in preprocess_pipeline) - - # Based on API on/off, start or stop the server - if api_manager is not None: - if parameters.get_api_on() and (not api_manager.is_server_running()): - api_manager.start_server(parameters.get_api_port()) - elif (not parameters.get_api_on()) and api_manager.is_server_running(): - api_manager.stop_server() - except Exception as e: - logger.warn(f'Could not properly apply settings: {str(e)}') - - -def custom_generate_chat_prompt(user_input, state, **kwargs): - return custom_generate_chat_prompt_internal(user_input, state, collector, **kwargs) - - -def input_modifier(string): - return input_modifier_internal(string, collector) - - -def ui(): - with gr.Accordion("Click for more information...", open=False): - gr.Markdown(textwrap.dedent(""" - - ## About - - This extension takes a dataset as input, breaks it into chunks, and adds the result to a local/offline Chroma database. - - The database is then queried during inference time to get the excerpts that are closest to your input. The idea is to create an arbitrarily large pseudo context. - - The core methodology was developed and contributed by kaiokendev, who is working on improvements to the method in this repository: https://github.com/kaiokendev/superbig - - ## Data input - - Start by entering some data in the interface below and then clicking on "Load data". - - Each time you load some new data, the old chunks are discarded. - - ## Chat mode - - #### Instruct - - On each turn, the chunks will be compared to your current input and the most relevant matches will be appended to the input in the following format: - - ``` - Consider the excerpts below as additional context: - ... - ``` - - The injection doesn't make it into the chat history. It is only used in the current generation. - - #### Regular chat - - The chunks from the external data sources are ignored, and the chroma database is built based on the chat history instead. The most relevant past exchanges relative to the present input are added to the context string. This way, the extension acts as a long term memory. - - ## Notebook/default modes - - Your question must be manually specified between `<|begin-user-input|>` and `<|end-user-input|>` tags, and the injection point must be specified with `<|injection-point|>`. - - The special tokens mentioned above (`<|begin-user-input|>`, `<|end-user-input|>`, and `<|injection-point|>`) are removed in the background before the text generation begins. - - Here is an example in Vicuna 1.1 format: - - ``` - A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
- - USER: - <|injection-point|> - - <|begin-user-input|>What datasets are mentioned in the text above?<|end-user-input|> - ASSISTANT: - ``` - """)) - - with gr.Row(): - with gr.Column(min_width=600): - with gr.Tab("Text input"): - data_input = gr.Textbox(lines=20, label='Input data') - update_data = gr.Button('Load data') - - with gr.Tab("URL input"): - url_input = gr.Textbox(lines=10, label='Input URLs', info='Enter one or more URLs separated by newline characters.') - strong_cleanup = gr.Checkbox(value=parameters.get_is_strong_cleanup(), label='Strong cleanup', info='Only keeps html elements that look like long-form text.') - threads = gr.Number(value=parameters.get_num_threads(), label='Threads', info='The number of threads to use while downloading the URLs.', precision=0) - update_url = gr.Button('Load data') - - with gr.Tab("File input"): - file_input = gr.File(label='Input file', type='binary') - update_file = gr.Button('Load data') - - with gr.Tab("Settings"): - with gr.Accordion("Processing settings", open=True): - chunk_len = gr.Textbox(value=parameters.get_chunk_len(), label='Chunk length', info='In characters, not tokens. This value is used when you click on "Load data".') - chunk_regex = gr.Textbox(value=parameters.get_chunk_regex(), label='Chunk regex', info='Will specifically add the captured text to the embeddings.') - context_len = gr.Textbox(value=parameters.get_context_len(), label='Context length', info='In characters, not tokens. How much context to load around each chunk.') - chunk_sep = gr.Textbox(value=codecs.encode(parameters.get_chunk_separator(), 'unicode_escape').decode(), label='Chunk separator', info='Used to manually split chunks. Manually split chunks longer than chunk length are split again. This value is used when you click on "Load data".') - - with gr.Accordion("Generation settings", open=False): - chunk_count = gr.Number(value=parameters.get_chunk_count(), label='Chunk count', info='The number of closest-matching chunks to include in the prompt.') - max_token_count = gr.Number(value=parameters.get_max_token_count(), label='Max Context Tokens', info='The context length in tokens will not exceed this value.') - prefix = gr.Textbox(value=codecs.encode(parameters.get_prefix(), 'unicode_escape').decode(), label='Prefix', info='What to put before the injection point.') - data_separator = gr.Textbox(value=codecs.encode(parameters.get_data_separator(), 'unicode_escape').decode(), label='Data separator', info='When multiple pieces of distant data are added, they might be unrelated. It\'s important to separate them.') - postfix = gr.Textbox(value=codecs.encode(parameters.get_postfix(), 'unicode_escape').decode(), label='Postfix', info='What to put after the injection point.') - with gr.Row(): - manual = gr.Checkbox(value=parameters.get_is_manual(), label="Is Manual", info="Manually specify when to use ChromaDB. 
Insert `!c` at the start or end of the message to trigger a query.", visible=shared.is_chat()) - add_chat_to_data = gr.Checkbox(value=parameters.get_add_chat_to_data(), label="Add Chat to Data", info="Automatically feed the chat history as you chat.", visible=shared.is_chat()) - injection_strategy = gr.Radio(choices=[parameters.PREPEND_TO_LAST, parameters.APPEND_TO_LAST, parameters.HIJACK_LAST_IN_CONTEXT], value=parameters.get_injection_strategy(), label='Injection Strategy', info='Where to inject the messages in chat or instruct mode.', visible=shared.is_chat()) - with gr.Row(): - api_on = gr.Checkbox(value=parameters.get_api_on(), label="Turn on API", info="Check this to turn on the API service.") - api_port = gr.Number(value=parameters.get_api_port(), label="API Port", info="The port on which the API service will run.") - - with gr.Accordion("Advanced settings", open=False): - preprocess_set_choices = [] - if parameters.should_to_lower(): - preprocess_set_choices.append('Lower Cases') - if parameters.should_remove_punctuation(): - preprocess_set_choices.append('Remove Punctuation') - if parameters.should_remove_specific_pos(): - preprocess_set_choices.append('Remove Adverbs') - if parameters.should_remove_stopwords(): - preprocess_set_choices.append('Remove Stop Words') - if parameters.should_lemmatize(): - preprocess_set_choices.append('Lemmatize') - if parameters.should_merge_spaces(): - preprocess_set_choices.append('Merge Spaces') - if parameters.should_strip(): - preprocess_set_choices.append('Strip Edges') - - preprocess_pipeline = gr.CheckboxGroup(label='Preprocessing pipeline', choices=[ - 'Lower Cases', - 'Remove Punctuation', - 'Remove Adverbs', - 'Remove Stop Words', - 'Lemmatize', - 'Merge Spaces', - 'Strip Edges', - ], value=preprocess_set_choices, interactive=True, info='How to preprocess the text before it is turned into an embedding.') - - with gr.Row(): - num_conversion = gr.Dropdown(choices=[parameters.NUM_TO_WORD_METHOD, parameters.NUM_TO_CHAR_METHOD, parameters.NUM_TO_CHAR_LONG_METHOD, 'None'], value=parameters.get_num_conversion_strategy(), label="Number Conversion Method", info='How to preprocess numbers before creating the embeddings.', interactive=True) - min_number_length = gr.Number(value=parameters.get_min_num_length(), label='Number Length Threshold', info='In digits. Only numbers that have at least that many digits will be converted.', interactive=True) - - delta_start = gr.Number(value=parameters.get_delta_start(), label='Delta Start Index', info='If the system encounters two identical embeddings, and they both start within the same delta, then only the first will be considered.', interactive=True) - new_dist_strat = gr.Dropdown(choices=[parameters.DIST_MIN_STRATEGY, parameters.DIST_HARMONIC_STRATEGY, parameters.DIST_GEOMETRIC_STRATEGY, parameters.DIST_ARITHMETIC_STRATEGY], value=parameters.get_new_dist_strategy(), label="Distance Strategy", info='When two embedding texts are merged, the distance of the new piece will be decided using one of these strategies.', interactive=True) - min_sentences = gr.Number(value=parameters.get_min_num_sentences(), label='Summary Threshold', info='In sentences. 
The minumum number of sentences to trigger text-rank summarization.', interactive=True) - significant_level = gr.Slider(0.8, 2, value=parameters.get_significant_level(), label='Significant Level', info='Defines the cut-off for what is considered a "significant" distance relative to the median distance among the returned samples.', interactive=True) - time_steepness = gr.Slider(0.01, 1.0, value=parameters.get_time_steepness(), label='Time Weighing Steepness', info='How differently two close excerpts are going to be weighed.') - time_power = gr.Slider(0.0, 1.0, value=parameters.get_time_power(), label='Time Weighing Power', info='How influencial is the weighing. At 1.0, old entries won\'t be considered') - - with gr.Tab("Benchmark"): - benchmark_button = gr.Button('Benchmark') - optimize_button = gr.Button('Optimize') - optimization_steps = gr.Number(value=parameters.get_optimization_steps(), label='Optimization Steps', info='For how many steps to optimize.', interactive=True) - - - clear_button = gr.Button('❌ Clear Data') - - - with gr.Column(): - last_updated = gr.Markdown() - - all_params = [optimization_steps, time_power, time_steepness, significant_level, min_sentences, new_dist_strat, delta_start, min_number_length, num_conversion, - preprocess_pipeline, api_port, api_on, injection_strategy, add_chat_to_data, manual, postfix, data_separator, prefix, max_token_count, - chunk_count, chunk_sep, context_len, chunk_regex, chunk_len, threads, strong_cleanup] - optimizable_params = [time_power, time_steepness, significant_level, min_sentences, new_dist_strat, delta_start, min_number_length, num_conversion, - preprocess_pipeline, chunk_count, context_len, chunk_len] - - - update_data.click(_feed_data_into_collector, [data_input], last_updated, show_progress=False) - update_url.click(_feed_url_into_collector, [url_input], last_updated, show_progress=False) - update_file.click(_feed_file_into_collector, [file_input], last_updated, show_progress=False) - benchmark_button.click(_begin_benchmark, [], last_updated, show_progress=True) - optimize_button.click(_begin_optimization, [], [last_updated] + optimizable_params, show_progress=True) - clear_button.click(_clear_data, [], last_updated, show_progress=False) - - - optimization_steps.input(fn=_apply_settings, inputs=all_params, show_progress=False) - time_power.input(fn=_apply_settings, inputs=all_params, show_progress=False) - time_steepness.input(fn=_apply_settings, inputs=all_params, show_progress=False) - significant_level.input(fn=_apply_settings, inputs=all_params, show_progress=False) - min_sentences.input(fn=_apply_settings, inputs=all_params, show_progress=False) - new_dist_strat.input(fn=_apply_settings, inputs=all_params, show_progress=False) - delta_start.input(fn=_apply_settings, inputs=all_params, show_progress=False) - min_number_length.input(fn=_apply_settings, inputs=all_params, show_progress=False) - num_conversion.input(fn=_apply_settings, inputs=all_params, show_progress=False) - preprocess_pipeline.input(fn=_apply_settings, inputs=all_params, show_progress=False) - api_port.input(fn=_apply_settings, inputs=all_params, show_progress=False) - api_on.input(fn=_apply_settings, inputs=all_params, show_progress=False) - injection_strategy.input(fn=_apply_settings, inputs=all_params, show_progress=False) - add_chat_to_data.input(fn=_apply_settings, inputs=all_params, show_progress=False) - manual.input(fn=_apply_settings, inputs=all_params, show_progress=False) - postfix.input(fn=_apply_settings, inputs=all_params, 
show_progress=False) - data_separator.input(fn=_apply_settings, inputs=all_params, show_progress=False) - prefix.input(fn=_apply_settings, inputs=all_params, show_progress=False) - max_token_count.input(fn=_apply_settings, inputs=all_params, show_progress=False) - chunk_count.input(fn=_apply_settings, inputs=all_params, show_progress=False) - chunk_sep.input(fn=_apply_settings, inputs=all_params, show_progress=False) - context_len.input(fn=_apply_settings, inputs=all_params, show_progress=False) - chunk_regex.input(fn=_apply_settings, inputs=all_params, show_progress=False) - chunk_len.input(fn=_apply_settings, inputs=all_params, show_progress=False) - threads.input(fn=_apply_settings, inputs=all_params, show_progress=False) - strong_cleanup.input(fn=_apply_settings, inputs=all_params, show_progress=False) \ No newline at end of file diff --git a/spaces/liimefruit/RVCollection/infer_pack/transforms.py b/spaces/liimefruit/RVCollection/infer_pack/transforms.py deleted file mode 100644 index 2cb0fc9dc2454c5af0243378d692454733947ac6..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - 
unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, 
-logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Asuras Wrath Pc Download [2021] Utorrent For Windows.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Asuras Wrath Pc Download [2021] Utorrent For Windows.md deleted file mode 100644 index 6bc433699df645721c63cfb3dcb5f63c3a6b2ecf..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Asuras Wrath Pc Download [2021] Utorrent For Windows.md +++ /dev/null @@ -1,6 +0,0 @@ - -

        asuras wrath crack, asura wrath pc download utorrent for windows. Asuras Wrath is a first-person action game that comes in a download package. The player is Asura, an avatar of God, a warrior who needs help against a group of Asuras called the sects. He can use physical, magical, and tactical attacks to defeat the demons. The PlayStation 3 and Xbox 360 versions are available for download on the Internet, and the game can also be played on PCs. It takes place in an imaginary world called Sajuva. In this version, the player is Asura, an avatar of God, who also needs help and weapons to fight the sects.

        -

        Well, the other version that you can find for download is the Xbox 360 version of the game, which has exclusive missions and content that the standard version lacks. It can be downloaded from Microsoft's Xbox Live Marketplace, where the downloadable content is available for 800 Microsoft Points (about 7 US dollars or 1 euro). Asuras Wrath 1.4.0 Full Version w/ Crack for Windows is the PC download package described above.

        -

        asuras wrath pc download utorrent for windows


        DOWNLOAD ✏ ✏ ✏ https://bytlly.com/2uGwFd



        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Crack !!TOP!! Steinberg Cubase SX 3 (H20 Syncrhosoft Emu).md b/spaces/lincquiQcaudo/Top-20-Diffusion/Crack !!TOP!! Steinberg Cubase SX 3 (H20 Syncrhosoft Emu).md deleted file mode 100644 index e2ec84c77a7ca4b20e0d510b239182acd8920b74..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Crack !!TOP!! Steinberg Cubase SX 3 (H20 Syncrhosoft Emu).md +++ /dev/null @@ -1,6 +0,0 @@ -

        CRACK Steinberg Cubase SX 3 (H20 Syncrhosoft Emu)


        Download Zip »»» https://bytlly.com/2uGwuB



        - -August 31st, Requires an already installed Cubase SX 3 version on your computer! ... Soft H Full Crack Download Pc by sufregunto - Issuu ... H20 Cubase Sx3 Support Forums; Steinberg Cubase SX 3 (H20 Syncrhosoft Emu) ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Digital Electronics Book By Salivahanan Pdf 12 !FULL!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Digital Electronics Book By Salivahanan Pdf 12 !FULL!.md deleted file mode 100644 index d65ed7ef39df1daf259bf36f9f3ac1e0fee4c0aa..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Digital Electronics Book By Salivahanan Pdf 12 !FULL!.md +++ /dev/null @@ -1,87 +0,0 @@ -
        -

        Digital Electronics Book by Salivahanan PDF 12: A Review

        -

        Digital electronics is a branch of engineering that deals with the design and analysis of electronic circuits that use digital signals. It is essential for applications such as computers, communication systems, robotics, and embedded systems. If you are looking for a comprehensive and student-friendly textbook on digital electronics, you might want to check out Digital Electronics Book by Salivahanan PDF 12.

        -

        What is Digital Electronics Book by Salivahanan PDF 12?

        -

        Digital Electronics Book by Salivahanan PDF 12 is a book written by S. Salivahanan and S. Pravin Kumar, published by Vikas Publishing House Pvt Limited in 2010. The book is intended for BE/BTech students of Electronics and Communication, Information Technology, Computer Science, Applied Physics, Computer Software, MCA and AIEE. The book covers the basics of digital technology, including the design aspects of circuits. The book also serves as a good reference for competitive examinations.

        -

        digital electronics book by salivahanan pdf 12


        Download File ——— https://bytlly.com/2uGx4r



        -

        What are the features of Digital Electronics Book by Salivahanan PDF 12?

        -

        Digital Electronics Book by Salivahanan PDF 12 has several features that make it a useful and engaging textbook for students and teachers alike. Some of these features are:

        -
          -
        • The book has a lucid and comprehensive presentation style that makes it easy to understand and follow.
        • -
        • The book has numerous illustrative examples and review questions that help students to test their knowledge and apply the concepts.
        • -
        • The book has a balanced coverage of both theory and practice, with topics ranging from number systems and codes to microprocessors and microcontrollers.
        • -
        • The book has a modular approach that allows flexibility in choosing the topics according to the syllabus and interest of the students.
        • -
        • The book has a rich pool of pedagogy that includes chapter objectives, summary, key terms, multiple choice questions, short answer questions, long answer questions, problems, exercises, and references.
        • -
        -

        How to download Digital Electronics Book by Salivahanan PDF 12?

        -

        If you are interested in downloading Digital Electronics Book by Salivahanan PDF 12, you can find it online on various websites that offer free or paid access to ebooks. Some of these websites are:

        -
          -
        • Google Books: You can preview some pages of the book on Google Books and buy the ebook from Amazon.com or Barnes&Noble.com.
        • -
        • Scribd: You can download the book as a PDF file from Scribd if you have a subscription or a free trial account.
        • -
        • PDF Drive: You can download the book as a PDF file from PDF Drive without any registration or payment.
        • -
        -

        Conclusion

        -

        Digital Electronics Book by Salivahanan PDF 12 is a well-written and well-structured textbook on digital electronics that covers both theoretical and practical aspects of the subject. The book is suitable for undergraduate students of engineering as well as for competitive examinations. The book is available online in PDF format for easy access and convenience. If you are looking for a reliable and comprehensive source of learning digital electronics, you should definitely consider Digital Electronics Book by Salivahanan PDF 12.

        -

        What are the benefits of reading Digital Electronics Book by Salivahanan PDF 12?

        -

        Reading Digital Electronics Book by Salivahanan PDF 12 can offer you many benefits, such as:

        -
          -
        • You can learn the fundamentals of digital electronics in a clear and concise manner, with examples and diagrams to illustrate the concepts.
        • -
        • You can enhance your skills and knowledge in digital circuit design, analysis, and implementation, with topics such as logic gates, flip-flops, counters, registers, multiplexers, decoders, encoders, adders, subtractors, comparators, converters, memory devices, programmable logic devices, microprocessors, and microcontrollers.
        • -
        • You can prepare yourself for competitive examinations and interviews, with review questions and problems at the end of each chapter.
        • -
        • You can access the book anytime and anywhere, with a PDF format that is compatible with various devices.
        • -
        -

        How to read Digital Electronics Book by Salivahanan PDF 12?

        -

        To read Digital Electronics Book by Salivahanan PDF 12, you need to follow some steps, such as:

        -
          -
        1. Download the book from one of the websites mentioned above.
        2. -
        3. Open the book with a PDF reader software or application.
        4. -
        5. Read the book from the beginning or choose a specific chapter according to your interest or syllabus.
        6. -
        7. Follow the examples and diagrams carefully and try to understand the logic behind them.
        8. -
        9. Solve the review questions and problems at the end of each chapter and check your answers with the solutions given at the end of the book.
        10. -
        11. Revise the key terms and summary given at the end of each chapter to reinforce your learning.
        12. -
        -

        Conclusion

        -

        Digital Electronics Book by Salivahanan PDF 12 is a valuable resource for anyone who wants to learn digital electronics in a simple and comprehensive way. The book covers both theory and practice of digital circuits and provides ample examples and questions to test your understanding. The book is available in PDF format for easy download and access. Whether you are a student, a teacher, or a professional, you will find Digital Electronics Book by Salivahanan PDF 12 useful and informative.

        -

        What are the topics covered in Digital Electronics Book by Salivahanan PDF 12?

        -

        Digital Electronics Book by Salivahanan PDF 12 covers a wide range of topics related to digital electronics, such as:

        -
          -
        • Number systems and codes: This chapter introduces the binary, octal, decimal, and hexadecimal number systems and their conversions. It also explains the various types of codes, such as BCD, excess-3, gray, ASCII, EBCDIC, and error-detecting and correcting codes.
        • -
        • Boolean algebra and logic gates: This chapter explains the basic concepts of Boolean algebra, such as laws, theorems, duality principle, and De Morgan's theorem. It also describes the various types of logic gates, such as AND, OR, NOT, NAND, NOR, XOR, and XNOR. (A short worked example follows this list.)
        • -
        • Simplification of Boolean functions: This chapter discusses the methods of simplifying Boolean functions using Karnaugh maps and Quine-McCluskey method. It also introduces the concept of don't care conditions and prime implicants.
        • -
        • Combinational logic circuits: This chapter deals with the design and analysis of combinational logic circuits, such as adders, subtractors, comparators, code converters, multiplexers, demultiplexers, encoders, decoders, parity generators and checkers.
        • -
        • Sequential logic circuits: This chapter covers the design and analysis of sequential logic circuits, such as flip-flops, counters, registers, shift registers, ring counters, Johnson counters, ripple counters, synchronous counters.
        • -
        • Programmable logic devices: This chapter introduces the concept of programmable logic devices (PLDs), such as programmable logic array (PLA), programmable array logic (PAL), generic array logic (GAL), complex programmable logic device (CPLD), and field programmable gate array (FPGA).
        • -
        • Memory devices: This chapter explains the various types of memory devices, such as random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), and flash memory.
        • -
        • Microprocessors: This chapter gives an overview of microprocessors, covering their architecture, instruction set, addressing modes, and programming techniques. It also discusses some popular microprocessors, such as 8085 and 8086.
        • -
        • Microcontrollers: This chapter gives an overview of microcontrollers, covering their architecture, instruction set, and addressing modes. It also discusses some popular microcontrollers, such as 8051 and PIC.
        • -
        -
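        To make the Boolean algebra and number-system topics above concrete, here is a minimal Python sketch (my own illustration, not taken from the book) that exhaustively verifies De Morgan's theorem over all truth-value combinations and converts a binary string to its decimal value:

```python
from itertools import product

def de_morgan_holds() -> bool:
    # Check NOT(A AND B) == (NOT A) OR (NOT B) and
    # NOT(A OR B) == (NOT A) AND (NOT B) for every combination of inputs.
    for a, b in product([False, True], repeat=2):
        if (not (a and b)) != ((not a) or (not b)):
            return False
        if (not (a or b)) != ((not a) and (not b)):
            return False
    return True

def binary_to_decimal(bits: str) -> int:
    # Convert a binary string such as "1011" to its decimal value (11).
    value = 0
    for bit in bits:
        value = value * 2 + (1 if bit == "1" else 0)
    return value

print(de_morgan_holds())          # True
print(binary_to_decimal("1011"))  # 11
```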

        Conclusion

        -

        Digital Electronics Book by Salivahanan PDF 12 is a comprehensive and student-friendly textbook on digital electronics that covers both theoretical and practical aspects of the subject. The book is suitable for undergraduate students of engineering as well as for competitive examinations, and it is available online in PDF format for easy download and access. Whether you are a student, a teacher, or a professional, you will find Digital Electronics Book by Salivahanan PDF 12 useful and informative.

        -

        -

        What are the reviews of Digital Electronics Book by Salivahanan PDF 12?

        -

        Digital Electronics Book by Salivahanan PDF 12 has received positive reviews from students and teachers who have used it as a textbook or a reference book. Some of the reviews are:

        -
        -

        "This book is very good for beginners as well as advanced learners. It covers all the topics in a simple and systematic way. The examples and diagrams are very helpful to understand the concepts. The questions and problems are also very useful to practice and test your knowledge. I recommend this book to anyone who wants to learn digital electronics."

        -A student from Anna University -
        -
        -

        "This book is one of the best books on digital electronics that I have ever read. It is very comprehensive and covers both theory and practice of digital circuits. The book is very well-written and organized. The language is clear and easy to follow. The pedagogy is excellent and includes chapter objectives, summary, key terms, multiple choice questions, short answer questions, long answer questions, problems, exercises, and references. The book is also updated with the latest developments in digital technology. I use this book as a reference for my teaching and research."

        -A professor from IIT Delhi -
        -
        -

        "This book is a must-have for anyone who wants to learn digital electronics. It is very detailed and thorough in explaining the concepts and techniques of digital circuits. The book also provides many practical examples and applications of digital electronics in various fields. The book is very user-friendly and has a PDF format that can be easily downloaded and accessed on any device. The book is also very affordable and worth every penny."

        -A professional from Intel Corporation -
        -

        What are some other books on digital electronics?

        -

        If you are interested in reading more books on digital electronics, you can check out some of these books:

        -
          -
        • Digital Design by M. Morris Mano and Michael D. Ciletti: This book is a classic text on digital design that covers the principles and practices of modern digital design.
        • -
        • Fundamentals of Digital Logic with Verilog Design by Stephen Brown and Zvonko Vranesic: This book is a comprehensive introduction to digital logic design that integrates Verilog HDL with examples and exercises.
        • -
        • Modern Digital Electronics by R.P. Jain: This book is a popular text on digital electronics that covers both combinational and sequential logic circuits as well as microprocessors and microcontrollers.
        • -
        • Digital Systems: Principles and Applications by Ronald J. Tocci, Neal S. Widmer, and Gregory L. Moss: This book is a practical text on digital systems that covers topics such as number systems, codes, logic gates, flip-flops, counters, registers, memory devices, programmable logic devices, microprocessors, microcontrollers, interfacing devices, data converters, and troubleshooting techniques.
        • -
        • Digital Electronics: A Practical Approach with VHDL by William Kleitz: This book is a hands-on text on digital electronics that uses VHDL to illustrate the design of digital circuits.
        • -
        -

        Conclusion

        -

        Digital Electronics Book by Salivahanan PDF 12 is a comprehensive and student-friendly textbook on digital electronics that covers both theoretical and practical aspects of the subject. It is suitable for undergraduate students of engineering as well as for competitive examinations, and it is available online in PDF format for easy download and access. Its features include a lucid and comprehensive presentation style, numerous illustrative examples and review questions, a balanced coverage of theory and practice, a modular approach, and a rich pool of pedagogy, and it has received positive reviews from students, teachers, and professionals who have used it as a textbook or a reference book. The book covers a wide range of topics, such as number systems and codes, Boolean algebra and logic gates, simplification of Boolean functions, combinational logic circuits, sequential logic circuits, programmable logic devices, memory devices, microprocessors, and microcontrollers. This article also lists some other books on digital electronics that you can check out for further reading. Whether you are a student, a teacher, or a professional, you will find Digital Electronics Book by Salivahanan PDF 12 useful and informative.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Gangadhar Meher Poems In Oriya Pdf Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Gangadhar Meher Poems In Oriya Pdf Download.md deleted file mode 100644 index ce294e43e4b83d439fe158d219523af0fda224a0..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Gangadhar Meher Poems In Oriya Pdf Download.md +++ /dev/null @@ -1,22 +0,0 @@ -

        gangadhar meher poems in oriya pdf download


        Downloadhttps://bytlly.com/2uGypm



        -
        -, Biography, Religion, Theology, History, Buddhism, Buddhism, India - -Description: “Gangadhar was a devotee of Jnaneswara temple and had many a time prayed before the image of Lord Krishna. In the month of Baishakh, A.D. 1238, Gangadhar had given a donation of nearly 100 cows for a celebration of Krishna Janmashtami”. Compiled by Gangadhar, this Gathanakamu has stories of Krishna and gopis; part of this Gathanakamu is reproduced in part in the Appendix. - -We are an organisation of people dedicated to the propagation and the dissemination of news and views on any subject or ideology through print, broadcast, cable, internet, media out-reach programmes, school lectures, articles, books and community outreach activities. We believe that when we share ideas we contribute to the growth and development of the individual, the organisation and the nation.Q: - -Mysql: Building and running binary tree of queries - -I have a set of data which is stored in a binary tree of binary trees. Each leaf node of the tree is a single row of a table, and each internal node is a query that returns data that joins three tables together. I want to be able to generate a tree of queries that will return the data I need. However, the queries all have to be SELECTs, so I am working with a codeigniter framework. So far I am storing each query in a string, and storing that string in a table. I need to loop through each node of the tree and run the query, and then grab the resulting rows and store those in the table (if they are empty, I will need to update the table). However, I want to ensure that I am selecting a single node of the tree. I was thinking that if I stored the result of the query as a string, I could use a CASE statement to determine which branch of the tree the node is a part of, and then store that query in the corresponding table. - -If I was doing this in PHP, I would use a switch statement to determine the branch. Is there an equivalent way to do this in mysql? - -A: - -First of all, it seems like you want to generate queries not to query a table. If that's the case, you should generate those queries first, then query them later, not the other way around. - -If 4fefd39f24
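        The MySQL question quoted above asks how to walk a binary tree whose internal nodes store join queries and whose leaves store single-row queries, run each node's query, and keep track of which branch it came from; the answer's advice is to generate all the queries first and only then execute them. Below is a minimal, hypothetical Python sketch of that two-pass idea using the standard-library sqlite3 module (the QueryNode class, the table name t, and the sample queries are invented for illustration and are not from the original post):

```python
import sqlite3

class QueryNode:
    # Hypothetical node of the query tree: each node stores one SQL string.
    def __init__(self, sql, left=None, right=None):
        self.sql = sql
        self.left = left
        self.right = right

def collect_queries(node, branch="root"):
    # First pass: walk the tree and collect (branch, sql) pairs
    # without executing anything.
    if node is None:
        return []
    pairs = [(branch, node.sql)]
    pairs += collect_queries(node.left, branch + ".L")
    pairs += collect_queries(node.right, branch + ".R")
    return pairs

def run_queries(conn, pairs):
    # Second pass: execute each stored query and keep the rows per branch.
    return {branch: conn.execute(sql).fetchall() for branch, sql in pairs}

# Toy example with an in-memory database and an invented table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, label TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
tree = QueryNode("SELECT * FROM t",
                 left=QueryNode("SELECT id FROM t WHERE id = 1"),
                 right=QueryNode("SELECT label FROM t WHERE id = 2"))
print(run_queries(conn, collect_queries(tree)))
```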
        -
        -
        -

        diff --git a/spaces/lithiumice/SadTalker/src/utils/audio.py b/spaces/lithiumice/SadTalker/src/utils/audio.py deleted file mode 100644 index 89433eb4c681112804fbed72b157700f553739a8..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/utils/audio.py +++ /dev/null @@ -1,136 +0,0 @@ -import librosa -import librosa.filters -import numpy as np -# import tensorflow as tf -from scipy import signal -from scipy.io import wavfile -from src.utils.hparams import hparams as hp - -def load_wav(path, sr): - return librosa.core.load(path, sr=sr)[0] - -def save_wav(wav, path, sr): - wav *= 32767 / max(0.01, np.max(np.abs(wav))) - #proposed by @dsmiller - wavfile.write(path, sr, wav.astype(np.int16)) - -def save_wavenet_wav(wav, path, sr): - librosa.output.write_wav(path, wav, sr=sr) - -def preemphasis(wav, k, preemphasize=True): - if preemphasize: - return signal.lfilter([1, -k], [1], wav) - return wav - -def inv_preemphasis(wav, k, inv_preemphasize=True): - if inv_preemphasize: - return signal.lfilter([1], [1, -k], wav) - return wav - -def get_hop_size(): - hop_size = hp.hop_size - if hop_size is None: - assert hp.frame_shift_ms is not None - hop_size = int(hp.frame_shift_ms / 1000 * hp.sample_rate) - return hop_size - -def linearspectrogram(wav): - D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(np.abs(D)) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def melspectrogram(wav): - D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(_linear_to_mel(np.abs(D))) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def _lws_processor(): - import lws - return lws.lws(hp.n_fft, get_hop_size(), fftsize=hp.win_size, mode="speech") - -def _stft(y): - if hp.use_lws: - return _lws_processor(hp).stft(y).T - else: - return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=get_hop_size(), win_length=hp.win_size) - -########################################################## -#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!) 
-def num_frames(length, fsize, fshift): - """Compute number of time frames of spectrogram - """ - pad = (fsize - fshift) - if length % fshift == 0: - M = (length + pad * 2 - fsize) // fshift + 1 - else: - M = (length + pad * 2 - fsize) // fshift + 2 - return M - - -def pad_lr(x, fsize, fshift): - """Compute left and right padding - """ - M = num_frames(len(x), fsize, fshift) - pad = (fsize - fshift) - T = len(x) + 2 * pad - r = (M - 1) * fshift + fsize - T - return pad, pad + r -########################################################## -#Librosa correct padding -def librosa_pad_lr(x, fsize, fshift): - return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0] - -# Conversions -_mel_basis = None - -def _linear_to_mel(spectogram): - global _mel_basis - if _mel_basis is None: - _mel_basis = _build_mel_basis() - return np.dot(_mel_basis, spectogram) - -def _build_mel_basis(): - assert hp.fmax <= hp.sample_rate // 2 - return librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft, n_mels=hp.num_mels, - fmin=hp.fmin, fmax=hp.fmax) - -def _amp_to_db(x): - min_level = np.exp(hp.min_level_db / 20 * np.log(10)) - return 20 * np.log10(np.maximum(min_level, x)) - -def _db_to_amp(x): - return np.power(10.0, (x) * 0.05) - -def _normalize(S): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return np.clip((2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value, - -hp.max_abs_value, hp.max_abs_value) - else: - return np.clip(hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)), 0, hp.max_abs_value) - - assert S.max() <= 0 and S.min() - hp.min_level_db >= 0 - if hp.symmetric_mels: - return (2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value - else: - return hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)) - -def _denormalize(D): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return (((np.clip(D, -hp.max_abs_value, - hp.max_abs_value) + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) - + hp.min_level_db) - else: - return ((np.clip(D, 0, hp.max_abs_value) * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) - - if hp.symmetric_mels: - return (((D + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) + hp.min_level_db) - else: - return ((D * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) diff --git a/spaces/ma-xu/LIVE/pybind11/tools/pybind11Tools.cmake b/spaces/ma-xu/LIVE/pybind11/tools/pybind11Tools.cmake deleted file mode 100644 index 10f15a30917056f8d69cff833e2c905aede08e50..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tools/pybind11Tools.cmake +++ /dev/null @@ -1,188 +0,0 @@ -# tools/pybind11Tools.cmake -- Build system for the pybind11 modules -# -# Copyright (c) 2015 Wenzel Jakob -# -# All rights reserved. Use of this source code is governed by a -# BSD-style license that can be found in the LICENSE file. - -# Built-in in CMake 3.5+ -include(CMakeParseArguments) - -if(pybind11_FIND_QUIETLY) - set(_pybind11_quiet QUIET) -endif() - -# If this is the first run, PYTHON_VERSION can stand in for PYBIND11_PYTHON_VERSION -if(NOT DEFINED PYBIND11_PYTHON_VERSION AND DEFINED PYTHON_VERSION) - message(WARNING "Set PYBIND11_PYTHON_VERSION to search for a specific version, not " - "PYTHON_VERSION (which is an output). 
Assuming that is what you " - "meant to do and continuing anyway.") - set(PYBIND11_PYTHON_VERSION - "${PYTHON_VERSION}" - CACHE STRING "Python version to use for compiling modules") - unset(PYTHON_VERSION) - unset(PYTHON_VERSION CACHE) -else() - # If this is set as a normal variable, promote it, otherwise, make an empty cache variable. - set(PYBIND11_PYTHON_VERSION - "${PYBIND11_PYTHON_VERSION}" - CACHE STRING "Python version to use for compiling modules") -endif() - -# A user can set versions manually too -set(Python_ADDITIONAL_VERSIONS - "3.9;3.8;3.7;3.6;3.5;3.4" - CACHE INTERNAL "") - -list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}") -find_package(PythonLibsNew ${PYBIND11_PYTHON_VERSION} MODULE REQUIRED ${_pybind11_quiet}) -list(REMOVE_AT CMAKE_MODULE_PATH -1) - -# Cache variables so pybind11_add_module can be used in parent projects -set(PYTHON_INCLUDE_DIRS - ${PYTHON_INCLUDE_DIRS} - CACHE INTERNAL "") -set(PYTHON_LIBRARIES - ${PYTHON_LIBRARIES} - CACHE INTERNAL "") -set(PYTHON_MODULE_PREFIX - ${PYTHON_MODULE_PREFIX} - CACHE INTERNAL "") -set(PYTHON_MODULE_EXTENSION - ${PYTHON_MODULE_EXTENSION} - CACHE INTERNAL "") -set(PYTHON_VERSION_MAJOR - ${PYTHON_VERSION_MAJOR} - CACHE INTERNAL "") -set(PYTHON_VERSION_MINOR - ${PYTHON_VERSION_MINOR} - CACHE INTERNAL "") -set(PYTHON_VERSION - ${PYTHON_VERSION} - CACHE INTERNAL "") -set(PYTHON_IS_DEBUG - "${PYTHON_IS_DEBUG}" - CACHE INTERNAL "") - -if(PYBIND11_MASTER_PROJECT) - if(PYTHON_MODULE_EXTENSION MATCHES "pypy") - if(NOT DEFINED PYPY_VERSION) - execute_process( - COMMAND ${PYTHON_EXECUTABLE} -c - [=[import sys; print(".".join(map(str, sys.pypy_version_info[:3])))]=] - OUTPUT_VARIABLE pypy_version) - set(PYPY_VERSION - ${pypy_version} - CACHE INTERNAL "") - endif() - message(STATUS "PYPY ${PYPY_VERSION} (Py ${PYTHON_VERSION})") - else() - message(STATUS "PYTHON ${PYTHON_VERSION}") - endif() -endif() - -# Only add Python for build - must be added during the import for config since it has to be re-discovered. 
-set_property( - TARGET pybind11::pybind11 - APPEND - PROPERTY INTERFACE_INCLUDE_DIRECTORIES $) - -# Python debug libraries expose slightly different objects before 3.8 -# https://docs.python.org/3.6/c-api/intro.html#debugging-builds -# https://stackoverflow.com/questions/39161202/how-to-work-around-missing-pymodule-create2-in-amd64-win-python35-d-lib -if(PYTHON_IS_DEBUG) - set_property( - TARGET pybind11::pybind11 - APPEND - PROPERTY INTERFACE_COMPILE_DEFINITIONS Py_DEBUG) -endif() - -set_property( - TARGET pybind11::module - APPEND - PROPERTY - INTERFACE_LINK_LIBRARIES pybind11::python_link_helper - "$<$,$>:$>") - -if(PYTHON_VERSION VERSION_LESS 3) - set_property( - TARGET pybind11::pybind11 - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES pybind11::python2_no_register) -endif() - -set_property( - TARGET pybind11::embed - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11 $) - -function(pybind11_extension name) - # The prefix and extension are provided by FindPythonLibsNew.cmake - set_target_properties(${name} PROPERTIES PREFIX "${PYTHON_MODULE_PREFIX}" - SUFFIX "${PYTHON_MODULE_EXTENSION}") -endfunction() - -# Build a Python extension module: -# pybind11_add_module( [MODULE | SHARED] [EXCLUDE_FROM_ALL] -# [NO_EXTRAS] [THIN_LTO] source1 [source2 ...]) -# -function(pybind11_add_module target_name) - set(options MODULE SHARED EXCLUDE_FROM_ALL NO_EXTRAS SYSTEM THIN_LTO) - cmake_parse_arguments(ARG "${options}" "" "" ${ARGN}) - - if(ARG_MODULE AND ARG_SHARED) - message(FATAL_ERROR "Can't be both MODULE and SHARED") - elseif(ARG_SHARED) - set(lib_type SHARED) - else() - set(lib_type MODULE) - endif() - - if(ARG_EXCLUDE_FROM_ALL) - set(exclude_from_all EXCLUDE_FROM_ALL) - else() - set(exclude_from_all "") - endif() - - add_library(${target_name} ${lib_type} ${exclude_from_all} ${ARG_UNPARSED_ARGUMENTS}) - - target_link_libraries(${target_name} PRIVATE pybind11::module) - - if(ARG_SYSTEM) - message( - STATUS - "Warning: this does not have an effect - use NO_SYSTEM_FROM_IMPORTED if using imported targets" - ) - endif() - - pybind11_extension(${target_name}) - - # -fvisibility=hidden is required to allow multiple modules compiled against - # different pybind versions to work properly, and for some features (e.g. - # py::module_local). We force it on everything inside the `pybind11` - # namespace; also turning it on for a pybind module compilation here avoids - # potential warnings or issues from having mixed hidden/non-hidden types. 
- set_target_properties(${target_name} PROPERTIES CXX_VISIBILITY_PRESET "hidden" - CUDA_VISIBILITY_PRESET "hidden") - - if(ARG_NO_EXTRAS) - return() - endif() - - if(NOT DEFINED CMAKE_INTERPROCEDURAL_OPTIMIZATION) - if(ARG_THIN_LTO) - target_link_libraries(${target_name} PRIVATE pybind11::thin_lto) - else() - target_link_libraries(${target_name} PRIVATE pybind11::lto) - endif() - endif() - - if(NOT MSVC AND NOT ${CMAKE_BUILD_TYPE} MATCHES Debug|RelWithDebInfo) - pybind11_strip(${target_name}) - endif() - - if(MSVC) - target_link_libraries(${target_name} PRIVATE pybind11::windows_extras) - endif() - -endfunction() diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/net_s3fd.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/net_s3fd.py deleted file mode 100644 index fc64313c277ab594d0257585c70f147606693452..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/face_detection/detection/sfd/net_s3fd.py +++ /dev/null @@ -1,129 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class L2Norm(nn.Module): - def __init__(self, n_channels, scale=1.0): - super(L2Norm, self).__init__() - self.n_channels = n_channels - self.scale = scale - self.eps = 1e-10 - self.weight = nn.Parameter(torch.Tensor(self.n_channels)) - self.weight.data *= 0.0 - self.weight.data += self.scale - - def forward(self, x): - norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps - x = x / norm * self.weight.view(1, -1, 1, 1) - return x - - -class s3fd(nn.Module): - def __init__(self): - super(s3fd, self).__init__() - self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1) - self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1) - - self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1) - self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1) - - self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1) - self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1) - self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1) - - self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1) - self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1) - self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1) - - self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1) - self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1) - self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1) - - self.fc6 = nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=3) - self.fc7 = nn.Conv2d(1024, 1024, kernel_size=1, stride=1, padding=0) - - self.conv6_1 = nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0) - self.conv6_2 = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1) - - self.conv7_1 = nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0) - self.conv7_2 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1) - - self.conv3_3_norm = L2Norm(256, scale=10) - self.conv4_3_norm = L2Norm(512, scale=8) - self.conv5_3_norm = L2Norm(512, scale=5) - - self.conv3_3_norm_mbox_conf = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1) - self.conv3_3_norm_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1) - self.conv4_3_norm_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1) - self.conv4_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1) - self.conv5_3_norm_mbox_conf = nn.Conv2d(512, 2, 
kernel_size=3, stride=1, padding=1) - self.conv5_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1) - - self.fc7_mbox_conf = nn.Conv2d(1024, 2, kernel_size=3, stride=1, padding=1) - self.fc7_mbox_loc = nn.Conv2d(1024, 4, kernel_size=3, stride=1, padding=1) - self.conv6_2_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1) - self.conv6_2_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1) - self.conv7_2_mbox_conf = nn.Conv2d(256, 2, kernel_size=3, stride=1, padding=1) - self.conv7_2_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - h = F.relu(self.conv1_1(x)) - h = F.relu(self.conv1_2(h)) - h = F.max_pool2d(h, 2, 2) - - h = F.relu(self.conv2_1(h)) - h = F.relu(self.conv2_2(h)) - h = F.max_pool2d(h, 2, 2) - - h = F.relu(self.conv3_1(h)) - h = F.relu(self.conv3_2(h)) - h = F.relu(self.conv3_3(h)) - f3_3 = h - h = F.max_pool2d(h, 2, 2) - - h = F.relu(self.conv4_1(h)) - h = F.relu(self.conv4_2(h)) - h = F.relu(self.conv4_3(h)) - f4_3 = h - h = F.max_pool2d(h, 2, 2) - - h = F.relu(self.conv5_1(h)) - h = F.relu(self.conv5_2(h)) - h = F.relu(self.conv5_3(h)) - f5_3 = h - h = F.max_pool2d(h, 2, 2) - - h = F.relu(self.fc6(h)) - h = F.relu(self.fc7(h)) - ffc7 = h - h = F.relu(self.conv6_1(h)) - h = F.relu(self.conv6_2(h)) - f6_2 = h - h = F.relu(self.conv7_1(h)) - h = F.relu(self.conv7_2(h)) - f7_2 = h - - f3_3 = self.conv3_3_norm(f3_3) - f4_3 = self.conv4_3_norm(f4_3) - f5_3 = self.conv5_3_norm(f5_3) - - cls1 = self.conv3_3_norm_mbox_conf(f3_3) - reg1 = self.conv3_3_norm_mbox_loc(f3_3) - cls2 = self.conv4_3_norm_mbox_conf(f4_3) - reg2 = self.conv4_3_norm_mbox_loc(f4_3) - cls3 = self.conv5_3_norm_mbox_conf(f5_3) - reg3 = self.conv5_3_norm_mbox_loc(f5_3) - cls4 = self.fc7_mbox_conf(ffc7) - reg4 = self.fc7_mbox_loc(ffc7) - cls5 = self.conv6_2_mbox_conf(f6_2) - reg5 = self.conv6_2_mbox_loc(f6_2) - cls6 = self.conv7_2_mbox_conf(f7_2) - reg6 = self.conv7_2_mbox_loc(f7_2) - - # max-out background label - chunk = torch.chunk(cls1, 4, 1) - bmax = torch.max(torch.max(chunk[0], chunk[1]), chunk[2]) - cls1 = torch.cat([bmax, chunk[3]], dim=1) - - return [cls1, reg1, cls2, reg2, cls3, reg3, cls4, reg4, cls5, reg5, cls6, reg6] diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Detection/detect_all_dlib.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Detection/detect_all_dlib.py deleted file mode 100644 index 081b4c185e75f949dc6e2cf9ce55db78244452b6..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Detection/detect_all_dlib.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
- -import torch -import numpy as np -import skimage.io as io - -# from FaceSDK.face_sdk import FaceDetection -# from face_sdk import FaceDetection -import matplotlib.pyplot as plt -from matplotlib.patches import Rectangle -from skimage.transform import SimilarityTransform -from skimage.transform import warp -from PIL import Image -import torch.nn.functional as F -import torchvision as tv -import torchvision.utils as vutils -import time -import cv2 -import os -from skimage import img_as_ubyte -import json -import argparse -import dlib - - -def _standard_face_pts(): - pts = ( - np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) / 256.0 - - 1.0 - ) - - return np.reshape(pts, (5, 2)) - - -def _origin_face_pts(): - pts = np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) - - return np.reshape(pts, (5, 2)) - - -def get_landmark(face_landmarks, id): - part = face_landmarks.part(id) - x = part.x - y = part.y - - return (x, y) - - -def search(face_landmarks): - - x1, y1 = get_landmark(face_landmarks, 36) - x2, y2 = get_landmark(face_landmarks, 39) - x3, y3 = get_landmark(face_landmarks, 42) - x4, y4 = get_landmark(face_landmarks, 45) - - x_nose, y_nose = get_landmark(face_landmarks, 30) - - x_left_mouth, y_left_mouth = get_landmark(face_landmarks, 48) - x_right_mouth, y_right_mouth = get_landmark(face_landmarks, 54) - - x_left_eye = int((x1 + x2) / 2) - y_left_eye = int((y1 + y2) / 2) - x_right_eye = int((x3 + x4) / 2) - y_right_eye = int((y3 + y4) / 2) - - results = np.array( - [ - [x_left_eye, y_left_eye], - [x_right_eye, y_right_eye], - [x_nose, y_nose], - [x_left_mouth, y_left_mouth], - [x_right_mouth, y_right_mouth], - ] - ) - - return results - - -def compute_transformation_matrix(img, landmark, normalize, target_face_scale=1.0): - - std_pts = _standard_face_pts() # [-1,1] - target_pts = (std_pts * target_face_scale + 1) / 2 * 256.0 - - # print(target_pts) - - h, w, c = img.shape - if normalize == True: - landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0 - landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0 - - # print(landmark) - - affine = SimilarityTransform() - - affine.estimate(target_pts, landmark) - - return affine.params - - -def show_detection(image, box, landmark): - plt.imshow(image) - print(box[2] - box[0]) - plt.gca().add_patch( - Rectangle( - (box[1], box[0]), box[2] - box[0], box[3] - box[1], linewidth=1, edgecolor="r", facecolor="none" - ) - ) - plt.scatter(landmark[0][0], landmark[0][1]) - plt.scatter(landmark[1][0], landmark[1][1]) - plt.scatter(landmark[2][0], landmark[2][1]) - plt.scatter(landmark[3][0], landmark[3][1]) - plt.scatter(landmark[4][0], landmark[4][1]) - plt.show() - - -def affine2theta(affine, input_w, input_h, target_w, target_h): - # param = np.linalg.inv(affine) - param = affine - theta = np.zeros([2, 3]) - theta[0, 0] = param[0, 0] * input_h / target_h - theta[0, 1] = param[0, 1] * input_w / target_h - theta[0, 2] = (2 * param[0, 2] + param[0, 0] * input_h + param[0, 1] * input_w) / target_h - 1 - theta[1, 0] = param[1, 0] * input_h / target_w - theta[1, 1] = param[1, 1] * input_w / target_w - theta[1, 2] = (2 * param[1, 2] + param[1, 0] * input_h + param[1, 1] * input_w) / target_w - 1 - return theta - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument("--url", type=str, default="/home/jingliao/ziyuwan/celebrities", help="input") - parser.add_argument( - "--save_url", type=str, default="/home/jingliao/ziyuwan/celebrities_detected_face_reid", 
help="output" - ) - opts = parser.parse_args() - - url = opts.url - save_url = opts.save_url - - ### If the origin url is None, then we don't need to reid the origin image - - os.makedirs(url, exist_ok=True) - os.makedirs(save_url, exist_ok=True) - - face_detector = dlib.get_frontal_face_detector() - landmark_locator = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") - - count = 0 - - map_id = {} - for x in os.listdir(url): - img_url = os.path.join(url, x) - pil_img = Image.open(img_url).convert("RGB") - - image = np.array(pil_img) - - start = time.time() - faces = face_detector(image) - done = time.time() - - if len(faces) == 0: - print("Warning: There is no face in %s" % (x)) - continue - - print(len(faces)) - - if len(faces) > 0: - for face_id in range(len(faces)): - current_face = faces[face_id] - face_landmarks = landmark_locator(image, current_face) - current_fl = search(face_landmarks) - - affine = compute_transformation_matrix(image, current_fl, False, target_face_scale=1.3) - aligned_face = warp(image, affine, output_shape=(256, 256, 3)) - img_name = x[:-4] + "_" + str(face_id + 1) - io.imsave(os.path.join(save_url, img_name + ".png"), img_as_ubyte(aligned_face)) - - count += 1 - - if count % 1000 == 0: - print("%d have finished ..." % (count)) - diff --git a/spaces/marrocovin/OPENAI_KEY/src/types/custom.d.ts b/spaces/marrocovin/OPENAI_KEY/src/types/custom.d.ts deleted file mode 100644 index c29288f02b084e67f1179853e776397ef2eb518e..0000000000000000000000000000000000000000 --- a/spaces/marrocovin/OPENAI_KEY/src/types/custom.d.ts +++ /dev/null @@ -1,10 +0,0 @@ -import { Express } from "express-serve-static-core"; -import { Key } from "../keys"; - -declare global { - namespace Express { - interface Request { - key?: Key; - } - } -} diff --git a/spaces/masakhane/dialogue-chat/README.md b/spaces/masakhane/dialogue-chat/README.md deleted file mode 100644 index 40d20bc4e8338001c474fe82785206c00daf0843..0000000000000000000000000000000000000000 --- a/spaces/masakhane/dialogue-chat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat with Masakhane Dialogue Models -emoji: 🌍 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: other -duplicated_from: huggingface-projects/llama-2-13b-chat ---- - -# Chat with Masakhane Dialogue Models diff --git a/spaces/mateuseap/magic-vocals/i18n/locale_diff.py b/spaces/mateuseap/magic-vocals/i18n/locale_diff.py deleted file mode 100644 index 257277965e0866a86d0361863a8f1b408c4f71ab..0000000000000000000000000000000000000000 --- a/spaces/mateuseap/magic-vocals/i18n/locale_diff.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import os -from collections import OrderedDict - -# Define the standard file name -standard_file = "zh_CN.json" - -# Find all JSON files in the directory -dir_path = "./" -languages = [ - f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file -] - -# Load the standard file -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) - -# Loop through each language file -for lang_file in languages: - # Load the language file - with open(lang_file, "r", encoding="utf-8") as f: - lang_data = json.load(f, object_pairs_hook=OrderedDict) - - # Find the difference between the language file and the standard file - diff = set(standard_data.keys()) - set(lang_data.keys()) - - miss = set(lang_data.keys()) - set(standard_data.keys()) - - # Add any missing keys to the language file - for key in 
diff: - lang_data[key] = key - - # Del any extra keys to the language file - for key in miss: - del lang_data[key] - - # Sort the keys of the language file to match the order of the standard file - lang_data = OrderedDict( - sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0])) - ) - - # Save the updated language file - with open(lang_file, "w", encoding="utf-8") as f: - json.dump(lang_data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/mayordp/DeepFakeAI/DeepFakeAI/__init__.py b/spaces/mayordp/DeepFakeAI/DeepFakeAI/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mehradans92/decode-elm/test/__init__.py b/spaces/mehradans92/decode-elm/test/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/merve/anonymization/public/third_party/simple-statistics.min.js b/spaces/merve/anonymization/public/third_party/simple-statistics.min.js deleted file mode 100644 index 9191046b7dc959d771a904875817c2b9c26ff0e5..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/third_party/simple-statistics.min.js +++ /dev/null @@ -1,3 +0,0 @@ -// https://github.com/simple-statistics/simple-statistics Copyright (c) 2014, Tom MacWright - -!function(t,r){"object"==typeof exports&&"undefined"!=typeof module?r(exports):"function"==typeof define&&define.amd?define(["exports"],r):r(t.ss={})}(this,function(t){"use strict";function r(t){if(0===t.length)return 0;for(var r,n=t[0],e=0,a=1;a=Math.abs(t[a])?e+=n-r+t[a]:e+=t[a]-r+n,n=r;return n+e}function g(t){if(0===t.length)throw new Error("mean requires at least one data point");return r(t)/t.length}function n(t,r){var n,e,a=g(t),o=0;if(2===r)for(e=0;er&&(r=t[n]);return r}function i(t,r){var n=t.length*r;if(0===t.length)throw new Error("quantile requires at least one data point.");if(r<0||1f&&p(t,n,e);sf;)l--}t[n]===f?p(t,n,l):p(t,++l,e),l<=r&&(n=l+1),r<=l&&(e=l-1)}}function p(t,r,n){var e=t[r];t[r]=t[n],t[n]=e}function s(t,r){var n=t.slice();if(Array.isArray(r)){!function(t,r){for(var n=[0],e=0;et[t.length-1])return 1;var n=function(t,r){var n=0,e=0,a=t.length;for(;e>>1]?a=n:e=-~n;return e}(t,r);if(t[n]!==r)return n/t.length;n++;var e=function(t,r){var n=0,e=0,a=t.length;for(;e=t[n=e+a>>>1]?e=-~n:a=n;return e}(t,r);if(e===n)return n/t.length;var a=e-n+1;return a*(e+n)/2/a/t.length}function m(t){var r=s(t,.75),n=s(t,.25);if("number"==typeof r&&"number"==typeof n)return r-n}function d(t){return+s(t,.5)}function b(t){for(var r=d(t),n=[],e=0;e=e[n][u]);--g)(s=x(h,u,o,i)+e[n-1][h-1])n&&(n=t[e]),t[e]t.length)throw new Error("cannot generate more classes than there are data values");var n=f(t);if(1===y(n))return[n];var e=S(r,n.length),a=S(r,n.length);!function(t,r,n){for(var e,a=r[0].length,o=t[Math.floor(a/2)],i=[],u=[],h=0;h=Math.abs(a)&&(c+=1);else if("greater"===n)for(h=0;h<=e;h++)o[h]>=a&&(c+=1);else for(h=0;h<=e;h++)o[h]<=a&&(c+=1);return c/e},t.bisect=function(t,r,n,e,a){if("function"!=typeof t)throw new TypeError("func must be a function");for(var o=0;o d.x) - .attr('y', d => d.y) - .attr('width', d => d.width) - .attr('height', d => d.height) - .attr('xlink:href', d => d.path) - .attr('alt', d => d.alt) - - - var buttonHeight = 35 - var buttonWidth = 130 - - var buttonSel = c.svg.appendMany('g.photo-button', data) - .translate((d,i) => [(i * 170) + 100, 0]) - .at({ - // class: "dropdown" - }) - .on('click', 
function(d, i){ - photoIndex = i - setActiveImage() - timer.stop(); - }) - - buttonSel.append('rect') - .at({ - height: buttonHeight, - width: buttonWidth, - // fill: '#fff' - }) - - buttonSel.append('text') - .at({ - textAnchor: 'middle', - // dominantBaseline: 'central', - dy: '.33em', - x: buttonWidth/2, - y: buttonHeight/2, - class: "monospace" - }) - .text((d,i) => 'ground truth ' + (i + 1)) - - // buttonSel.classed('dropdown', true); - - if (window.__photoPersonTimer) window.__photoPersonTimer.stop() - var timer = window.__photoPersonTimer = d3.interval(() => { - photoIndex = (photoIndex + 1) % data.length; - setActiveImage() - }, 2000) - - function setActiveImage(i){ - photoSel.st({opacity: (d, i) => i == photoIndex ? 1 : 0 }) - buttonSel.classed('is-active-button', (d, i) => i == photoIndex) - } - setActiveImage() -} - -createPhotoScroller(); - - - - diff --git a/spaces/merve/dataset-worldviews/public/measuring-fairness/slides.js b/spaces/merve/dataset-worldviews/public/measuring-fairness/slides.js deleted file mode 100644 index a66a04c7c483fee37424c6e9182e565a673a7aca..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/measuring-fairness/slides.js +++ /dev/null @@ -1,102 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -window.makeSlides = function(){ - var slides = [ - { - textFill: '#aaa', - textStroke: 0, - rectFill: d => d.isSick ? lcolors.sick : lcolors.well, - rectOpacity: d => 0, - threshold: .8, - fpAxisOpacity: 0, - sexAxisOpacity: 0, - brAxisOpacity: 0, - truthAxisOpacity: 0, - mlAxisOpacity: 0, - pos: 'all', - botAxisY: c.width + 80, - }, - - { - textFill: d => d.isSick ? colors.sick : colors.well, - truthAxisOpacity: 1, - }, - - { - rectOpacity: d => 1, - mlAxisOpacity: 1, - - }, - - { - rectFill: d => d.grade > gs.curSlide.threshold ? lcolors.sick : lcolors.well, - textStroke: d => d.grade > gs.curSlide.threshold == d.isSick ? 
0 : .6, - fpAxisOpacity: 1, - }, - - { - threshold: .61, - animateThreshold: true, - }, - - { - threshold: .89, - animateThreshold: true, - }, - - { - pos: 'sex', - fpAxisOpacity: 0, - sexAxisOpacity: 1, - threshold: .7508, - animateThreshold: false, - botAxisY: c.width + 150, - - }, - - { - brAxisOpacity: 1, - sexAxisOpacity: 0, - - }, - - { - - } - - ] - - var keys = [] - slides.forEach(d => keys = keys.concat(d3.keys(d))) - _.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) - }) - - return slides -} - - - -if (window.init) window.init() diff --git a/spaces/mfidabel/controlnet-segment-anything/README.md b/spaces/mfidabel/controlnet-segment-anything/README.md deleted file mode 100644 index e491c5a14fa47a7e3182e751f25e4e3438ded024..0000000000000000000000000000000000000000 --- a/spaces/mfidabel/controlnet-segment-anything/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Controlnet Segment Anything -emoji: 😻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: mit -tags: -- jax-diffusers-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/microsoft-cognitive-service/mm-react/Dockerfile b/spaces/microsoft-cognitive-service/mm-react/Dockerfile deleted file mode 100644 index 482cefd0e80bbdada1d0d3ef2a17d729be144b84..0000000000000000000000000000000000000000 --- a/spaces/microsoft-cognitive-service/mm-react/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM python:3.10.9 - -WORKDIR /src - -COPY ./MM-REACT /src/MM-REACT - -COPY ./requirements.txt /src/requirements.txt - -COPY ./langchain-0.0.94-py3-none-any.whl /src/langchain-0.0.94-py3-none-any.whl - -RUN pip install --no-cache-dir /src/langchain-0.0.94-py3-none-any.whl - -RUN pip install --no-cache-dir --upgrade -r /src/requirements.txt - -WORKDIR /src/MM-REACT - - -CMD ["python", "app.py", "--port", "7860", "--openAIModel", "azureGPT4", "--noIntermediateConv"] \ No newline at end of file diff --git a/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/fid_evaluator.py b/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/fid_evaluator.py deleted file mode 100644 index 627910d0ddc0f81f55c436120ce383837927e100..0000000000000000000000000000000000000000 --- a/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/fid_evaluator.py +++ /dev/null @@ -1,58 +0,0 @@ -import numpy as np -import torch - -from ..get_model import get_motion_model -from .base_evaluator import BaseEvaluator -from ..utils import ( - calculate_activation_statistics, - calculate_frechet_distance) - - -class FIDEvaluator(BaseEvaluator): - - def __init__(self, - data_len=0, - motion_encoder_name=None, - motion_encoder_path=None, - batch_size=None, - drop_last=False, - replication_times=1, - replication_reduction='statistics', - **kwargs): - super().__init__( - replication_times=replication_times, - replication_reduction=replication_reduction, - batch_size=batch_size, - drop_last=drop_last, - eval_begin_idx=0, - eval_end_idx=data_len - ) - self.append_indexes = None - self.motion_encoder = get_motion_model(motion_encoder_name, motion_encoder_path) - self.model_list = [self.motion_encoder] - - def single_evaluate(self, results): - results = self.prepare_results(results) - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - pred_motion = results['pred_motion'] - - 
pred_motion_length = results['pred_motion_length'] - pred_motion_mask = results['pred_motion_mask'] - motion = results['motion'] - motion_length = results['motion_length'] - motion_mask = results['motion_mask'] - self.motion_encoder.to(device) - self.motion_encoder.eval() - with torch.no_grad(): - pred_motion_emb = self.motion_encode(pred_motion, pred_motion_length, pred_motion_mask, device).cpu().detach().numpy() - gt_motion_emb = self.motion_encode(motion, motion_length, motion_mask, device).cpu().detach().numpy() - gt_mu, gt_cov = calculate_activation_statistics(gt_motion_emb) - pred_mu, pred_cov = calculate_activation_statistics(pred_motion_emb) - fid = calculate_frechet_distance(gt_mu, gt_cov, pred_mu, pred_cov) - return fid - - def parse_values(self, values): - metrics = {} - metrics['FID (mean)'] = values[0] - metrics['FID (conf)'] = values[1] - return metrics diff --git a/spaces/mishig/embeddings-similarity/README.md b/spaces/mishig/embeddings-similarity/README.md deleted file mode 100644 index 0d78629fe7bf0af6c3ab242db6b0d7837db88695..0000000000000000000000000000000000000000 --- a/spaces/mishig/embeddings-similarity/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Embeddings Similarity -emoji: 📚 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mithril-security/blind_chat/src/lib/buildPrompt.ts b/spaces/mithril-security/blind_chat/src/lib/buildPrompt.ts deleted file mode 100644 index ef64e9870e2e5726c0729eaced5d8483dfce8bd7..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/buildPrompt.ts +++ /dev/null @@ -1,34 +0,0 @@ -import type { BackendModel } from "./server/models"; -import type { Message } from "./types/Message"; -import { collections } from "$lib/server/database"; -import { authCondition } from "./server/auth"; -/** - * Convert [{user: "assistant", content: "hi"}, {user: "user", content: "hello"}] to: - * - * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|> - */ - -interface buildPromptOptions { - messages: Pick[]; - model: BackendModel; - locals?: App.Locals; - webSearchId?: string; - preprompt?: string; -} - -export async function buildPrompt({ - messages, - model, - locals, - webSearchId, - preprompt, -}: buildPromptOptions): Promise { - return ( - model - .chatPromptRender({ messages, preprompt }) - // Not super precise, but it's truncated in the model's backend anyway - .split(" ") - .slice(-(model.parameters?.truncate ?? 
0)) - .join(" ") - ); -} diff --git a/spaces/ml-energy/leaderboard/scripts/compute_system_metrics.py b/spaces/ml-energy/leaderboard/scripts/compute_system_metrics.py deleted file mode 100644 index edb59b39f8873a789c5e379c1ffbc1350b374f54..0000000000000000000000000000000000000000 --- a/spaces/ml-energy/leaderboard/scripts/compute_system_metrics.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import csv -from glob import glob - -import tyro -import pandas as pd - - -def main(data_dir: str, out_file: str) -> None: - """Compute metrics for all models in the given directory.""" - model_names = os.listdir(data_dir) - print(f"{model_names=}") - - if dirname := os.path.dirname(out_file): - os.makedirs(dirname, exist_ok=True) - out_csv = csv.writer(open(out_file, "w", newline="")) - metrics = ["throughput", "response_length", "latency", "energy"] - out_csv.writerow(["model", "batch_size"] + metrics) - - for model_name in model_names: - for benchmark_file in glob(f"{data_dir}/{model_name}/benchmark_batch_*.json"): - batch_size = int(benchmark_file.split("_")[-1][:-5]) - df = pd.read_json(benchmark_file) - out_csv.writerow( - [model_name.replace("--", "/"), str(batch_size)] + df[metrics].mean().to_list(), - ) - - -if __name__ == "__main__": - tyro.cli(main) diff --git a/spaces/mshukor/UnIVAL/models/taming/util.py b/spaces/mshukor/UnIVAL/models/taming/util.py deleted file mode 100644 index 7443b29e3d223a3ad396808f9717fb9a11c7507c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/taming/util.py +++ /dev/null @@ -1,172 +0,0 @@ -import os, hashlib -import requests -from tqdm import tqdm -import importlib - -URL_MAP = { - "vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1" -} - -CKPT_MAP = { - "vgg_lpips": "vgg.pth" -} - -MD5_MAP = { - "vgg_lpips": "d507d7349b931f0638a25a48a722f98a" -} - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def instantiate_from_config(config): - if not "target" in config: - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def download(url, local_path, chunk_size=1024): - os.makedirs(os.path.split(local_path)[0], exist_ok=True) - with requests.get(url, stream=True) as r: - total_size = int(r.headers.get("content-length", 0)) - with tqdm(total=total_size, unit="B", unit_scale=True) as pbar: - with open(local_path, "wb") as f: - for data in r.iter_content(chunk_size=chunk_size): - if data: - f.write(data) - pbar.update(chunk_size) - - -def md5_hash(path): - with open(path, "rb") as f: - content = f.read() - return hashlib.md5(content).hexdigest() - - -def get_ckpt_path(name, root, check=False): - assert name in URL_MAP - path = os.path.join(root, CKPT_MAP[name]) - if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]): - print("Downloading {} model from {} to {}".format(name, URL_MAP[name], path)) - download(URL_MAP[name], path) - md5 = md5_hash(path) - assert md5 == MD5_MAP[name], md5 - return path - - -class KeyNotFoundError(Exception): - def __init__(self, cause, keys=None, visited=None): - self.cause = cause - self.keys = keys - self.visited = visited - messages = list() - if keys is not None: - messages.append("Key not found: {}".format(keys)) - if visited is not None: - messages.append("Visited: {}".format(visited)) - 
messages.append("Cause:\n{}".format(cause)) - message = "\n".join(messages) - super().__init__(message) - - -def retrieve( - list_or_dict, key, splitval="/", default=None, expand=True, pass_success=False -): - """Given a nested list or dict return the desired value at key expanding - callable nodes if necessary and :attr:`expand` is ``True``. The expansion - is done in-place. - - Parameters - ---------- - list_or_dict : list or dict - Possibly nested list or dictionary. - key : str - key/to/value, path like string describing all keys necessary to - consider to get to the desired value. List indices can also be - passed here. - splitval : str - String that defines the delimiter between keys of the - different depth levels in `key`. - default : obj - Value returned if :attr:`key` is not found. - expand : bool - Whether to expand callable nodes on the path or not. - - Returns - ------- - The desired value or if :attr:`default` is not ``None`` and the - :attr:`key` is not found returns ``default``. - - Raises - ------ - Exception if ``key`` not in ``list_or_dict`` and :attr:`default` is - ``None``. - """ - - keys = key.split(splitval) - - success = True - try: - visited = [] - parent = None - last_key = None - for key in keys: - if callable(list_or_dict): - if not expand: - raise KeyNotFoundError( - ValueError( - "Trying to get past callable node with expand=False." - ), - keys=keys, - visited=visited, - ) - list_or_dict = list_or_dict() - parent[last_key] = list_or_dict - - last_key = key - parent = list_or_dict - - try: - if isinstance(list_or_dict, dict): - list_or_dict = list_or_dict[key] - else: - list_or_dict = list_or_dict[int(key)] - except (KeyError, IndexError, ValueError) as e: - raise KeyNotFoundError(e, keys=keys, visited=visited) - - visited += [key] - # final expansion of retrieved value - if expand and callable(list_or_dict): - list_or_dict = list_or_dict() - parent[last_key] = list_or_dict - except KeyNotFoundError as e: - if default is None: - raise e - else: - list_or_dict = default - success = False - - if not pass_success: - return list_or_dict - else: - return list_or_dict, success - - -if __name__ == "__main__": - config = {"keya": "a", - "keyb": "b", - "keyc": - {"cc1": 1, - "cc2": 2, - } - } - from omegaconf import OmegaConf - - config = OmegaConf.create(config) - print(config) - retrieve(config, "keya") diff --git a/spaces/muellerzr/accelerate-presentation/index.html b/spaces/muellerzr/accelerate-presentation/index.html deleted file mode 100644 index 4d0b8f1fd1cb44424933020397dbd4c5819f91db..0000000000000000000000000000000000000000 --- a/spaces/muellerzr/accelerate-presentation/index.html +++ /dev/null @@ -1,1089 +0,0 @@ - - - - - - - - - - - - - Accelerate, Three Powerful Sublibraries for PyTorch - - - - - - - - - - - - - - - - - - -
        -
        - -
        -

        Accelerate, Three Powerful Sublibraries for PyTorch

        - -
        -
        -
        -Zachary Mueller -
        -
        -
        - -
        -
        -

        Who am I?

        -
• Zachary Mueller
• Deep Learning Software Engineer at 🤗
• API design geek
        -
        -
        -

        What is 🤗 Accelerate?

        -
        -
        -
        -

        -

        graph LR
        -    A{"🤗 Accelerate#32;"}
        -    A --> B["Launching<br>Interface#32;"]
        -    A --> C["Training Library#32;"]
        -    A --> D["Big Model<br>Inference#32;"]
        -
        -
        - -
        -

        -
        -
        -
        -
        -
        -
        -

        A Launching Interface

        -

        Can’t I just use python do_the_thing.py?

        -
        -
        -

        A Launching Interface

        -

        Launching scripts in different environments is complicated:

        -
• python script.py
• torchrun --nnodes=1 --nproc_per_node=2 script.py
• deepspeed --num_gpus=2 script.py
        -

        And more!

        -
        -
        -

        A Launching Interface

        -

        But it doesn’t have to be:

        -
        accelerate launch script.py
        -

A single command to launch with DeepSpeed, Fully Sharded Data Parallelism, across single and multiple CPUs and GPUs, and to train on TPUs too!

        -
        -
        -

        A Launching Interface

        -

        Generate a device-specific configuration through accelerate config

        - -
        -
        -

        A Launching Interface

        -

        Or don’t. accelerate config doesn’t have to be done!

        -
        torchrun --nnodes=1 --nproc_per_node=2 script.py
        -accelerate launch --multi_gpu --nproc_per_node=2 script.py
        -

        A quick default configuration can be made too:

        -
        accelerate config default
        -
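If you'd rather stay in Python (for example inside a notebook), accelerate also ships a small helper that writes the same default configuration file; a minimal sketch, assuming a recent accelerate release:

from accelerate.utils import write_basic_config

# Python equivalent of `accelerate config default`: writes a default config file
# to the standard Hugging Face cache location.
write_basic_config(mixed_precision="no")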
        -
        -

        A Launching Interface

        -

        With the notebook_launcher it’s also possible to launch code directly from your Jupyter environment too!

        -
        from accelerate import notebook_launcher
        -notebook_launcher(
        -    training_loop_function, 
        -    args, 
        -    num_processes=2
        -)
        -
        Launching training on 2 GPUs.
        -epoch 0: 88.12
        -epoch 1: 91.73
        -epoch 2: 92.58
        -epoch 3: 93.90
        -epoch 4: 94.71
        -
        -
        -
        -

        A Training Library

        -

        Okay, will accelerate launch make do_the_thing.py use all my GPUs magically?

        -
        -
        -

        A Training Library

        -
• We just showed that it's possible to use accelerate launch to launch a Python script in various distributed environments
• This does not mean that the script will just "use" that code and still run on the new compute efficiently.
• Training on different computes often means many lines of code changed for each specific compute.
• 🤗 accelerate solves this by ensuring the same code can be run on a CPU or GPU, multiples, and on TPUs!
        -
        -
        -

        A Training Library

        -
        for batch in dataloader:
        -    optimizer.zero_grad()
        -    inputs, targets = batch
        -    inputs = inputs.to(device)
        -    targets = targets.to(device)
        -    outputs = model(inputs)
        -    loss = loss_function(outputs, targets)
        -    loss.backward()
        -    optimizer.step()
        -    scheduler.step()
        -
        -
        -

        A Training Library

        -
        -
        -




        -
        # For alignment purposes
        -for batch in dataloader:
        -    optimizer.zero_grad()
        -    inputs, targets = batch
        -    inputs = inputs.to(device)
        -    targets = targets.to(device)
        -    outputs = model(inputs)
        -    loss = loss_function(outputs, targets)
        -    loss.backward()
        -    optimizer.step()
        -    scheduler.step()
        -
        -
        from accelerate import Accelerator
        -accelerator = Accelerator()
-dataloader, model, optimizer, scheduler = (
        -    accelerator.prepare(
        -        dataloader, model, optimizer, scheduler
        -    )
        -)
        -
        -for batch in dataloader:
        -    optimizer.zero_grad()
        -    inputs, targets = batch
        -    # inputs = inputs.to(device)
        -    # targets = targets.to(device)
        -    outputs = model(inputs)
        -    loss = loss_function(outputs, targets)
        -    accelerator.backward(loss) # loss.backward()
        -    optimizer.step()
        -    scheduler.step()
        -
        -
        -
        -
        -

        A Training Library

        -

        What all happened in Accelerator.prepare?

        -
        -
1. Accelerator looked at the configuration
2. The dataloader was converted into one that can dispatch each batch onto a separate GPU
3. The model was wrapped with the appropriate DDP wrapper from either torch.distributed or torch_xla
4. The optimizer and scheduler were both converted into an AcceleratedOptimizer and an AcceleratedScheduler, which know how to handle any distributed scenario (see the sketch just below)
        -
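To make those steps concrete, here is a minimal sketch (illustrative, not from the talk) showing that prepare hands back wrapped versions of whatever you pass in; the exact wrapper types depend on how the script was launched:

import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# prepare() returns the same objects, wrapped for the current setup
model, optimizer = accelerator.prepare(model, optimizer)
print(type(model))      # DistributedDataParallel when launched on multiple GPUs, else the plain module
print(type(optimizer))  # accelerate.optimizer.AcceleratedOptimizer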
        -
        -
        -

        A Training Library, Mixed Precision

        -

        🤗 accelerate also supports automatic mixed precision.

        -

Through a single flag to the Accelerator object, the mixed precision of your choosing (such as bf16 or fp16) will be applied automatically when accelerator.backward() is called:

        -
        from accelerate import Accelerator
        -accelerator = Accelerator(mixed_precision="fp16")
        -...
        -for batch in dataloader:
        -    optimizer.zero_grad()
        -    inputs, targets = batch
        -    outputs = model(inputs)
        -    loss = loss_function(outputs, targets)
        -    accelerator.backward(loss)
        -    optimizer.step()
        -    scheduler.step()
        -
        -
        -

        A Training Library, Gradient Accumulation

        -

Gradient accumulation in distributed setups often needs extra care to ensure gradients are synchronized only when they need to be and the backward pass stays computationally efficient.

        -

        🤗 accelerate can just easily handle this for you:

        -
        from accelerate import Accelerator
        -accelerator = Accelerator(gradient_accumulation_steps=4)
        -...
        -for batch in dataloader:
        -    with accelerator.accumulate(model):
        -        optimizer.zero_grad()
        -        inputs, targets = batch
        -        outputs = model(inputs)
        -        loss = loss_function(outputs, targets)
        -        accelerator.backward(loss)
        -        optimizer.step()
        -        scheduler.step()
        -
        -
        -

        A Training Library, Gradient Accumulation

        -
        ddp_model, dataloader = accelerator.prepare(model, dataloader)
        -
        -for index, batch in enumerate(dataloader):
        -    inputs, targets = batch
-    if ((index + 1) % 4 != 0) and (index != len(dataloader) - 1):
        -        # Gradients don't sync
        -        with accelerator.no_sync(model):
        -            outputs = ddp_model(inputs)
        -            loss = loss_func(outputs, targets)
        -            accelerator.backward(loss)
        -    else:
        -        # Gradients finally sync
        -        outputs = ddp_model(inputs)
-        loss = loss_func(outputs, targets)
        -        accelerator.backward(loss)
        -
        -
        -
        -

        Big Model Inference

        -

        Stable Diffusion taking the world by storm

        -
        -
        -

        Bigger Models == Higher Compute

        -

        As more large models were being released, Hugging Face quickly realized there must be a way to continue our decentralization of Machine Learning and have the day-to-day programmer be able to leverage these big models.

        -

        Born out of this effort by Sylvain Gugger:

        -

        🤗 Accelerate: Big Model Inference.

        -
        -
        -

        The Basic Premise

        -
        -
• In PyTorch, there exists the meta device (a short illustration follows below).

• It has a super small footprint, so huge models can be loaded quickly without pulling in their weights immediately.

• As an input gets passed through each layer, we can load and unload parts of the PyTorch model quickly so that only a small portion of the big model is loaded in at a single time.

• The end result? Stable Diffusion v1 can be run on < 800MB of vRAM.
        -
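As a quick illustration of the meta device itself (a sketch, not taken from the talk): tensors there carry shape and dtype but no storage, which is what makes the cheap model skeleton possible.

import torch

x = torch.empty(10_000, 10_000, device="meta")   # no real memory is allocated
print(x.shape, x.device)                          # torch.Size([10000, 10000]) meta
print(x.element_size() * x.nelement() / 1e9)      # ~0.4 GB it *would* occupy on a real device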
        -
        -
        -

        The Code

        -

        Generally you start with something like so:

        -
        import torch
        -
        -my_model = ModelClass(...)
        -state_dict = torch.load(checkpoint_file)
        -my_model.load_state_dict(state_dict)
        -

        But this has issues:

        -
1. The full version of the model is instantiated at my_model = ModelClass(...)
2. Another full copy of the weights is loaded into memory at state_dict = torch.load(checkpoint_file)
        -

If a 6 billion parameter model is being loaded, the instantiated model and the loaded state dict each hold roughly 24GB of weights, so about 48GB of vRAM is needed
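A quick back-of-the-envelope check of that number (plain arithmetic, assuming fp32 at 4 bytes per parameter):

params = 6_000_000_000      # 6 billion parameters
bytes_per_param = 4         # fp32
per_copy_gb = params * bytes_per_param / 1e9
print(per_copy_gb)          # 24.0 GB for one copy of the weights
print(2 * per_copy_gb)      # 48.0 GB with the model and the state dict both in memory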

        -
        -
        -

        Empty Model Weights

        -

        We can fix step 1 by loading in an empty model skeleton at first:

        -
        from accelerate import init_empty_weights
        -
        -with init_empty_weights():
        -    my_model = ModelClass(...)
        -state_dict = torch.load(checkpoint_file)
        -my_model.load_state_dict(state_dict)
        -
        -
        -
        -
        - -
        -

        This code will not run

        -
        -
        -

        It is likely that just calling my_model(x) will fail as not all tensor operations are supported on the meta device.
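A tiny sketch of that failure mode (illustrative only; the exact exception type can vary between PyTorch versions):

import torch
from accelerate import init_empty_weights

with init_empty_weights():
    my_model = torch.nn.Linear(4, 4)   # parameters live on the meta device

x = torch.randn(1, 4)
try:
    my_model(x)
except Exception as e:
    # typically an error about meta tensors having no data to compute with
    print(type(e).__name__, e)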

        -
        -
        -
        -
        -
        -

        Sharded Checkpoints - The Concept

        -

        The next step is to have “Sharded Checkpoints” saved for your model.

        -

Basically, smaller chunks of your model weights are stored so that they can be brought in at any particular time.

        -

This reduces the amount of memory step 2 takes up, since we can just load in a "chunk" of the model at a time, then swap it out for a new chunk through PyTorch hooks.
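For intuition, here is a rough sketch of what producing a sharded checkpoint can look like (illustrative only; the shard_state_dict helper, the file names, and the index format are assumptions, not the exact format any particular library writes):

import json
import os
import torch

def shard_state_dict(state_dict, save_dir, max_bytes=2_000_000_000):
    """Split a state dict into ~max_bytes chunks plus an index mapping each weight to its file."""
    os.makedirs(save_dir, exist_ok=True)
    shards, current, size = [], {}, 0
    for name, tensor in state_dict.items():
        tensor_bytes = tensor.numel() * tensor.element_size()
        if current and size + tensor_bytes > max_bytes:
            shards.append(current)
            current, size = {}, 0
        current[name] = tensor
        size += tensor_bytes
    if current:
        shards.append(current)

    weight_map = {}
    for i, shard in enumerate(shards):
        fname = f"shard_{i:05d}.bin"
        torch.save(shard, os.path.join(save_dir, fname))
        weight_map.update({name: fname for name in shard})
    with open(os.path.join(save_dir, "index.json"), "w") as f:
        json.dump({"weight_map": weight_map}, f)

# Usage sketch:
# shard_state_dict(my_model.state_dict(), "sharded-weights")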

        -
        -
        -

        Sharded Checkpoints - The Code

        -
        from accelerate import init_empty_weights, load_checkpoint_and_dispatch
        -
        -with init_empty_weights():
        -    my_model = ModelClass(...)
        -
        -my_model = load_checkpoint_and_dispatch(
        -    my_model, "sharded-weights", device_map="auto"
        -)
        -

        device_map="auto" will tell 🤗 Accelerate that it should determine where to put each layer of the model:

        -
1. Maximum space on the GPU(s)
2. Maximum space on the CPU(s)
3. Utilize disk space through memory-mapped tensors
        -
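Building on the priority order above, you can also cap how much memory each device is allowed to take when the map is inferred; a hedged sketch using accelerate's infer_auto_device_map (the layer sizes and memory limits are illustrative):

import torch
from accelerate import infer_auto_device_map, init_empty_weights

with init_empty_weights():
    my_model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.Linear(4096, 4096),
    )

device_map = infer_auto_device_map(
    my_model,
    max_memory={0: "1GiB", "cpu": "2GiB"},   # per-device budgets
)
print(device_map)   # maps each submodule to a device, spilling to "cpu"/"disk" as budgets fill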
        -
        -

        Big Model Inference Put Together

        -
        from accelerate import init_empty_weights, load_checkpoint_and_dispatch
        -
        -with init_empty_weights():
        -    my_model = ModelClass(...)
        -
        -my_model = load_checkpoint_and_dispatch(
        -    my_model, "sharded-weights", device_map="auto"
        -)
        -my_model.eval()
        -
        -for batch in dataloader:
        -    output = my_model(batch)
        -
        -
        -

        Is there an easier way?

        -

        The transformers library combined with the Hub makes all this code wrapping much easier for you with the pipeline

        -
        import torch
        -from transformers import pipeline
        -pipe = pipeline(
        -    task="text-generation",
        -    model="EleutherAI/gpt-j-6B",
        -    device_map="auto",
        -    torch_dtype=torch.float16
        -)
        -
        -text = pipe("This is some generated text, I think")
        -
        -
        -
        -

        What about Stable Diffusion?

        -

        A demo with diffusers & Weights and Biases

        -
        -
        -

        Some Handy Resources

        - - -
        - -
        -
        - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/train.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/train.py deleted file mode 100644 index be9ca8c6ef2a0cb9143ab6a0f4d91f571b691a95..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/train.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python3 - -import logging -import os -import sys -import traceback - -os.environ['OMP_NUM_THREADS'] = '1' -os.environ['OPENBLAS_NUM_THREADS'] = '1' -os.environ['MKL_NUM_THREADS'] = '1' -os.environ['VECLIB_MAXIMUM_THREADS'] = '1' -os.environ['NUMEXPR_NUM_THREADS'] = '1' - -import hydra -from omegaconf import OmegaConf -from pytorch_lightning import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning.loggers import TensorBoardLogger -from pytorch_lightning.plugins import DDPPlugin - -from saicinpainting.training.trainers import make_training_model -from saicinpainting.utils import register_debug_signal_handlers, handle_ddp_subprocess, handle_ddp_parent_process, \ - handle_deterministic_config - -LOGGER = logging.getLogger(__name__) - - -@handle_ddp_subprocess() -@hydra.main(config_path='../configs/training', config_name='tiny_test.yaml') -def main(config: OmegaConf): - try: - need_set_deterministic = handle_deterministic_config(config) - - register_debug_signal_handlers() # kill -10 will result in traceback dumped into log - - is_in_ddp_subprocess = handle_ddp_parent_process() - - config.visualizer.outdir = os.path.join(os.getcwd(), config.visualizer.outdir) - if not is_in_ddp_subprocess: - LOGGER.info(OmegaConf.to_yaml(config)) - OmegaConf.save(config, os.path.join(os.getcwd(), 'config.yaml')) - - checkpoints_dir = os.path.join(os.getcwd(), 'models') - os.makedirs(checkpoints_dir, exist_ok=True) - - # there is no need to suppress this logger in ddp, because it handles rank on its own - metrics_logger = TensorBoardLogger(config.location.tb_dir, name=os.path.basename(os.getcwd())) - metrics_logger.log_hyperparams(config) - - training_model = make_training_model(config) - - trainer_kwargs = OmegaConf.to_container(config.trainer.kwargs, resolve=True) - if need_set_deterministic: - trainer_kwargs['deterministic'] = True - - trainer = Trainer( - # there is no need to suppress checkpointing in ddp, because it handles rank on its own - callbacks=ModelCheckpoint(dirpath=checkpoints_dir, **config.trainer.checkpoint_kwargs), - logger=metrics_logger, - default_root_dir=os.getcwd(), - **trainer_kwargs - ) - trainer.fit(training_model) - except KeyboardInterrupt: - LOGGER.warning('Interrupted by user') - except Exception as ex: - LOGGER.critical(f'Training failed due to {ex}:\n{traceback.format_exc()}') - sys.exit(1) - - -if __name__ == '__main__': - main() diff --git a/spaces/napoles3d/st_parade/app.py b/spaces/napoles3d/st_parade/app.py deleted file mode 100644 index a232afae7d46422b5a265a8040ec555cd9955e94..0000000000000000000000000000000000000000 --- a/spaces/napoles3d/st_parade/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -#from streamlit_player import st_player -import streamlit.components.v1 as components -#from streamlit_tags import st_tags, st_tags_sidebar -import urllib.request -st.set_page_config(layout="wide") - -list1=[] -list2=[] -maxtags =6 - - - -#col1, col2, col3 = st.columns([6,1,2]) -#with col1: -# sel_url=st.selectbox('Select url', keywords) -# 
components.iframe(sel_url,height=800,scrolling=True) - - -#with col3: -# url_youtube=st.text_input('Enter youtube URL','https://www.youtube.com/watch?v=_daTfgc4u3k') -# st_player(url_youtube) - - -url = "https://raw.githubusercontent.com/napoles-uach/MundaneApps/main/links.txt" -file = urllib.request.urlopen(url) - -urls=[] -for line in file: - decoded_line = line.decode("utf-8") - urls.append(decoded_line) - -sel_url=st.selectbox('Select url', urls) -components.iframe(sel_url,height=800,scrolling=True) - -#col1, col2, col3 = st.columns([6,1,2]) -#with col1: -# sel_url=st.selectbox('Select url', urls) -# components.iframe(sel_url,height=800,scrolling=True) - -#with col3: -# url_youtube=st.text_input('Enter youtube URL','https://www.youtube.com/watch?v=_daTfgc4u3k') -# st_player(url_youtube) diff --git a/spaces/nateraw/dino-clips/dino/hubconf.py b/spaces/nateraw/dino-clips/dino/hubconf.py deleted file mode 100644 index ef1cdeaed426148bf9101ea625645a2f63ebb083..0000000000000000000000000000000000000000 --- a/spaces/nateraw/dino-clips/dino/hubconf.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import torch -from torchvision.models.resnet import resnet50 - -import vision_transformer as vits - -dependencies = ["torch", "torchvision"] - - -def dino_vits16(pretrained=True, **kwargs): - """ - ViT-Small/16x16 pre-trained with DINO. - Achieves 74.5% top-1 accuracy on ImageNet with k-NN classification. - """ - model = vits.__dict__["vit_small"](patch_size=16, num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_vits8(pretrained=True, **kwargs): - """ - ViT-Small/8x8 pre-trained with DINO. - Achieves 78.3% top-1 accuracy on ImageNet with k-NN classification. - """ - model = vits.__dict__["vit_small"](patch_size=8, num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_vitb16(pretrained=True, **kwargs): - """ - ViT-Base/16x16 pre-trained with DINO. - Achieves 76.1% top-1 accuracy on ImageNet with k-NN classification. - """ - model = vits.__dict__["vit_base"](patch_size=16, num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_vitb8(pretrained=True, **kwargs): - """ - ViT-Base/8x8 pre-trained with DINO. - Achieves 77.4% top-1 accuracy on ImageNet with k-NN classification. 
- """ - model = vits.__dict__["vit_base"](patch_size=8, num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_resnet50(pretrained=True, **kwargs): - """ - ResNet-50 pre-trained with DINO. - Achieves 75.3% top-1 accuracy on ImageNet linear evaluation benchmark (requires to train `fc`). - """ - model = resnet50(pretrained=False, **kwargs) - model.fc = torch.nn.Identity() - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/dino_resnet50_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=False) - return model - - -def dino_xcit_small_12_p16(pretrained=True, **kwargs): - """ - XCiT-Small-12/16 pre-trained with DINO. - """ - model = torch.hub.load('facebookresearch/xcit', "xcit_small_12_p16", num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_xcit_small_12_p16_pretrain/dino_xcit_small_12_p16_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_xcit_small_12_p8(pretrained=True, **kwargs): - """ - XCiT-Small-12/8 pre-trained with DINO. - """ - model = torch.hub.load('facebookresearch/xcit', "xcit_small_12_p8", num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_xcit_small_12_p8_pretrain/dino_xcit_small_12_p8_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_xcit_medium_24_p16(pretrained=True, **kwargs): - """ - XCiT-Medium-24/16 pre-trained with DINO. - """ - model = torch.hub.load('facebookresearch/xcit', "xcit_medium_24_p16", num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_xcit_medium_24_p16_pretrain/dino_xcit_medium_24_p16_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model - - -def dino_xcit_medium_24_p8(pretrained=True, **kwargs): - """ - XCiT-Medium-24/8 pre-trained with DINO. 
- """ - model = torch.hub.load('facebookresearch/xcit', "xcit_medium_24_p8", num_classes=0, **kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/dino/dino_xcit_medium_24_p8_pretrain/dino_xcit_medium_24_p8_pretrain.pth", - map_location="cpu", - ) - model.load_state_dict(state_dict, strict=True) - return model diff --git a/spaces/nateraw/lavila/app.py b/spaces/nateraw/lavila/app.py deleted file mode 100644 index 24c652a5be26dff0377acc4028011cc07c50ee70..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import sys -sys.path.insert(0, './') - -import decord -import numpy as np -import torch -import os - -from lavila.data.video_transforms import Permute -from lavila.data.datasets import get_frame_ids, video_loader_by_frames -from lavila.models.models import VCLM_OPENAI_TIMESFORMER_BASE_GPT2 -from lavila.models.tokenizer import MyGPT2Tokenizer -from collections import OrderedDict -import torch -import torchvision.transforms as transforms -import torchvision.transforms._transforms_video as transforms_video -import gradio as gr - -def get_frame_ids(start_frame, end_frame, num_segments=32, jitter=True): - seg_size = float(end_frame - start_frame - 1) / num_segments - seq = [] - for i in range(num_segments): - start = int(np.round(seg_size * i) + start_frame) - end = int(np.round(seg_size * (i + 1)) + start_frame) - end = min(end, end_frame) - if jitter: - frame_id = np.random.randint(low=start, high=(end + 1)) - else: - frame_id = (start + end) // 2 - seq.append(frame_id) - return seq - -def video_loader_by_frames(root, vid, frame_ids): - vr = decord.VideoReader(os.path.join(root, vid)) - try: - frames = vr.get_batch(frame_ids).asnumpy() - frames = [torch.tensor(frame, dtype=torch.float32) for frame in frames] - except (IndexError, decord.DECORDError) as error: - print(error) - print("Erroneous video: ", vid) - frames = [torch.zeros((240, 320, 3)) for _ in range(len(frame_ids))] - return torch.stack(frames, dim=0) - -def iter_clips(video_path, num_segments=4, stride_size=16): - # The video is represented by `num_seg=4` frames - vr = decord.VideoReader(video_path) - frame_sample_size = num_segments * stride_size - max_start_frame = len(vr) - frame_sample_size - curr_frame = 0 - fps = vr.get_avg_fps() - while curr_frame == 0 or curr_frame < max_start_frame: - stop_frame = min(curr_frame + frame_sample_size, len(vr)) - curr_sec, stop_sec = curr_frame / fps, stop_frame / fps - frame_ids = get_frame_ids(curr_frame, stop_frame, num_segments=num_segments, jitter=False) - frames = video_loader_by_frames('./', video_path, frame_ids) - yield curr_sec, stop_sec, frames - curr_frame += frame_sample_size - - -class Pipeline: - def __init__(self, path=""): - ckpt_path = os.path.join(path, 'vclm_openai_timesformer_base_gpt2_base.pt_ego4d.jobid_319630.ep_0002.md5sum_68a71f.pth') - ckpt = torch.load(ckpt_path, map_location='cpu') - state_dict = OrderedDict() - for k, v in ckpt['state_dict'].items(): - state_dict[k.replace('module.', '')] = v - - self.device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.model = VCLM_OPENAI_TIMESFORMER_BASE_GPT2( - text_use_cls_token=False, - project_embed_dim=256, - gated_xattn=True, - timesformer_gated_xattn=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - drop_path_rate=0. 
- ) - self.model.load_state_dict(state_dict, strict=True) - self.model.to(self.device) - self.model.eval() - - self.tokenizer = MyGPT2Tokenizer('gpt2', add_bos=True) - - crop_size = 224 - self.val_transform = transforms.Compose([ - Permute([3, 0, 1, 2]), - transforms.Resize(crop_size), - transforms.CenterCrop(crop_size), - transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305]) - ]) - - def decode_one(self, generated_ids, tokenizer): - # get the index of - if tokenizer.eos_token_id == tokenizer.bos_token_id: - if tokenizer.eos_token_id in generated_ids[1:].tolist(): - eos_id = generated_ids[1:].tolist().index(tokenizer.eos_token_id) + 1 - else: - eos_id = len(generated_ids.tolist()) - 1 - elif tokenizer.eos_token_id in generated_ids.tolist(): - eos_id = generated_ids.tolist().index(tokenizer.eos_token_id) - else: - eos_id = len(generated_ids.tolist()) - 1 - generated_text_str = tokenizer.tokenizer.decode(generated_ids[1:eos_id].tolist()) - return generated_text_str - - def __call__(self, video_path, temperature=0.7, top_p=0.95, max_text_length=120, num_return_sequences=5): - text = "" - MAX_ITERATIONS = 5 - with torch.autocast(self.device): - for clip_idx, (start, stop, frames) in enumerate(iter_clips(video_path)): - text_to_add = f"{'-'*30} Predictions From: {start:2.3f}-{stop:2.3f} seconds {'-'*30}\n" - print(text_to_add) - text += text_to_add - frames = self.val_transform(frames).unsqueeze(0) - if self.device == 'cuda': - frames = frames.to(self.device).half() - - with torch.no_grad(): - image_features = self.model.encode_image(frames) - generated_text_ids, ppls = self.model.generate( - image_features, - self.tokenizer, - target=None, # free-form generation - max_text_length=max_text_length, - top_k=None, - top_p=top_p, # nucleus sampling - num_return_sequences=num_return_sequences, # number of candidates: 10 - temperature=temperature, - early_stopping=True, - ) - for i in range(num_return_sequences): - generated_text_str = self.decode_one(generated_text_ids[i], self.tokenizer) - text_to_add = '\t{}: {}\n'.format(i, generated_text_str) - print(text_to_add) - text += text_to_add - - if (clip_idx+1) >= MAX_ITERATIONS: - return text - return text - - -pipeline = Pipeline() -def fn(video_path, temperature=0.7, top_p=0.95, max_text_length=120, num_return_sequences=5): - return pipeline(video_path, temperature, top_p, max_text_length, num_return_sequences) - -title = "LaViLa" -description = """LaViLa (**L**anguage **a**ugmented **Vi**deo **La**nguage Pretraining) is a new approach to learning video representations from Large Language Models (LLMs). We repurpose LLMs to be visually conditioned "Narrators", and use them to automatically generate video-language paired data. We use this data to then learn a video-langauge representation, outperforming prior work by large margins. \nGradio Demo for LaVila. To use it, simply upload your video, or click one of the examples to load them. Read more at the links below.""" -article = "

        Github Repo | Paper on arxiv

        visitor badge

        " - -interface = gr.Interface( - fn, - inputs=[ - gr.Video(label='video_path'), - gr.Slider(0.0, 1.0, 0.7, label='temperature'), - gr.Slider(0.0, 1.0, 0.95, label='top_p'), - ], - outputs='text', - examples=[['eating_spaghetti.mp4', 0.7, 0.95], ['assets/3c0dffd0-e38e-4643-bc48-d513943dc20b_012_014.mp4', 0.7, 0.95]], - title=title, - description=description, - article=article, -).launch() \ No newline at end of file diff --git a/spaces/neojex/LuxembourgishTextClassifier/app.py b/spaces/neojex/LuxembourgishTextClassifier/app.py deleted file mode 100644 index 07dcae4ca30610c8cde3d74b3727e12724032485..0000000000000000000000000000000000000000 --- a/spaces/neojex/LuxembourgishTextClassifier/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import torch -import torch.nn as nn -from transformers import BertModel, BertTokenizer - -# define the labels for the mutli-classification model -class_names = ['Negative', 'Neutral', 'Positive'] - -# Build the Sentiment Classifier class -class SentimentClassifier(nn.Module): - # Constructor class - def __init__(self, n_classes): - super(SentimentClassifier, self).__init__() - self.bert = BertModel.from_pretrained('lothritz/LuxemBERT') - self.drop = nn.Dropout(p=0.3) - self.out = nn.Linear(self.bert.config.hidden_size, n_classes) - - # Forward propagaion class - def forward(self, input_ids, attention_mask): - _, pooled_output = self.bert( - input_ids=input_ids, - attention_mask=attention_mask, - return_dict=False - ) - # Add a dropout layer - output = self.drop(pooled_output) - return self.out(output) -# load the CNN binary classification model -model = SentimentClassifier(len(class_names)) -model.load_state_dict(torch.load('./pytorch_model.bin', map_location=torch.device('cpu'))) -tokenizer = BertTokenizer.from_pretrained('./') - -def encode(text): - encoded_text = tokenizer.encode_plus( - text, - max_length=50, - add_special_tokens=True, - return_token_type_ids=False, - pad_to_max_length=True, - return_attention_mask=True, - return_tensors='pt', - ) - return encoded_text - -def classify(text): - encoded_comment = encode(text) - input_ids = encoded_comment['input_ids'] - attention_mask = encoded_comment['attention_mask'] - - output = model(input_ids, attention_mask) - _, prediction = torch.max(output, dim=1) - - return class_names[prediction] - -demo = gr.Interface(fn=classify, inputs="text", outputs="text", title="Sentiment Analyser", description="Text classifer for Luxembourgish") - - -demo.launch() \ No newline at end of file diff --git a/spaces/nev/dalle-6D/README.md b/spaces/nev/dalle-6D/README.md deleted file mode 100644 index 1ccaa0a4b68da4aa2430fd237fdc2f8bf33b7ff3..0000000000000000000000000000000000000000 --- a/spaces/nev/dalle-6D/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dalle 6D -emoji: ⚡ -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ngxson/poet-cat/README.md b/spaces/ngxson/poet-cat/README.md deleted file mode 100644 index 05b7c355361cbbb7b973d2042ebe56402b68a05c..0000000000000000000000000000000000000000 --- a/spaces/ngxson/poet-cat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Poet Cat -emoji: 🐱 -colorFrom: purple -colorTo: yellow -sdk: docker -app_port: 7860 -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/dev/run_inference_tests.sh b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/dev/run_inference_tests.sh deleted file mode 100644 index 46556b80a3ee793bdf6a79f5de2ec88cac902189..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/dev/run_inference_tests.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -BIN="python train_net.py" -OUTPUT="inference_test_output" -NUM_GPUS=2 -IMS_PER_GPU=2 -IMS_PER_BATCH=$(( NUM_GPUS * IMS_PER_GPU )) - -CFG_LIST=( "${@:1}" ) - -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN \ - --eval-only \ - --num-gpus $NUM_GPUS \ - --config-file "$cfg" \ - OUTPUT_DIR "$OUTPUT" \ - SOLVER.IMS_PER_BATCH $IMS_PER_BATCH - rm -rf $OUTPUT -done - diff --git a/spaces/noeljb/hashtag-recommendation-engine/app.py b/spaces/noeljb/hashtag-recommendation-engine/app.py deleted file mode 100644 index b89751df07dc929d6a418f6051e95d21b1c36982..0000000000000000000000000000000000000000 --- a/spaces/noeljb/hashtag-recommendation-engine/app.py +++ /dev/null @@ -1,254 +0,0 @@ -#Importing libraries -import pandas as pd -from google_drive_downloader import GoogleDriveDownloader as gdd -import pygsheets -import re -import requests -import spacy -from spacy.tokenizer import _get_regex_pattern -import contractions -from sklearn.metrics.pairwise import cosine_similarity -from sklearn.feature_extraction.text import TfidfVectorizer -from math import sqrt -from ast import literal_eval -import numpy as np -import gradio as gr -import os - -# Initiallization - -#Downloading necessary spacy models -try: - nlp = spacy.load('en_core_web_md') -except: - spacy.cli.download('en_core_web_md') - nlp = spacy.load('en_core_web_md') - -#initiating bearer token -bearer_token = os.environ['bearer_token'] -#Retrieving the tweet db for comparision - -#Initializing google drive parameters -gdrive_id = os.environ['gdrive_id'] - -gdd.download_file_from_google_drive(file_id=gdrive_id, - dest_path='./secret_key.json', - unzip=True) - -#authenticating with google sheets with pygsheets -client = pygsheets.authorize(service_account_file='secret_key.json') - -#open google sheet -gsheet_key = os.environ['gsheet_key'] -google_sheet = client.open_by_key(gsheet_key) - -#selecting specific sheets -Tweet_sheet_old = google_sheet.worksheet_by_title('Htag Recom tweets') -Tweet_Db_main = Tweet_sheet_old.get_as_df() - -#Defining functions - -# Function to fetch necessary user info -def create_url(user_names_list, user_fields): - user_names = ','.join(user_names_list) if len(user_names_list)>1 else user_names_list[0] - usernames = f"usernames={user_names}" - url = "https://api.twitter.com/2/users/by?{}&{}".format(usernames, user_fields) - return url - -def bearer_oauth(r): - """ - Method required by bearer token authentication. 
- """ - r.headers["Authorization"] = f"Bearer {bearer_token}" - r.headers["User-Agent"] = "v2UserLookupPython" - return r - -def connect_to_endpoint(url): - response = requests.request("GET", url, auth=bearer_oauth,) - if response.status_code != 200: - raise Exception( - "Request returned an error: {} {}".format( - response.status_code, response.text - ) - ) - return response.json() - -def get_display_name(list_of_user_names): - - user_fields = "user.fields=name,username" - url = create_url(list_of_user_names,user_fields) - json_response = connect_to_endpoint(url) - - for user in json_response['data']: #for valid users whose data is returned - try: - display_name = user['name'] - except: - display_name = re.findall("@([a-zA-Z0-9_]{1,50})",user['username'])[0] - - if 'errors' in list(json_response.keys()): - for user in json_response['errors']: #for invalid users - display_name = user["value"] - return display_name - -# Defining function to clean up hashtag and mentions in tweet body -def Remove_trailing_hashtags_and_replacing_usernames (tweet): - """Funtion to remove trailing hashtags or remove # symbols from body of tweet. This function also replaces @ mentions with the respective usernames""" - # get default pattern for tokens that don't get split - re_token_match = _get_regex_pattern(nlp.Defaults.token_match) - # add your patterns (here: hashtags and in-word hyphens) - re_token_match = f"({re_token_match}|#\w+|\w+-\w+)" - - # overwrite token_match function of the tokenizer - nlp.tokenizer.token_match = re.compile(re_token_match).match - doc = nlp(tweet) - tweet_cleaned = "" - for token in doc: - if bool(re.findall("@([a-zA-Z0-9_]{1,50})", token.text)): #check if it is a @ mention - try: - tweet_cleaned=tweet_cleaned+" "+get_display_name(re.findall("@([a-zA-Z0-9_]{1,50})", token.text)) #replacing @ with user name - except: - tweet_cleaned=tweet_cleaned+" "+token.text - else: - if token.text == str(doc[0]): #check if it is the first word - if bool(re.findall("#([a-zA-Z0-9_]{1,50})", token.text)): #check if it is a hashtag - - if len(re.findall('([A-Z][^A-Z]*)', token.text))>1 and not(token.text.isupper()): - updated_word="" - for sub_word in re.findall('([A-Z][^A-Z]*)', token.text): - if updated_word=="": - updated_word+=sub_word - else: - updated_word= updated_word+" "+sub_word - elif len(re.sub("_"," ",token.text).split())>1: - updated_word="" - for sub_word in re.sub("_"," ",token.text).split(): - if updated_word=="": - updated_word+=sub_word - else: - updated_word= updated_word+" "+sub_word - else: - updated_word = re.findall("#([a-zA-Z0-9_]{1,50})", token.text)[0] - - tweet_cleaned=tweet_cleaned+" "+updated_word - - else: - tweet_cleaned=tweet_cleaned+" "+token.text - else: - if bool(re.findall("#([a-zA-Z0-9_]{1,50})", token.text)): #check if it is a hashtag - if token.nbor(-1).pos_ in ['SCONJ' ,'PART', 'DET', 'CCONJ', 'CONJ' ,'AUX', 'ADP', 'ADJ', 'VERB' ,'INTJ' ,'PRON', 'ADV']: #check pos of previous word - - if len(re.findall('([A-Z][^A-Z]*)', token.text))>1 and not(token.text.isupper()): - updated_word="" - for sub_word in re.findall('([A-Z][^A-Z]*)', token.text): - if updated_word=="": - updated_word+=sub_word - else: - updated_word= updated_word+" "+sub_word - elif len(re.sub("_"," ",token.text).split())>1: - updated_word="" - for sub_word in re.sub("_"," ",token.text).split(): - if updated_word=="": - updated_word+=sub_word - else: - updated_word= updated_word+" "+sub_word - else: - updated_word = re.findall("#([a-zA-Z0-9_]{1,50})", token.text)[0] - - 
tweet_cleaned=tweet_cleaned+" "+updated_word - else: - pass #remove hashtag - else: - tweet_cleaned=tweet_cleaned+" "+token.text - return tweet_cleaned -def clean_tweet(new_tweet): - """Function to clean the tweet text entered""" - #cleaning the tweet - new_tweet_cleaned= ' '.join(re.sub("([^@_#'.!?0-9A-Za-z \t])|(\w+:\/\/\S+)"," ",new_tweet).split()) - #cleaning the text again. This time removing the trailing hashtags or removing # symbol from tweet body. We also replace @ mentions with the associated display names - new_tweet_cleaned2= Remove_trailing_hashtags_and_replacing_usernames(new_tweet_cleaned) - #cleaning the text again. This time fixing the contractions - new_tweet_cleaned3= contractions.fix(new_tweet_cleaned2) - return new_tweet_cleaned3 - -def hashtag_generator(Tweet,hashtag_count): - """Function that will generate hashtags for the entered text""" - - # Computing additional columns and similarity scores - - compare_DB = Tweet_Db_main.copy() #working on a copy - compare_DB = compare_DB[compare_DB['Hashtags'].notnull()] #removing any nulls - - #cleaning the entered tweet text - new_tweet_cleaned3 = clean_tweet(Tweet) - - #computing cosine similarity - TfIdf_cos_similarity = [] - - for tweet in compare_DB['Tweet Text cleaned']: - """Computing TF_IDF cosine similarity""" - similarity_list = [new_tweet_cleaned3]+[tweet] - vectorizer = TfidfVectorizer() - X = vectorizer.fit_transform(similarity_list) - arr = X.toarray() - - TfIdf_cos_similarity.append(cosine_similarity(arr)[0,1]) - #creating a column for cosine similarity - compare_DB['Cosine Similarity'] = TfIdf_cos_similarity - - # Creating a new row for each hashtag and removing duplicated rows - compare_DB['Hashtags'] = compare_DB['Hashtags'].apply(literal_eval) #convert to list type - compare_DB_expanded = compare_DB.explode('Hashtags').drop_duplicates(keep='first').reset_index(drop=True) - #Computing user influence - compare_DB_expanded['Avg Verified Status'] = compare_DB_expanded.groupby(['Hashtags'])['Verified Status Num'].transform('mean') - compare_DB_expanded['Avg Follower Count'] = compare_DB_expanded.groupby(['Hashtags'])['Followers'].transform('mean') - #setting parameters - alpha = 1 - beta = 0.25 - compare_DB_expanded['Influence Score'] = alpha * compare_DB_expanded['Avg Verified Status'] + beta * np.log(compare_DB_expanded['Avg Follower Count']+1) - - #computing hashtag frequency - compare_DB_expanded['Hashtag Freq'] = compare_DB_expanded.groupby(['Hashtags'])['Followers'].transform('count')/compare_DB_expanded.shape[0] - - # #Evaluating the cut off values of scores (done initially to find optimum cutt off points. 
Commenting out rather than deleting for future reference) - # compare_DB_expanded['Influence Score'].describe() - # compare_DB_expanded['Cosine Similarity'].describe() - # compare_DB_expanded['Hashtag Freq'].describe() - # compare_DB_expanded[compare_DB_expanded['Cosine Similarity'].apply(lambda x: True if (x >= 0.3) else False)]['Hashtags'].unique() - - - #computing recommendation scores (RS) - - compare_DB_expanded['RS Cosine'] = compare_DB_expanded['Cosine Similarity'].apply(lambda x: 1 if (x >= 0.3) else 0) - compare_DB_expanded['RS Influence'] = compare_DB_expanded['Influence Score'].apply(lambda x: 1 if (x >= 4.1) else 0) - compare_DB_expanded['RS Frequency'] = compare_DB_expanded['Hashtag Freq'].apply(lambda x: 1 if (x >= 0.001) else 0) - # generating hashtags to recommend - compare_DB_expanded['compound score'] = compare_DB_expanded['Cosine Similarity']*compare_DB_expanded['Influence Score'] - candidate_hashtags = compare_DB_expanded[(compare_DB_expanded['RS Cosine']+compare_DB_expanded['RS Influence']+compare_DB_expanded['RS Frequency'])>1].sort_values(by=['compound score'])['Hashtags'].str.lower().drop_duplicates(keep='first').reset_index(drop=True) - # Subsetting for top 10 or lesser hashtags among candidates - if len(candidate_hashtags)>hashtag_count: - recommended_hashtags = candidate_hashtags[0:hashtag_count] - else: - recommended_hashtags = candidate_hashtags - # Recommending relevant hashtags to users - - htag_list = "The hashtags recommended for entered text are:" - - if len(recommended_hashtags)==0: - print("Sorry no suggestions generated.") - htag_list = "" - else: - for htag in recommended_hashtags: - htag_list += " #"+htag - - return(htag_list) - -# Wrapping recommender function around gradio wrapper -htag_recommender = gr.Interface(fn = hashtag_generator, - inputs = [gr.inputs.Textbox(lines = 10, placeholder = "Enter the tweet here...."),gr.inputs.Slider(1,10,step=1,label="Maximum number of recommended hashtags")], - outputs = "text", - allow_flagging = "never", - title = "Hashtag recommendation engine" - ) - -#Initializing Gradio interface -htag_recommender.launch() \ No newline at end of file diff --git a/spaces/nola-ai/Recipe_Meal_Planner/README.md b/spaces/nola-ai/Recipe_Meal_Planner/README.md deleted file mode 100644 index 428b059eb609023a41ec079969930d75afcee015..0000000000000000000000000000000000000000 --- a/spaces/nola-ai/Recipe_Meal_Planner/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: recipAI - Meal Planner -emoji: 👨🏻‍🍳 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: true ---- - -We are using llama2 and custom prompt engineering to create a multi-course meal based on your desired ingredients and choice of cuisine. All of the recipes generated attempt to create a cohesive meal. - -Disclaimer: The recipes provided by this app are for entertainment purposes only and may not include edible ingredients or may give inaccurate cooking times which may result in food that is not safe for consumption. By proceeding, you agree to use at your own risk. 
\ No newline at end of file diff --git a/spaces/odettecantswim/vits-models-genshin/utils.py b/spaces/odettecantswim/vits-models-genshin/utils.py deleted file mode 100644 index a91f9eb2df9f2b097431432753212eb440f93020..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/vits-models-genshin/utils.py +++ /dev/null @@ -1,399 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch -import regex as re - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - - -zh_pattern = re.compile(r'[\u4e00-\u9fa5]') -en_pattern = re.compile(r'[a-zA-Z]') -jp_pattern = re.compile(r'[\u3040-\u30ff\u31f0-\u31ff]') -kr_pattern = re.compile(r'[\uac00-\ud7af\u1100-\u11ff\u3130-\u318f\ua960-\ua97f]') -num_pattern=re.compile(r'[0-9]') -comma=r"(?<=[.。!!??;;,,、::'\"‘“”’()()《》「」~——])" #向前匹配但固定长度 -tags={'ZH':'[ZH]','EN':'[EN]','JP':'[JA]','KR':'[KR]'} - -def tag_cjke(text): - '''为中英日韩加tag,中日正则分不开,故先分句分离中日再识别,以应对大部分情况''' - sentences = re.split(r"([.。!!??;;,,、::'\"‘“”’()()【】《》「」~——]+ *(?![0-9]))", text) #分句,排除小数点 - sentences.append("") - sentences = ["".join(i) for i in zip(sentences[0::2],sentences[1::2])] - # print(sentences) - prev_lang=None - tagged_text = "" - for s in sentences: - #全为符号跳过 - nu = re.sub(r'[\s\p{P}]+', '', s, flags=re.U).strip() - if len(nu)==0: - continue - s = re.sub(r'[()()《》「」【】‘“”’]+', '', s) - jp=re.findall(jp_pattern, s) - #本句含日语字符判断为日语 - if len(jp)>0: - prev_lang,tagged_jke=tag_jke(s,prev_lang) - tagged_text +=tagged_jke - else: - prev_lang,tagged_cke=tag_cke(s,prev_lang) - tagged_text +=tagged_cke - return tagged_text - -def tag_jke(text,prev_sentence=None): - '''为英日韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - tagged=0 - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if jp_pattern.match(char): - lang = "JP" - elif zh_pattern.match(char): - lang = "JP" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - lang = None - tagged_text += char - continue - # 如果当前语言与上一个语言不同,就添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - if not tagged: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - - return prev_lang,tagged_text - -def tag_cke(text,prev_sentence=None): - '''为中英韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - # 是否全略过未标签 - tagged=0 - - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if zh_pattern.match(char): - lang = "ZH" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - # 略过 - lang = None - tagged_text += char - continue - - # 如果当前语言与上一个语言不同,添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - # 未标签则继承上一句标签 - if tagged==0: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - return 
prev_lang,tagged_text - - -def load_checkpoint(checkpoint_path, model, optimizer=None, drop_speaker_emb=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - if k == 'emb_g.weight': - if drop_speaker_emb: - new_state_dict[k] = v - continue - v[:saved_state_dict[k].shape[0], :] = saved_state_dict[k] - new_state_dict[k] = v - else: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict() if optimizer is not None else None, - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - 
xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/modified_finetune_speaker.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="pretrained_models", - help='Model name') - parser.add_argument('-n', '--max_epochs', type=int, default=50, - help='finetune epochs') - parser.add_argument('--drop_speaker_embed', type=bool, default=False, help='whether to drop existing characters') - - args = parser.parse_args() - model_dir = os.path.join("./", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.max_epochs = args.max_epochs - hparams.drop_speaker_embed = args.drop_speaker_embed - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/oliver2023/chatgpt-on-wechat/bridge/context.py b/spaces/oliver2023/chatgpt-on-wechat/bridge/context.py deleted file mode 100644 index 50be1001b72d7811d5dab0beaff167d636f7e472..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/bridge/context.py +++ /dev/null @@ -1,57 +0,0 @@ -# encoding:utf-8 - -from enum import Enum - -class ContextType (Enum): - TEXT = 1 # 文本消息 - VOICE = 2 # 音频消息 - IMAGE_CREATE = 3 # 创建图片命令 - - def __str__(self): - return self.name -class Context: - def __init__(self, type : ContextType = None , content = None, kwargs = dict()): - self.type = type - self.content = content - self.kwargs = kwargs - - def __contains__(self, key): - if key == 'type': - return self.type is not None - elif key == 'content': - return self.content is not None - else: - return key in self.kwargs - - def __getitem__(self, key): - if key == 'type': - return self.type - elif key == 'content': - return self.content - else: - return self.kwargs[key] - - def get(self, key, default=None): - try: - return self[key] - except KeyError: - return default - - def __setitem__(self, key, value): - if key == 'type': - self.type = value - elif key == 'content': - self.content = value - else: - self.kwargs[key] = value - - def __delitem__(self, key): - if key == 'type': - self.type = None - elif key == 'content': - self.content = None - else: - del self.kwargs[key] - - def __str__(self): - return "Context(type={}, content={}, kwargs={})".format(self.type, self.content, self.kwargs) \ No newline at end of file diff --git a/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/app.py b/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/app.py deleted file mode 100644 index ba71456be42715c72fb2d9867adf572bd44a2463..0000000000000000000000000000000000000000 --- a/spaces/omdena-lc/omdena-ng-lagos-chatbot-interface/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import requests -import streamlit as st -import time - -st.title("Omdena Chatbot Interface") - -# Edit API url here -url = 'https://omdena-lc-omdena-ng-lagos-chatbot-model.hf.space' - -# Initialize chat history -if "messages" not in st.session_state: - st.session_state.messages = [] - -# Display chat messages from history on app rerun -for message in st.session_state.messages: - with st.chat_message(message["role"]): - 
st.markdown(message["content"]) - -# Accept user input -if user_input := st.chat_input("What is up?"): - # Add user message to chat history - st.session_state.messages.append({"role": "user", "content": user_input}) - # Display user message in chat message container - with st.chat_message("user"): - st.markdown(user_input) - - # Send user input to Rasa webhook - payload = {"sender": "user", "message": user_input} - response = requests.post(url+'/webhooks/rest/webhook', json=payload) - bot_reply = response.json() - - # Extract assistant response - if bot_reply !=[]: - assistant_response = "" - if len(bot_reply)>1: - for reply in bot_reply[1:]: - assistant_response+=(" "+reply['text']) - else: - assistant_response = bot_reply[0]['text'] - else: - assistant_response = 'API request returned with an empty list []. Please continue with a different question' - - # Display assistant response in chat message container - with st.chat_message("assistant"): - message_placeholder = st.empty() - full_response = "" - # Simulate stream of response with milliseconds delay - for chunk in assistant_response.split(): - full_response += chunk + " " - time.sleep(0.05) - # Add a blinking cursor to simulate typing - message_placeholder.markdown(full_response + "▌") - message_placeholder.markdown(full_response) - - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": full_response}) - - # Save to google sheet - # Deployed web app URL for writing google sheets - webhook_url = "https://script.google.com/macros/s/AKfycbzhikyq7IduuEPGmrvcmJV9YlziiVyBysQ_oYf7lOzF8w9zg--BI2S_5cLuftp0pKqy/exec" - action = "?action=addData" - # Data to send - data = { - "user": user_input, - "bot": assistant_response - } - try: - # Send POST request to the webhook URL - response = requests.post(webhook_url + action, json=data) - except: - pass - - -# Add debug button to display RASA version, Model Name -with st.expander("Debug"): - if st.button("Show Debug Info"): - request_ids = ['/status', '/version'] - results = [requests.get(url+request_id).json() for request_id in request_ids] - st.write(results) - else: - st.write("") - - diff --git a/spaces/onuri/asst/README.md b/spaces/onuri/asst/README.md deleted file mode 100644 index 0db429b6700ddaa6e2ea9793d5971e9a3e05177d..0000000000000000000000000000000000000000 --- a/spaces/onuri/asst/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Assisto -emoji: 📊 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/operance/revit-id-to-guid/~BROMIUM/app.py b/spaces/operance/revit-id-to-guid/~BROMIUM/app.py deleted file mode 100644 index 105f0d2ed4070a2fa18aae87096f751091aa04ad..0000000000000000000000000000000000000000 Binary files a/spaces/operance/revit-id-to-guid/~BROMIUM/app.py and /dev/null differ diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm.md deleted file mode 100644 index 47dcc7212f3c4667cd8d91295c6b15e893fc64ed..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm.md +++ /dev/null @@ -1,50 +0,0 @@ - - -# AudioLDM - -AudioLDM was proposed in [AudioLDM: Text-to-Audio Generation with Latent Diffusion 
Models](https://huggingface.co/papers/2301.12503) by Haohe Liu et al. Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM -is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap) -latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional -sound effects, human speech and music. - -The abstract from the paper is: - -*Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at https://audioldm.github.io.* - -The original codebase can be found at [haoheliu/AudioLDM](https://github.com/haoheliu/AudioLDM). - -## Tips - -When constructing a prompt, keep in mind: - -* Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific (for example, "water stream in a forest" instead of "stream"). -* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with. - -During inference: - -* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference. -* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument. - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. 
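
The snippet below is a minimal sketch of how the knobs from the tips above are typically combined with `AudioLDMPipeline`; the checkpoint id `cvssp/audioldm-s-full-v2`, the prompt text, and the specific argument values are illustrative assumptions rather than something prescribed by this page:

```python
# Minimal text-to-audio sketch (assumed checkpoint id and argument values).
import scipy.io.wavfile
import torch
from diffusers import AudioLDMPipeline

# Load the pipeline; "cvssp/audioldm-s-full-v2" is an assumed Hub checkpoint.
pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Descriptive, context-specific prompt, as recommended in the tips above.
prompt = "high quality recording of a water stream flowing through a forest"

# More inference steps trade speed for quality; audio_length_in_s sets the clip duration.
audio = pipe(prompt, num_inference_steps=25, audio_length_in_s=5.0).audios[0]

# AudioLDM generates 16 kHz mono audio as a 1-D NumPy array.
scipy.io.wavfile.write("stream.wav", rate=16000, data=audio)
```

Lowering `num_inference_steps` (for example to 10) gives faster but noisier samples, in line with the quality/speed trade-off described above.
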
- - - -## AudioLDMPipeline -[[autodoc]] AudioLDMPipeline - - all - - __call__ - -## AudioPipelineOutput -[[autodoc]] pipelines.AudioPipelineOutput \ No newline at end of file diff --git a/spaces/permutans/LayoutLMv3-FUNSD/app.py b/spaces/permutans/LayoutLMv3-FUNSD/app.py deleted file mode 100644 index 2ba4f511d45c8a2fc2185cffc541573f0ad21b37..0000000000000000000000000000000000000000 --- a/spaces/permutans/LayoutLMv3-FUNSD/app.py +++ /dev/null @@ -1,126 +0,0 @@ -import os - -os.system("pip install pyyaml==5.1") -# workaround: install old version of pytorch since detectron2 hasn't released packages for pytorch 1.9 (issue: https://github.com/facebookresearch/detectron2/issues/3158) -os.system( - "pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html" -) - -# install detectron2 that matches pytorch 1.8 -# See https://detectron2.readthedocs.io/tutorials/install.html for instructions -os.system( - "pip install -q detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html" -) - -## install PyTesseract -os.system("pip install -q pytesseract") - -import gradio as gr -import numpy as np -from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - -processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base") -model = LayoutLMv3ForTokenClassification.from_pretrained( - "nielsr/layoutlmv3-finetuned-funsd" -) - -# load image example -dataset = load_dataset("nielsr/funsd", split="test") -image = Image.open(dataset[0]["image_path"]).convert("RGB") -image = Image.open("./invoice.png") -image.save("document.png") - -labels = dataset.features["ner_tags"].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "question": "blue", - "answer": "green", - "header": "orange", - "other": "violet", -} - - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - label = label[2:] - if not label: - return "other" - return label - - -def process_image(image): - width, height = image.size - - # encode - encoding = processor( - image, truncation=True, return_offsets_mapping=True, return_tensors="pt" - ) - offset_mapping = encoding.pop("offset_mapping") - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:, 0] != 0 - true_predictions = [ - id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx] - ] - true_boxes = [ - unnormalize_box(box, width, height) - for idx, box in enumerate(token_boxes) - if not is_subword[idx] - ] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction).lower() - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text( - (box[0] + 10, box[1] - 10), - text=predicted_label, - fill=label2color[predicted_label], - font=font, - ) - - return image - - -title = "Interactive demo: LayoutLMv3" -description = "Demo for Microsoft's LayoutLMv3, a Transformer for state-of-the-art document image understanding tasks. 
This particular model is fine-tuned on FUNSD, a dataset of manually annotated forms. It annotates the words appearing in the image as QUESTION/ANSWER/HEADER/OTHER. To use it, simply upload an image or use the example image below and click 'Submit'. Results will show up in a few seconds. If you want to make the output bigger, right-click on it and select 'Open image in new tab'." -article = "

        LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | Github Repo

        " -examples = [["document.png"]] - -css = ".output-image, .input-image {height: 40rem !important; width: 100% !important;}" -# css = "@media screen and (max-width: 600px) { .output_image, .input_image {height:20rem !important; width: 100% !important;} }" -# css = ".output_image, .input_image {height: 600px !important}" - -css = ".image-preview {height: auto !important;}" - -iface = gr.Interface( - fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True, -) -iface.launch(debug=True) diff --git a/spaces/peteralexandercharles/wav2vec2-uk-demo/inference_timestamps.py b/spaces/peteralexandercharles/wav2vec2-uk-demo/inference_timestamps.py deleted file mode 100644 index 658fe4931406de539d41099e11d614b80eaf5d1b..0000000000000000000000000000000000000000 --- a/spaces/peteralexandercharles/wav2vec2-uk-demo/inference_timestamps.py +++ /dev/null @@ -1,86 +0,0 @@ -import argparse -from time import gmtime, strftime - -import torch -import torchaudio -from pathlib import Path -from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer - - -def main(args): - tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(args.model_id) - processor = Wav2Vec2ProcessorWithLM.from_pretrained(args.model_id) - model = Wav2Vec2ForCTC.from_pretrained(args.model_id) - model.to('cpu') - - files = args.path_files.split(',') - - for path_file in files: - print('File:', path_file) - - wav_file_path = str(Path(path_file).absolute()) - waveform, sample_rate = torchaudio.load(wav_file_path) - - if sample_rate != 16000: - resample = torchaudio.transforms.Resample( - sample_rate, 16000, resampling_method='sinc_interpolation') - sample_rate = 16000 - speech_array = resample(waveform) - sp = speech_array.squeeze().numpy() - else: - sp = waveform.squeeze().numpy() - - # stride_length_s is a tuple of the left and right stride length. - # With only 1 number, both sides get the same stride, by default - # the stride_length on one side is 1/6th of the chunk_length_s - input_values = processor(sp, - sample_rate=16000, - chunk_length_s=args.chunk_length_s, - stride_length_s=(args.stride_length_s_l, args.stride_length_s_r), - return_tensors="pt").input_values - - with torch.no_grad(): - logits = model(input_values).logits - - # prediction = tokenizer.decode(pred_ids[0], output_word_offsets=True) - # prediction = tokenizer.decode(pred_ids[0], output_char_offsets=True) - - pred_ids = torch.argmax(logits, axis=-1).cpu().tolist() - prediction = tokenizer.decode(pred_ids[0], output_word_offsets=True) - - print(f'Sample rate: {sample_rate}') - time_offset = 320 / sample_rate - - for item in prediction.word_offsets: - r = item - - s = round(r['start_offset'] * time_offset, 2) - e = round(r['end_offset'] * time_offset, 2) - - print(f"{s} - {e}: {r['word']}") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--path_files", type=str, required=True, help="WAV files to transcribe, separated by a comma" - ) - parser.add_argument( - "--model_id", type=str, required=True, help="Model identifier. Should be loadable with 🤗 Transformers" - ) - parser.add_argument( - "--chunk_length_s", type=float, default=None, help="Chunk length in seconds. Defaults to 5 seconds." - ) - parser.add_argument( - "--stride_length_s_l", type=int, default=None, help="Stride of the audio chunks, left value." 
- ) - parser.add_argument( - "--stride_length_s_r", type=int, default=None, help="Stride of the audio chunks, right value." - ) - parser.add_argument( - "--log_outputs", action="store_true", help="If defined, write outputs to log file for analysis." - ) - args = parser.parse_args() - - main(args) diff --git a/spaces/pietrolesci/wordify/src/utils.py b/spaces/pietrolesci/wordify/src/utils.py deleted file mode 100644 index 87be4e1065cf6cb70bfda389f60b41e77cd8ce03..0000000000000000000000000000000000000000 --- a/spaces/pietrolesci/wordify/src/utils.py +++ /dev/null @@ -1,52 +0,0 @@ -import base64 -from typing import List, Tuple - -import streamlit as st -from pandas.core.frame import DataFrame -from PIL import Image - -from .configs import ColumnNames, SupportedFiles - - -def get_col_indices(cols: List) -> Tuple[int, int]: - """Ugly but works""" - cols = [i.lower() for i in cols] - try: - label_index = cols.index(ColumnNames.LABEL.value) - except: - label_index = 0 - - try: - text_index = cols.index(ColumnNames.TEXT.value) - except: - text_index = 0 - - return text_index, label_index - - -@st.cache -def get_logo(path: str) -> Image: - return Image.open(path) - - -@st.experimental_memo -def read_file(uploaded_file) -> DataFrame: - file_type = uploaded_file.name.split(".")[-1] - read_fn = SupportedFiles[file_type].value[0] - df = read_fn(uploaded_file) - df = df.dropna() - return df - - -@st.cache -def convert_df(df: DataFrame) -> bytes: - # IMPORTANT: Cache the conversion to prevent computation on every rerun - return df.to_csv(index=False, sep=";").encode("utf-8") - - -def download_button(dataframe: DataFrame, name: str) -> None: - csv = dataframe.to_csv(index=False) - # some strings <-> bytes conversions necessary here - b64 = base64.b64encode(csv.encode()).decode() - href = f'Download' - st.write(href, unsafe_allow_html=True) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/sessions.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/sessions.py deleted file mode 100644 index dbcf2a7b0ee2898b72714b756e4b27fbbad4beab..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/sessions.py +++ /dev/null @@ -1,833 +0,0 @@ -""" -requests.sessions -~~~~~~~~~~~~~~~~~ - -This module provides a Session object to manage and persist settings across -requests (cookies, auth, proxies). 
-""" -import os -import sys -import time -from collections import OrderedDict -from datetime import timedelta - -from ._internal_utils import to_native_string -from .adapters import HTTPAdapter -from .auth import _basic_auth_str -from .compat import Mapping, cookielib, urljoin, urlparse -from .cookies import ( - RequestsCookieJar, - cookiejar_from_dict, - extract_cookies_to_jar, - merge_cookies, -) -from .exceptions import ( - ChunkedEncodingError, - ContentDecodingError, - InvalidSchema, - TooManyRedirects, -) -from .hooks import default_hooks, dispatch_hook - -# formerly defined here, reexposed here for backward compatibility -from .models import ( # noqa: F401 - DEFAULT_REDIRECT_LIMIT, - REDIRECT_STATI, - PreparedRequest, - Request, -) -from .status_codes import codes -from .structures import CaseInsensitiveDict -from .utils import ( # noqa: F401 - DEFAULT_PORTS, - default_headers, - get_auth_from_url, - get_environ_proxies, - get_netrc_auth, - requote_uri, - resolve_proxies, - rewind_body, - should_bypass_proxies, - to_key_val_list, -) - -# Preferred clock, based on which one is more accurate on a given system. -if sys.platform == "win32": - preferred_clock = time.perf_counter -else: - preferred_clock = time.time - - -def merge_setting(request_setting, session_setting, dict_class=OrderedDict): - """Determines appropriate setting for a given request, taking into account - the explicit setting on that request, and the setting in the session. If a - setting is a dictionary, they will be merged together using `dict_class` - """ - - if session_setting is None: - return request_setting - - if request_setting is None: - return session_setting - - # Bypass if not a dictionary (e.g. verify) - if not ( - isinstance(session_setting, Mapping) and isinstance(request_setting, Mapping) - ): - return request_setting - - merged_setting = dict_class(to_key_val_list(session_setting)) - merged_setting.update(to_key_val_list(request_setting)) - - # Remove keys that are set to None. Extract keys first to avoid altering - # the dictionary during iteration. - none_keys = [k for (k, v) in merged_setting.items() if v is None] - for key in none_keys: - del merged_setting[key] - - return merged_setting - - -def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict): - """Properly merges both requests and session hooks. - - This is necessary because when request_hooks == {'response': []}, the - merge breaks Session hooks entirely. - """ - if session_hooks is None or session_hooks.get("response") == []: - return request_hooks - - if request_hooks is None or request_hooks.get("response") == []: - return session_hooks - - return merge_setting(request_hooks, session_hooks, dict_class) - - -class SessionRedirectMixin: - def get_redirect_target(self, resp): - """Receives a Response. Returns a redirect URI or ``None``""" - # Due to the nature of how requests processes redirects this method will - # be called at least once upon the original response and at least twice - # on each subsequent redirect response (if any). - # If a custom mixin is used to handle this logic, it may be advantageous - # to cache the redirect location onto the response object as a private - # attribute. - if resp.is_redirect: - location = resp.headers["location"] - # Currently the underlying http module on py3 decode headers - # in latin1, but empirical evidence suggests that latin1 is very - # rarely used with non-ASCII characters in HTTP headers. - # It is more likely to get UTF8 header rather than latin1. 
- # This causes incorrect handling of UTF8 encoded location headers. - # To solve this, we re-encode the location in latin1. - location = location.encode("latin1") - return to_native_string(location, "utf8") - return None - - def should_strip_auth(self, old_url, new_url): - """Decide whether Authorization header should be removed when redirecting""" - old_parsed = urlparse(old_url) - new_parsed = urlparse(new_url) - if old_parsed.hostname != new_parsed.hostname: - return True - # Special case: allow http -> https redirect when using the standard - # ports. This isn't specified by RFC 7235, but is kept to avoid - # breaking backwards compatibility with older versions of requests - # that allowed any redirects on the same host. - if ( - old_parsed.scheme == "http" - and old_parsed.port in (80, None) - and new_parsed.scheme == "https" - and new_parsed.port in (443, None) - ): - return False - - # Handle default port usage corresponding to scheme. - changed_port = old_parsed.port != new_parsed.port - changed_scheme = old_parsed.scheme != new_parsed.scheme - default_port = (DEFAULT_PORTS.get(old_parsed.scheme, None), None) - if ( - not changed_scheme - and old_parsed.port in default_port - and new_parsed.port in default_port - ): - return False - - # Standard case: root URI must match - return changed_port or changed_scheme - - def resolve_redirects( - self, - resp, - req, - stream=False, - timeout=None, - verify=True, - cert=None, - proxies=None, - yield_requests=False, - **adapter_kwargs, - ): - """Receives a Response. Returns a generator of Responses or Requests.""" - - hist = [] # keep track of history - - url = self.get_redirect_target(resp) - previous_fragment = urlparse(req.url).fragment - while url: - prepared_request = req.copy() - - # Update history and keep track of redirects. - # resp.history must ignore the original request in this loop - hist.append(resp) - resp.history = hist[1:] - - try: - resp.content # Consume socket so it can be released - except (ChunkedEncodingError, ContentDecodingError, RuntimeError): - resp.raw.read(decode_content=False) - - if len(resp.history) >= self.max_redirects: - raise TooManyRedirects( - f"Exceeded {self.max_redirects} redirects.", response=resp - ) - - # Release the connection back into the pool. - resp.close() - - # Handle redirection without scheme (see: RFC 1808 Section 4) - if url.startswith("//"): - parsed_rurl = urlparse(resp.url) - url = ":".join([to_native_string(parsed_rurl.scheme), url]) - - # Normalize url case and attach previous fragment if needed (RFC 7231 7.1.2) - parsed = urlparse(url) - if parsed.fragment == "" and previous_fragment: - parsed = parsed._replace(fragment=previous_fragment) - elif parsed.fragment: - previous_fragment = parsed.fragment - url = parsed.geturl() - - # Facilitate relative 'location' headers, as allowed by RFC 7231. - # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource') - # Compliant with RFC3986, we percent encode the url. 
- if not parsed.netloc: - url = urljoin(resp.url, requote_uri(url)) - else: - url = requote_uri(url) - - prepared_request.url = to_native_string(url) - - self.rebuild_method(prepared_request, resp) - - # https://github.com/psf/requests/issues/1084 - if resp.status_code not in ( - codes.temporary_redirect, - codes.permanent_redirect, - ): - # https://github.com/psf/requests/issues/3490 - purged_headers = ("Content-Length", "Content-Type", "Transfer-Encoding") - for header in purged_headers: - prepared_request.headers.pop(header, None) - prepared_request.body = None - - headers = prepared_request.headers - headers.pop("Cookie", None) - - # Extract any cookies sent on the response to the cookiejar - # in the new request. Because we've mutated our copied prepared - # request, use the old one that we haven't yet touched. - extract_cookies_to_jar(prepared_request._cookies, req, resp.raw) - merge_cookies(prepared_request._cookies, self.cookies) - prepared_request.prepare_cookies(prepared_request._cookies) - - # Rebuild auth and proxy information. - proxies = self.rebuild_proxies(prepared_request, proxies) - self.rebuild_auth(prepared_request, resp) - - # A failed tell() sets `_body_position` to `object()`. This non-None - # value ensures `rewindable` will be True, allowing us to raise an - # UnrewindableBodyError, instead of hanging the connection. - rewindable = prepared_request._body_position is not None and ( - "Content-Length" in headers or "Transfer-Encoding" in headers - ) - - # Attempt to rewind consumed file-like object. - if rewindable: - rewind_body(prepared_request) - - # Override the original request. - req = prepared_request - - if yield_requests: - yield req - else: - - resp = self.send( - req, - stream=stream, - timeout=timeout, - verify=verify, - cert=cert, - proxies=proxies, - allow_redirects=False, - **adapter_kwargs, - ) - - extract_cookies_to_jar(self.cookies, prepared_request, resp.raw) - - # extract redirect url, if any, for the next loop - url = self.get_redirect_target(resp) - yield resp - - def rebuild_auth(self, prepared_request, response): - """When being redirected we may want to strip authentication from the - request to avoid leaking credentials. This method intelligently removes - and reapplies authentication where possible to avoid credential loss. - """ - headers = prepared_request.headers - url = prepared_request.url - - if "Authorization" in headers and self.should_strip_auth( - response.request.url, url - ): - # If we get redirected to a new host, we should strip out any - # authentication headers. - del headers["Authorization"] - - # .netrc might have more auth for us on our new host. - new_auth = get_netrc_auth(url) if self.trust_env else None - if new_auth is not None: - prepared_request.prepare_auth(new_auth) - - def rebuild_proxies(self, prepared_request, proxies): - """This method re-evaluates the proxy configuration by considering the - environment variables. If we are redirected to a URL covered by - NO_PROXY, we strip the proxy configuration. Otherwise, we set missing - proxy keys for this URL (in case they were stripped by a previous - redirect). - - This method also replaces the Proxy-Authorization header where - necessary. 
- - :rtype: dict - """ - headers = prepared_request.headers - scheme = urlparse(prepared_request.url).scheme - new_proxies = resolve_proxies(prepared_request, proxies, self.trust_env) - - if "Proxy-Authorization" in headers: - del headers["Proxy-Authorization"] - - try: - username, password = get_auth_from_url(new_proxies[scheme]) - except KeyError: - username, password = None, None - - # urllib3 handles proxy authorization for us in the standard adapter. - # Avoid appending this to TLS tunneled requests where it may be leaked. - if not scheme.startswith('https') and username and password: - headers["Proxy-Authorization"] = _basic_auth_str(username, password) - - return new_proxies - - def rebuild_method(self, prepared_request, response): - """When being redirected we may want to change the method of the request - based on certain specs or browser behavior. - """ - method = prepared_request.method - - # https://tools.ietf.org/html/rfc7231#section-6.4.4 - if response.status_code == codes.see_other and method != "HEAD": - method = "GET" - - # Do what the browsers do, despite standards... - # First, turn 302s into GETs. - if response.status_code == codes.found and method != "HEAD": - method = "GET" - - # Second, if a POST is responded to with a 301, turn it into a GET. - # This bizarre behaviour is explained in Issue 1704. - if response.status_code == codes.moved and method == "POST": - method = "GET" - - prepared_request.method = method - - -class Session(SessionRedirectMixin): - """A Requests session. - - Provides cookie persistence, connection-pooling, and configuration. - - Basic Usage:: - - >>> import requests - >>> s = requests.Session() - >>> s.get('https://httpbin.org/get') - - - Or as a context manager:: - - >>> with requests.Session() as s: - ... s.get('https://httpbin.org/get') - - """ - - __attrs__ = [ - "headers", - "cookies", - "auth", - "proxies", - "hooks", - "params", - "verify", - "cert", - "adapters", - "stream", - "trust_env", - "max_redirects", - ] - - def __init__(self): - - #: A case-insensitive dictionary of headers to be sent on each - #: :class:`Request ` sent from this - #: :class:`Session `. - self.headers = default_headers() - - #: Default Authentication tuple or object to attach to - #: :class:`Request `. - self.auth = None - - #: Dictionary mapping protocol or protocol and host to the URL of the proxy - #: (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to - #: be used on each :class:`Request `. - self.proxies = {} - - #: Event-handling hooks. - self.hooks = default_hooks() - - #: Dictionary of querystring data to attach to each - #: :class:`Request `. The dictionary values may be lists for - #: representing multivalued query parameters. - self.params = {} - - #: Stream response content default. - self.stream = False - - #: SSL Verification default. - #: Defaults to `True`, requiring requests to verify the TLS certificate at the - #: remote end. - #: If verify is set to `False`, requests will accept any TLS certificate - #: presented by the server, and will ignore hostname mismatches and/or - #: expired certificates, which will make your application vulnerable to - #: man-in-the-middle (MitM) attacks. - #: Only set this to `False` for testing. - self.verify = True - - #: SSL client certificate default, if String, path to ssl client - #: cert file (.pem). If Tuple, ('cert', 'key') pair. - self.cert = None - - #: Maximum number of redirects allowed. If the request exceeds this - #: limit, a :class:`TooManyRedirects` exception is raised. 
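# --- Editor's illustration (not part of the original requests source). ---
# The attribute documented above caps how many hops resolve_redirects() will
# follow before raising TooManyRedirects. A hedged usage sketch (the URL is a
# placeholder; a real redirect loop at that address would trigger the exception):
import requests

session = requests.Session()
session.max_redirects = 5          # lower the default limit of 30 hops
try:
    session.get("https://example.com/some/redirect-loop")
except requests.TooManyRedirects as exc:
    print("gave up after", session.max_redirects, "redirects:", exc)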
- #: This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is - #: 30. - self.max_redirects = DEFAULT_REDIRECT_LIMIT - - #: Trust environment settings for proxy configuration, default - #: authentication and similar. - self.trust_env = True - - #: A CookieJar containing all currently outstanding cookies set on this - #: session. By default it is a - #: :class:`RequestsCookieJar `, but - #: may be any other ``cookielib.CookieJar`` compatible object. - self.cookies = cookiejar_from_dict({}) - - # Default connection adapters. - self.adapters = OrderedDict() - self.mount("https://", HTTPAdapter()) - self.mount("http://", HTTPAdapter()) - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def prepare_request(self, request): - """Constructs a :class:`PreparedRequest ` for - transmission and returns it. The :class:`PreparedRequest` has settings - merged from the :class:`Request ` instance and those of the - :class:`Session`. - - :param request: :class:`Request` instance to prepare with this - session's settings. - :rtype: requests.PreparedRequest - """ - cookies = request.cookies or {} - - # Bootstrap CookieJar. - if not isinstance(cookies, cookielib.CookieJar): - cookies = cookiejar_from_dict(cookies) - - # Merge with session cookies - merged_cookies = merge_cookies( - merge_cookies(RequestsCookieJar(), self.cookies), cookies - ) - - # Set environment's basic authentication if not explicitly set. - auth = request.auth - if self.trust_env and not auth and not self.auth: - auth = get_netrc_auth(request.url) - - p = PreparedRequest() - p.prepare( - method=request.method.upper(), - url=request.url, - files=request.files, - data=request.data, - json=request.json, - headers=merge_setting( - request.headers, self.headers, dict_class=CaseInsensitiveDict - ), - params=merge_setting(request.params, self.params), - auth=merge_setting(auth, self.auth), - cookies=merged_cookies, - hooks=merge_hooks(request.hooks, self.hooks), - ) - return p - - def request( - self, - method, - url, - params=None, - data=None, - headers=None, - cookies=None, - files=None, - auth=None, - timeout=None, - allow_redirects=True, - proxies=None, - hooks=None, - stream=None, - verify=None, - cert=None, - json=None, - ): - """Constructs a :class:`Request `, prepares it and sends it. - Returns :class:`Response ` object. - - :param method: method for the new :class:`Request` object. - :param url: URL for the new :class:`Request` object. - :param params: (optional) Dictionary or bytes to be sent in the query - string for the :class:`Request`. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json to send in the body of the - :class:`Request`. - :param headers: (optional) Dictionary of HTTP Headers to send with the - :class:`Request`. - :param cookies: (optional) Dict or CookieJar object to send with the - :class:`Request`. - :param files: (optional) Dictionary of ``'filename': file-like-objects`` - for multipart encoding upload. - :param auth: (optional) Auth tuple or callable to enable - Basic/Digest/Custom HTTP Auth. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple - :param allow_redirects: (optional) Set to True by default. 
- :type allow_redirects: bool - :param proxies: (optional) Dictionary mapping protocol or protocol and - hostname to the URL of the proxy. - :param stream: (optional) whether to immediately download the response - content. Defaults to ``False``. - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use. Defaults to ``True``. When set to - ``False``, requests will accept any TLS certificate presented by - the server, and will ignore hostname mismatches and/or expired - certificates, which will make your application vulnerable to - man-in-the-middle (MitM) attacks. Setting verify to ``False`` - may be useful during local development or testing. - :param cert: (optional) if String, path to ssl client cert file (.pem). - If Tuple, ('cert', 'key') pair. - :rtype: requests.Response - """ - # Create the Request. - req = Request( - method=method.upper(), - url=url, - headers=headers, - files=files, - data=data or {}, - json=json, - params=params or {}, - auth=auth, - cookies=cookies, - hooks=hooks, - ) - prep = self.prepare_request(req) - - proxies = proxies or {} - - settings = self.merge_environment_settings( - prep.url, proxies, stream, verify, cert - ) - - # Send the request. - send_kwargs = { - "timeout": timeout, - "allow_redirects": allow_redirects, - } - send_kwargs.update(settings) - resp = self.send(prep, **send_kwargs) - - return resp - - def get(self, url, **kwargs): - r"""Sends a GET request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - kwargs.setdefault("allow_redirects", True) - return self.request("GET", url, **kwargs) - - def options(self, url, **kwargs): - r"""Sends a OPTIONS request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - kwargs.setdefault("allow_redirects", True) - return self.request("OPTIONS", url, **kwargs) - - def head(self, url, **kwargs): - r"""Sends a HEAD request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - kwargs.setdefault("allow_redirects", False) - return self.request("HEAD", url, **kwargs) - - def post(self, url, data=None, json=None, **kwargs): - r"""Sends a POST request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - return self.request("POST", url, data=data, json=json, **kwargs) - - def put(self, url, data=None, **kwargs): - r"""Sends a PUT request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. 
- :rtype: requests.Response - """ - - return self.request("PUT", url, data=data, **kwargs) - - def patch(self, url, data=None, **kwargs): - r"""Sends a PATCH request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - return self.request("PATCH", url, data=data, **kwargs) - - def delete(self, url, **kwargs): - r"""Sends a DELETE request. Returns :class:`Response` object. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :rtype: requests.Response - """ - - return self.request("DELETE", url, **kwargs) - - def send(self, request, **kwargs): - """Send a given PreparedRequest. - - :rtype: requests.Response - """ - # Set defaults that the hooks can utilize to ensure they always have - # the correct parameters to reproduce the previous request. - kwargs.setdefault("stream", self.stream) - kwargs.setdefault("verify", self.verify) - kwargs.setdefault("cert", self.cert) - if "proxies" not in kwargs: - kwargs["proxies"] = resolve_proxies(request, self.proxies, self.trust_env) - - # It's possible that users might accidentally send a Request object. - # Guard against that specific failure case. - if isinstance(request, Request): - raise ValueError("You can only send PreparedRequests.") - - # Set up variables needed for resolve_redirects and dispatching of hooks - allow_redirects = kwargs.pop("allow_redirects", True) - stream = kwargs.get("stream") - hooks = request.hooks - - # Get the appropriate adapter to use - adapter = self.get_adapter(url=request.url) - - # Start time (approximately) of the request - start = preferred_clock() - - # Send the request - r = adapter.send(request, **kwargs) - - # Total elapsed time of the request (approximately) - elapsed = preferred_clock() - start - r.elapsed = timedelta(seconds=elapsed) - - # Response manipulation hooks - r = dispatch_hook("response", hooks, r, **kwargs) - - # Persist cookies - if r.history: - - # If the hooks create history then we want those cookies too - for resp in r.history: - extract_cookies_to_jar(self.cookies, resp.request, resp.raw) - - extract_cookies_to_jar(self.cookies, request, r.raw) - - # Resolve redirects if allowed. - if allow_redirects: - # Redirect resolving generator. - gen = self.resolve_redirects(r, request, **kwargs) - history = [resp for resp in gen] - else: - history = [] - - # Shuffle things around if there's history. - if history: - # Insert the first (original) request at the start - history.insert(0, r) - # Get the last request made - r = history.pop() - r.history = history - - # If redirects aren't being followed, store the response on the Request for Response.next(). - if not allow_redirects: - try: - r._next = next( - self.resolve_redirects(r, request, yield_requests=True, **kwargs) - ) - except StopIteration: - pass - - if not stream: - r.content - - return r - - def merge_environment_settings(self, url, proxies, stream, verify, cert): - """ - Check the environment and merge it with some settings. - - :rtype: dict - """ - # Gather clues from the surrounding environment. - if self.trust_env: - # Set environment's proxies. 
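# --- Editor's illustration (not part of the original requests source). ---
# When trust_env is true, the code below pulls proxy settings from the process
# environment (via get_environ_proxies) and lets REQUESTS_CA_BUNDLE /
# CURL_CA_BUNDLE stand in for verify=True. A hedged sketch of the effect;
# the proxy host, internal service URL and CA-bundle path are placeholders:
import os
import requests

os.environ["HTTPS_PROXY"] = "http://proxy.internal:3128"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corporate-ca.pem"

session = requests.Session()                     # trust_env defaults to True
settings = session.merge_environment_settings(
    "https://service.internal/api", proxies={}, stream=None, verify=True, cert=None
)
print(settings["proxies"])   # expected to include the https proxy from the environment
print(settings["verify"])    # expected to be the REQUESTS_CA_BUNDLE path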
- no_proxy = proxies.get("no_proxy") if proxies is not None else None - env_proxies = get_environ_proxies(url, no_proxy=no_proxy) - for (k, v) in env_proxies.items(): - proxies.setdefault(k, v) - - # Look for requests environment configuration - # and be compatible with cURL. - if verify is True or verify is None: - verify = ( - os.environ.get("REQUESTS_CA_BUNDLE") - or os.environ.get("CURL_CA_BUNDLE") - or verify - ) - - # Merge all the kwargs. - proxies = merge_setting(proxies, self.proxies) - stream = merge_setting(stream, self.stream) - verify = merge_setting(verify, self.verify) - cert = merge_setting(cert, self.cert) - - return {"proxies": proxies, "stream": stream, "verify": verify, "cert": cert} - - def get_adapter(self, url): - """ - Returns the appropriate connection adapter for the given URL. - - :rtype: requests.adapters.BaseAdapter - """ - for (prefix, adapter) in self.adapters.items(): - - if url.lower().startswith(prefix.lower()): - return adapter - - # Nothing matches :-/ - raise InvalidSchema(f"No connection adapters were found for {url!r}") - - def close(self): - """Closes all adapters and as such the session""" - for v in self.adapters.values(): - v.close() - - def mount(self, prefix, adapter): - """Registers a connection adapter to a prefix. - - Adapters are sorted in descending order by prefix length. - """ - self.adapters[prefix] = adapter - keys_to_move = [k for k in self.adapters if len(k) < len(prefix)] - - for key in keys_to_move: - self.adapters[key] = self.adapters.pop(key) - - def __getstate__(self): - state = {attr: getattr(self, attr, None) for attr in self.__attrs__} - return state - - def __setstate__(self, state): - for attr, value in state.items(): - setattr(self, attr, value) - - -def session(): - """ - Returns a :class:`Session` for context-management. - - .. deprecated:: 1.0.0 - - This method has been deprecated since version 1.0.0 and is only kept for - backwards compatibility. New code should use :class:`~requests.sessions.Session` - to create a session. This may be removed at a future date. 
- - :rtype: Session - """ - return Session() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py deleted file mode 100644 index 570337664835d01904c8ff708626b447edc5640a..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/syntax.py +++ /dev/null @@ -1,948 +0,0 @@ -import os.path -import platform -import re -import sys -import textwrap -from abc import ABC, abstractmethod -from pathlib import Path -from typing import ( - Any, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Sequence, - Set, - Tuple, - Type, - Union, -) - -from pip._vendor.pygments.lexer import Lexer -from pip._vendor.pygments.lexers import get_lexer_by_name, guess_lexer_for_filename -from pip._vendor.pygments.style import Style as PygmentsStyle -from pip._vendor.pygments.styles import get_style_by_name -from pip._vendor.pygments.token import ( - Comment, - Error, - Generic, - Keyword, - Name, - Number, - Operator, - String, - Token, - Whitespace, -) -from pip._vendor.pygments.util import ClassNotFound - -from pip._vendor.rich.containers import Lines -from pip._vendor.rich.padding import Padding, PaddingDimensions - -from ._loop import loop_first -from .cells import cell_len -from .color import Color, blend_rgb -from .console import Console, ConsoleOptions, JustifyMethod, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment, Segments -from .style import Style, StyleType -from .text import Text - -TokenType = Tuple[str, ...] - -WINDOWS = platform.system() == "Windows" -DEFAULT_THEME = "monokai" - -# The following styles are based on https://github.com/pygments/pygments/blob/master/pygments/formatters/terminal.py -# A few modifications were made - -ANSI_LIGHT: Dict[TokenType, Style] = { - Token: Style(), - Whitespace: Style(color="white"), - Comment: Style(dim=True), - Comment.Preproc: Style(color="cyan"), - Keyword: Style(color="blue"), - Keyword.Type: Style(color="cyan"), - Operator.Word: Style(color="magenta"), - Name.Builtin: Style(color="cyan"), - Name.Function: Style(color="green"), - Name.Namespace: Style(color="cyan", underline=True), - Name.Class: Style(color="green", underline=True), - Name.Exception: Style(color="cyan"), - Name.Decorator: Style(color="magenta", bold=True), - Name.Variable: Style(color="red"), - Name.Constant: Style(color="red"), - Name.Attribute: Style(color="cyan"), - Name.Tag: Style(color="bright_blue"), - String: Style(color="yellow"), - Number: Style(color="blue"), - Generic.Deleted: Style(color="bright_red"), - Generic.Inserted: Style(color="green"), - Generic.Heading: Style(bold=True), - Generic.Subheading: Style(color="magenta", bold=True), - Generic.Prompt: Style(bold=True), - Generic.Error: Style(color="bright_red"), - Error: Style(color="red", underline=True), -} - -ANSI_DARK: Dict[TokenType, Style] = { - Token: Style(), - Whitespace: Style(color="bright_black"), - Comment: Style(dim=True), - Comment.Preproc: Style(color="bright_cyan"), - Keyword: Style(color="bright_blue"), - Keyword.Type: Style(color="bright_cyan"), - Operator.Word: Style(color="bright_magenta"), - Name.Builtin: Style(color="bright_cyan"), - Name.Function: Style(color="bright_green"), - Name.Namespace: Style(color="bright_cyan", underline=True), - Name.Class: Style(color="bright_green", underline=True), - Name.Exception: 
Style(color="bright_cyan"), - Name.Decorator: Style(color="bright_magenta", bold=True), - Name.Variable: Style(color="bright_red"), - Name.Constant: Style(color="bright_red"), - Name.Attribute: Style(color="bright_cyan"), - Name.Tag: Style(color="bright_blue"), - String: Style(color="yellow"), - Number: Style(color="bright_blue"), - Generic.Deleted: Style(color="bright_red"), - Generic.Inserted: Style(color="bright_green"), - Generic.Heading: Style(bold=True), - Generic.Subheading: Style(color="bright_magenta", bold=True), - Generic.Prompt: Style(bold=True), - Generic.Error: Style(color="bright_red"), - Error: Style(color="red", underline=True), -} - -RICH_SYNTAX_THEMES = {"ansi_light": ANSI_LIGHT, "ansi_dark": ANSI_DARK} -NUMBERS_COLUMN_DEFAULT_PADDING = 2 - - -class SyntaxTheme(ABC): - """Base class for a syntax theme.""" - - @abstractmethod - def get_style_for_token(self, token_type: TokenType) -> Style: - """Get a style for a given Pygments token.""" - raise NotImplementedError # pragma: no cover - - @abstractmethod - def get_background_style(self) -> Style: - """Get the background color.""" - raise NotImplementedError # pragma: no cover - - -class PygmentsSyntaxTheme(SyntaxTheme): - """Syntax theme that delegates to Pygments theme.""" - - def __init__(self, theme: Union[str, Type[PygmentsStyle]]) -> None: - self._style_cache: Dict[TokenType, Style] = {} - if isinstance(theme, str): - try: - self._pygments_style_class = get_style_by_name(theme) - except ClassNotFound: - self._pygments_style_class = get_style_by_name("default") - else: - self._pygments_style_class = theme - - self._background_color = self._pygments_style_class.background_color - self._background_style = Style(bgcolor=self._background_color) - - def get_style_for_token(self, token_type: TokenType) -> Style: - """Get a style from a Pygments class.""" - try: - return self._style_cache[token_type] - except KeyError: - try: - pygments_style = self._pygments_style_class.style_for_token(token_type) - except KeyError: - style = Style.null() - else: - color = pygments_style["color"] - bgcolor = pygments_style["bgcolor"] - style = Style( - color="#" + color if color else "#000000", - bgcolor="#" + bgcolor if bgcolor else self._background_color, - bold=pygments_style["bold"], - italic=pygments_style["italic"], - underline=pygments_style["underline"], - ) - self._style_cache[token_type] = style - return style - - def get_background_style(self) -> Style: - return self._background_style - - -class ANSISyntaxTheme(SyntaxTheme): - """Syntax theme to use standard colors.""" - - def __init__(self, style_map: Dict[TokenType, Style]) -> None: - self.style_map = style_map - self._missing_style = Style.null() - self._background_style = Style.null() - self._style_cache: Dict[TokenType, Style] = {} - - def get_style_for_token(self, token_type: TokenType) -> Style: - """Look up style in the style map.""" - try: - return self._style_cache[token_type] - except KeyError: - # Styles form a hierarchy - # We need to go from most to least specific - # e.g. 
("foo", "bar", "baz") to ("foo", "bar") to ("foo",) - get_style = self.style_map.get - token = tuple(token_type) - style = self._missing_style - while token: - _style = get_style(token) - if _style is not None: - style = _style - break - token = token[:-1] - self._style_cache[token_type] = style - return style - - def get_background_style(self) -> Style: - return self._background_style - - -SyntaxPosition = Tuple[int, int] - - -class _SyntaxHighlightRange(NamedTuple): - """ - A range to highlight in a Syntax object. - `start` and `end` are 2-integers tuples, where the first integer is the line number - (starting from 1) and the second integer is the column index (starting from 0). - """ - - style: StyleType - start: SyntaxPosition - end: SyntaxPosition - - -class Syntax(JupyterMixin): - """Construct a Syntax object to render syntax highlighted code. - - Args: - code (str): Code to highlight. - lexer (Lexer | str): Lexer to use (see https://pygments.org/docs/lexers/) - theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "monokai". - dedent (bool, optional): Enable stripping of initial whitespace. Defaults to False. - line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False. - start_line (int, optional): Starting number for line numbers. Defaults to 1. - line_range (Tuple[int | None, int | None], optional): If given should be a tuple of the start and end line to render. - A value of None in the tuple indicates the range is open in that direction. - highlight_lines (Set[int]): A set of line numbers to highlight. - code_width: Width of code to render (not including line numbers), or ``None`` to use all available width. - tab_size (int, optional): Size of tabs. Defaults to 4. - word_wrap (bool, optional): Enable word wrapping. - background_color (str, optional): Optional background color, or None to use theme color. Defaults to None. - indent_guides (bool, optional): Show indent guides. Defaults to False. - padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding). 
- """ - - _pygments_style_class: Type[PygmentsStyle] - _theme: SyntaxTheme - - @classmethod - def get_theme(cls, name: Union[str, SyntaxTheme]) -> SyntaxTheme: - """Get a syntax theme instance.""" - if isinstance(name, SyntaxTheme): - return name - theme: SyntaxTheme - if name in RICH_SYNTAX_THEMES: - theme = ANSISyntaxTheme(RICH_SYNTAX_THEMES[name]) - else: - theme = PygmentsSyntaxTheme(name) - return theme - - def __init__( - self, - code: str, - lexer: Union[Lexer, str], - *, - theme: Union[str, SyntaxTheme] = DEFAULT_THEME, - dedent: bool = False, - line_numbers: bool = False, - start_line: int = 1, - line_range: Optional[Tuple[Optional[int], Optional[int]]] = None, - highlight_lines: Optional[Set[int]] = None, - code_width: Optional[int] = None, - tab_size: int = 4, - word_wrap: bool = False, - background_color: Optional[str] = None, - indent_guides: bool = False, - padding: PaddingDimensions = 0, - ) -> None: - self.code = code - self._lexer = lexer - self.dedent = dedent - self.line_numbers = line_numbers - self.start_line = start_line - self.line_range = line_range - self.highlight_lines = highlight_lines or set() - self.code_width = code_width - self.tab_size = tab_size - self.word_wrap = word_wrap - self.background_color = background_color - self.background_style = ( - Style(bgcolor=background_color) if background_color else Style() - ) - self.indent_guides = indent_guides - self.padding = padding - - self._theme = self.get_theme(theme) - self._stylized_ranges: List[_SyntaxHighlightRange] = [] - - @classmethod - def from_path( - cls, - path: str, - encoding: str = "utf-8", - lexer: Optional[Union[Lexer, str]] = None, - theme: Union[str, SyntaxTheme] = DEFAULT_THEME, - dedent: bool = False, - line_numbers: bool = False, - line_range: Optional[Tuple[int, int]] = None, - start_line: int = 1, - highlight_lines: Optional[Set[int]] = None, - code_width: Optional[int] = None, - tab_size: int = 4, - word_wrap: bool = False, - background_color: Optional[str] = None, - indent_guides: bool = False, - padding: PaddingDimensions = 0, - ) -> "Syntax": - """Construct a Syntax object from a file. - - Args: - path (str): Path to file to highlight. - encoding (str): Encoding of file. - lexer (str | Lexer, optional): Lexer to use. If None, lexer will be auto-detected from path/file content. - theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "emacs". - dedent (bool, optional): Enable stripping of initial whitespace. Defaults to True. - line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False. - start_line (int, optional): Starting number for line numbers. Defaults to 1. - line_range (Tuple[int, int], optional): If given should be a tuple of the start and end line to render. - highlight_lines (Set[int]): A set of line numbers to highlight. - code_width: Width of code to render (not including line numbers), or ``None`` to use all available width. - tab_size (int, optional): Size of tabs. Defaults to 4. - word_wrap (bool, optional): Enable word wrapping of code. - background_color (str, optional): Optional background color, or None to use theme color. Defaults to None. - indent_guides (bool, optional): Show indent guides. Defaults to False. - padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding). 
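# --- Editor's illustration (not part of the original rich source). ---
# A hedged usage sketch of the classmethod documented above, shown against the
# standalone `rich` package rather than pip's vendored copy; "example.py" is a
# placeholder path that must exist for the call to succeed:
from rich.console import Console
from rich.syntax import Syntax

syntax = Syntax.from_path(
    "example.py",
    theme="monokai",
    line_numbers=True,
    indent_guides=True,
    word_wrap=False,
)
Console().print(syntax)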
- - Returns: - [Syntax]: A Syntax object that may be printed to the console - """ - code = Path(path).read_text(encoding=encoding) - - if not lexer: - lexer = cls.guess_lexer(path, code=code) - - return cls( - code, - lexer, - theme=theme, - dedent=dedent, - line_numbers=line_numbers, - line_range=line_range, - start_line=start_line, - highlight_lines=highlight_lines, - code_width=code_width, - tab_size=tab_size, - word_wrap=word_wrap, - background_color=background_color, - indent_guides=indent_guides, - padding=padding, - ) - - @classmethod - def guess_lexer(cls, path: str, code: Optional[str] = None) -> str: - """Guess the alias of the Pygments lexer to use based on a path and an optional string of code. - If code is supplied, it will use a combination of the code and the filename to determine the - best lexer to use. For example, if the file is ``index.html`` and the file contains Django - templating syntax, then "html+django" will be returned. If the file is ``index.html``, and no - templating language is used, the "html" lexer will be used. If no string of code - is supplied, the lexer will be chosen based on the file extension.. - - Args: - path (AnyStr): The path to the file containing the code you wish to know the lexer for. - code (str, optional): Optional string of code that will be used as a fallback if no lexer - is found for the supplied path. - - Returns: - str: The name of the Pygments lexer that best matches the supplied path/code. - """ - lexer: Optional[Lexer] = None - lexer_name = "default" - if code: - try: - lexer = guess_lexer_for_filename(path, code) - except ClassNotFound: - pass - - if not lexer: - try: - _, ext = os.path.splitext(path) - if ext: - extension = ext.lstrip(".").lower() - lexer = get_lexer_by_name(extension) - except ClassNotFound: - pass - - if lexer: - if lexer.aliases: - lexer_name = lexer.aliases[0] - else: - lexer_name = lexer.name - - return lexer_name - - def _get_base_style(self) -> Style: - """Get the base style.""" - default_style = self._theme.get_background_style() + self.background_style - return default_style - - def _get_token_color(self, token_type: TokenType) -> Optional[Color]: - """Get a color (if any) for the given token. - - Args: - token_type (TokenType): A token type tuple from Pygments. - - Returns: - Optional[Color]: Color from theme, or None for no color. - """ - style = self._theme.get_style_for_token(token_type) - return style.color - - @property - def lexer(self) -> Optional[Lexer]: - """The lexer for this syntax, or None if no lexer was found. - - Tries to find the lexer by name if a string was passed to the constructor. - """ - - if isinstance(self._lexer, Lexer): - return self._lexer - try: - return get_lexer_by_name( - self._lexer, - stripnl=False, - ensurenl=True, - tabsize=self.tab_size, - ) - except ClassNotFound: - return None - - def highlight( - self, - code: str, - line_range: Optional[Tuple[Optional[int], Optional[int]]] = None, - ) -> Text: - """Highlight code and return a Text instance. - - Args: - code (str): Code to highlight. - line_range(Tuple[int, int], optional): Optional line range to highlight. - - Returns: - Text: A text instance containing highlighted syntax. 
- """ - - base_style = self._get_base_style() - justify: JustifyMethod = ( - "default" if base_style.transparent_background else "left" - ) - - text = Text( - justify=justify, - style=base_style, - tab_size=self.tab_size, - no_wrap=not self.word_wrap, - ) - _get_theme_style = self._theme.get_style_for_token - - lexer = self.lexer - - if lexer is None: - text.append(code) - else: - if line_range: - # More complicated path to only stylize a portion of the code - # This speeds up further operations as there are less spans to process - line_start, line_end = line_range - - def line_tokenize() -> Iterable[Tuple[Any, str]]: - """Split tokens to one per line.""" - assert lexer # required to make MyPy happy - we know lexer is not None at this point - - for token_type, token in lexer.get_tokens(code): - while token: - line_token, new_line, token = token.partition("\n") - yield token_type, line_token + new_line - - def tokens_to_spans() -> Iterable[Tuple[str, Optional[Style]]]: - """Convert tokens to spans.""" - tokens = iter(line_tokenize()) - line_no = 0 - _line_start = line_start - 1 if line_start else 0 - - # Skip over tokens until line start - while line_no < _line_start: - try: - _token_type, token = next(tokens) - except StopIteration: - break - yield (token, None) - if token.endswith("\n"): - line_no += 1 - # Generate spans until line end - for token_type, token in tokens: - yield (token, _get_theme_style(token_type)) - if token.endswith("\n"): - line_no += 1 - if line_end and line_no >= line_end: - break - - text.append_tokens(tokens_to_spans()) - - else: - text.append_tokens( - (token, _get_theme_style(token_type)) - for token_type, token in lexer.get_tokens(code) - ) - if self.background_color is not None: - text.stylize(f"on {self.background_color}") - - if self._stylized_ranges: - self._apply_stylized_ranges(text) - - return text - - def stylize_range( - self, style: StyleType, start: SyntaxPosition, end: SyntaxPosition - ) -> None: - """ - Adds a custom style on a part of the code, that will be applied to the syntax display when it's rendered. - Line numbers are 1-based, while column indexes are 0-based. - - Args: - style (StyleType): The style to apply. - start (Tuple[int, int]): The start of the range, in the form `[line number, column index]`. - end (Tuple[int, int]): The end of the range, in the form `[line number, column index]`. 
- """ - self._stylized_ranges.append(_SyntaxHighlightRange(style, start, end)) - - def _get_line_numbers_color(self, blend: float = 0.3) -> Color: - background_style = self._theme.get_background_style() + self.background_style - background_color = background_style.bgcolor - if background_color is None or background_color.is_system_defined: - return Color.default() - foreground_color = self._get_token_color(Token.Text) - if foreground_color is None or foreground_color.is_system_defined: - return foreground_color or Color.default() - new_color = blend_rgb( - background_color.get_truecolor(), - foreground_color.get_truecolor(), - cross_fade=blend, - ) - return Color.from_triplet(new_color) - - @property - def _numbers_column_width(self) -> int: - """Get the number of characters used to render the numbers column.""" - column_width = 0 - if self.line_numbers: - column_width = ( - len(str(self.start_line + self.code.count("\n"))) - + NUMBERS_COLUMN_DEFAULT_PADDING - ) - return column_width - - def _get_number_styles(self, console: Console) -> Tuple[Style, Style, Style]: - """Get background, number, and highlight styles for line numbers.""" - background_style = self._get_base_style() - if background_style.transparent_background: - return Style.null(), Style(dim=True), Style.null() - if console.color_system in ("256", "truecolor"): - number_style = Style.chain( - background_style, - self._theme.get_style_for_token(Token.Text), - Style(color=self._get_line_numbers_color()), - self.background_style, - ) - highlight_number_style = Style.chain( - background_style, - self._theme.get_style_for_token(Token.Text), - Style(bold=True, color=self._get_line_numbers_color(0.9)), - self.background_style, - ) - else: - number_style = background_style + Style(dim=True) - highlight_number_style = background_style + Style(dim=False) - return background_style, number_style, highlight_number_style - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - _, right, _, left = Padding.unpack(self.padding) - padding = left + right - if self.code_width is not None: - width = self.code_width + self._numbers_column_width + padding + 1 - return Measurement(self._numbers_column_width, width) - lines = self.code.splitlines() - width = ( - self._numbers_column_width - + padding - + (max(cell_len(line) for line in lines) if lines else 0) - ) - if self.line_numbers: - width += 1 - return Measurement(self._numbers_column_width, width) - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - segments = Segments(self._get_syntax(console, options)) - if self.padding: - yield Padding( - segments, style=self._theme.get_background_style(), pad=self.padding - ) - else: - yield segments - - def _get_syntax( - self, - console: Console, - options: ConsoleOptions, - ) -> Iterable[Segment]: - """ - Get the Segments for the Syntax object, excluding any vertical/horizontal padding - """ - transparent_background = self._get_base_style().transparent_background - code_width = ( - ( - (options.max_width - self._numbers_column_width - 1) - if self.line_numbers - else options.max_width - ) - if self.code_width is None - else self.code_width - ) - - ends_on_nl, processed_code = self._process_code(self.code) - text = self.highlight(processed_code, self.line_range) - - if not self.line_numbers and not self.word_wrap and not self.line_range: - if not ends_on_nl: - text.remove_suffix("\n") - # Simple case of just rendering text - style = ( - self._get_base_style() - + 
self._theme.get_style_for_token(Comment) - + Style(dim=True) - + self.background_style - ) - if self.indent_guides and not options.ascii_only: - text = text.with_indent_guides(self.tab_size, style=style) - text.overflow = "crop" - if style.transparent_background: - yield from console.render( - text, options=options.update(width=code_width) - ) - else: - syntax_lines = console.render_lines( - text, - options.update(width=code_width, height=None, justify="left"), - style=self.background_style, - pad=True, - new_lines=True, - ) - for syntax_line in syntax_lines: - yield from syntax_line - return - - start_line, end_line = self.line_range or (None, None) - line_offset = 0 - if start_line: - line_offset = max(0, start_line - 1) - lines: Union[List[Text], Lines] = text.split("\n", allow_blank=ends_on_nl) - if self.line_range: - if line_offset > len(lines): - return - lines = lines[line_offset:end_line] - - if self.indent_guides and not options.ascii_only: - style = ( - self._get_base_style() - + self._theme.get_style_for_token(Comment) - + Style(dim=True) - + self.background_style - ) - lines = ( - Text("\n") - .join(lines) - .with_indent_guides(self.tab_size, style=style + Style(italic=False)) - .split("\n", allow_blank=True) - ) - - numbers_column_width = self._numbers_column_width - render_options = options.update(width=code_width) - - highlight_line = self.highlight_lines.__contains__ - _Segment = Segment - new_line = _Segment("\n") - - line_pointer = "> " if options.legacy_windows else "❱ " - - ( - background_style, - number_style, - highlight_number_style, - ) = self._get_number_styles(console) - - for line_no, line in enumerate(lines, self.start_line + line_offset): - if self.word_wrap: - wrapped_lines = console.render_lines( - line, - render_options.update(height=None, justify="left"), - style=background_style, - pad=not transparent_background, - ) - else: - segments = list(line.render(console, end="")) - if options.no_wrap: - wrapped_lines = [segments] - else: - wrapped_lines = [ - _Segment.adjust_line_length( - segments, - render_options.max_width, - style=background_style, - pad=not transparent_background, - ) - ] - - if self.line_numbers: - wrapped_line_left_pad = _Segment( - " " * numbers_column_width + " ", background_style - ) - for first, wrapped_line in loop_first(wrapped_lines): - if first: - line_column = str(line_no).rjust(numbers_column_width - 2) + " " - if highlight_line(line_no): - yield _Segment(line_pointer, Style(color="red")) - yield _Segment(line_column, highlight_number_style) - else: - yield _Segment(" ", highlight_number_style) - yield _Segment(line_column, number_style) - else: - yield wrapped_line_left_pad - yield from wrapped_line - yield new_line - else: - for wrapped_line in wrapped_lines: - yield from wrapped_line - yield new_line - - def _apply_stylized_ranges(self, text: Text) -> None: - """ - Apply stylized ranges to a text instance, - using the given code to determine the right portion to apply the style to. - - Args: - text (Text): Text instance to apply the style to. - """ - code = text.plain - newlines_offsets = [ - # Let's add outer boundaries at each side of the list: - 0, - # N.B. 
using "\n" here is much faster than using metacharacters such as "^" or "\Z": - *[ - match.start() + 1 - for match in re.finditer("\n", code, flags=re.MULTILINE) - ], - len(code) + 1, - ] - - for stylized_range in self._stylized_ranges: - start = _get_code_index_for_syntax_position( - newlines_offsets, stylized_range.start - ) - end = _get_code_index_for_syntax_position( - newlines_offsets, stylized_range.end - ) - if start is not None and end is not None: - text.stylize(stylized_range.style, start, end) - - def _process_code(self, code: str) -> Tuple[bool, str]: - """ - Applies various processing to a raw code string - (normalises it so it always ends with a line return, dedents it if necessary, etc.) - - Args: - code (str): The raw code string to process - - Returns: - Tuple[bool, str]: the boolean indicates whether the raw code ends with a line return, - while the string is the processed code. - """ - ends_on_nl = code.endswith("\n") - processed_code = code if ends_on_nl else code + "\n" - processed_code = ( - textwrap.dedent(processed_code) if self.dedent else processed_code - ) - processed_code = processed_code.expandtabs(self.tab_size) - return ends_on_nl, processed_code - - -def _get_code_index_for_syntax_position( - newlines_offsets: Sequence[int], position: SyntaxPosition -) -> Optional[int]: - """ - Returns the index of the code string for the given positions. - - Args: - newlines_offsets (Sequence[int]): The offset of each newline character found in the code snippet. - position (SyntaxPosition): The position to search for. - - Returns: - Optional[int]: The index of the code string for this position, or `None` - if the given position's line number is out of range (if it's the column that is out of range - we silently clamp its value so that it reaches the end of the line) - """ - lines_count = len(newlines_offsets) - - line_number, column_index = position - if line_number > lines_count or len(newlines_offsets) < (line_number + 1): - return None # `line_number` is out of range - line_index = line_number - 1 - line_length = newlines_offsets[line_index + 1] - newlines_offsets[line_index] - 1 - # If `column_index` is out of range: let's silently clamp it: - column_index = min(line_length, column_index) - return newlines_offsets[line_index] + column_index - - -if __name__ == "__main__": # pragma: no cover - import argparse - import sys - - parser = argparse.ArgumentParser( - description="Render syntax to the console with Rich" - ) - parser.add_argument( - "path", - metavar="PATH", - help="path to file, or - for stdin", - ) - parser.add_argument( - "-c", - "--force-color", - dest="force_color", - action="store_true", - default=None, - help="force color for non-terminals", - ) - parser.add_argument( - "-i", - "--indent-guides", - dest="indent_guides", - action="store_true", - default=False, - help="display indent guides", - ) - parser.add_argument( - "-l", - "--line-numbers", - dest="line_numbers", - action="store_true", - help="render line numbers", - ) - parser.add_argument( - "-w", - "--width", - type=int, - dest="width", - default=None, - help="width of output (default will auto-detect)", - ) - parser.add_argument( - "-r", - "--wrap", - dest="word_wrap", - action="store_true", - default=False, - help="word wrap long lines", - ) - parser.add_argument( - "-s", - "--soft-wrap", - action="store_true", - dest="soft_wrap", - default=False, - help="enable soft wrapping mode", - ) - parser.add_argument( - "-t", "--theme", dest="theme", default="monokai", help="pygments theme" - ) - 
parser.add_argument( - "-b", - "--background-color", - dest="background_color", - default=None, - help="Override background color", - ) - parser.add_argument( - "-x", - "--lexer", - default=None, - dest="lexer_name", - help="Lexer name", - ) - parser.add_argument( - "-p", "--padding", type=int, default=0, dest="padding", help="Padding" - ) - parser.add_argument( - "--highlight-line", - type=int, - default=None, - dest="highlight_line", - help="The line number (not index!) to highlight", - ) - args = parser.parse_args() - - from pip._vendor.rich.console import Console - - console = Console(force_terminal=args.force_color, width=args.width) - - if args.path == "-": - code = sys.stdin.read() - syntax = Syntax( - code=code, - lexer=args.lexer_name, - line_numbers=args.line_numbers, - word_wrap=args.word_wrap, - theme=args.theme, - background_color=args.background_color, - indent_guides=args.indent_guides, - padding=args.padding, - highlight_lines={args.highlight_line}, - ) - else: - syntax = Syntax.from_path( - args.path, - lexer=args.lexer_name, - line_numbers=args.line_numbers, - word_wrap=args.word_wrap, - theme=args.theme, - background_color=args.background_color, - indent_guides=args.indent_guides, - padding=args.padding, - highlight_lines={args.highlight_line}, - ) - console.print(syntax, soft_wrap=args.soft_wrap) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/bcppcompiler.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/bcppcompiler.py deleted file mode 100644 index ba45ea2b9500e62b8cf6786432336f5b1ddddec1..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/bcppcompiler.py +++ /dev/null @@ -1,401 +0,0 @@ -"""distutils.bcppcompiler - -Contains BorlandCCompiler, an implementation of the abstract CCompiler class -for the Borland C++ compiler. -""" - -# This implementation by Lyle Johnson, based on the original msvccompiler.py -# module and using the directions originally published by Gordon Williams. - -# XXX looks like there's a LOT of overlap between these two classes: -# someone should sit down and factor out the common code as -# WindowsCCompiler! --GPW - - -import os -import warnings - -from .errors import ( - DistutilsExecError, - CompileError, - LibError, - LinkError, - UnknownFileError, -) -from .ccompiler import CCompiler, gen_preprocess_options -from .file_util import write_file -from .dep_util import newer -from ._log import log - - -warnings.warn( - "bcppcompiler is deprecated and slated to be removed " - "in the future. Please discontinue use or file an issue " - "with pypa/distutils describing your use case.", - DeprecationWarning, -) - - -class BCPPCompiler(CCompiler): - """Concrete class that implements an interface to the Borland C/C++ - compiler, as defined by the CCompiler abstract class. - """ - - compiler_type = 'bcpp' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. 
- src_extensions = _c_extensions + _cpp_extensions - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - - # These executables are assumed to all be in the path. - # Borland doesn't seem to use any special registry settings to - # indicate their installation locations. - - self.cc = "bcc32.exe" - self.linker = "ilink32.exe" - self.lib = "tlib.exe" - - self.preprocess_options = None - self.compile_options = ['/tWM', '/O2', '/q', '/g0'] - self.compile_options_debug = ['/tWM', '/Od', '/q', '/g0'] - - self.ldflags_shared = ['/Tpd', '/Gn', '/q', '/x'] - self.ldflags_shared_debug = ['/Tpd', '/Gn', '/q', '/x'] - self.ldflags_static = [] - self.ldflags_exe = ['/Gn', '/q', '/x'] - self.ldflags_exe_debug = ['/Gn', '/q', '/x', '/r'] - - # -- Worker methods ------------------------------------------------ - - def compile( # noqa: C901 - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - macros, objects, extra_postargs, pp_opts, build = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - compile_opts = extra_preargs or [] - compile_opts.append('-c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - # XXX why do the normpath here? - src = os.path.normpath(src) - obj = os.path.normpath(obj) - # XXX _setup_compile() did a mkpath() too but before the normpath. - # Is it possible to skip the normpath? - self.mkpath(os.path.dirname(obj)) - - if ext == '.res': - # This is already a binary file -- skip it. - continue # the 'for' loop - if ext == '.rc': - # This needs to be compiled to a .res file -- do it now. - try: - self.spawn(["brcc32", "-fo", obj, src]) - except DistutilsExecError as msg: - raise CompileError(msg) - continue # the 'for' loop - - # The next two are both for the real compiler. - if ext in self._c_extensions: - input_opt = "" - elif ext in self._cpp_extensions: - input_opt = "-P" - else: - # Unknown file type -- no extra options. The compiler - # will probably fail, but let it just in case this is a - # file the compiler recognizes even if we don't. - input_opt = "" - - output_opt = "-o" + obj - - # Compiler command line syntax is: "bcc32 [options] file(s)". - # Note that the source file names must appear at the end of - # the command line. - try: - self.spawn( - [self.cc] - + compile_opts - + pp_opts - + [input_opt, output_opt] - + extra_postargs - + [src] - ) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - # compile () - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = [output_filename, '/u'] + objects - if debug: - pass # XXX what goes here? 
- try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # create_static_lib () - - def link( # noqa: C901 - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - # XXX this ignores 'build_temp'! should follow the lead of - # msvccompiler.py - - (objects, output_dir) = self._fix_object_args(objects, output_dir) - (libraries, library_dirs, runtime_library_dirs) = self._fix_lib_args( - libraries, library_dirs, runtime_library_dirs - ) - - if runtime_library_dirs: - log.warning( - "I don't know what to do with 'runtime_library_dirs': %s", - str(runtime_library_dirs), - ) - - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - # Figure out linker args based on type of target. - if target_desc == CCompiler.EXECUTABLE: - startup_obj = 'c0w32' - if debug: - ld_args = self.ldflags_exe_debug[:] - else: - ld_args = self.ldflags_exe[:] - else: - startup_obj = 'c0d32' - if debug: - ld_args = self.ldflags_shared_debug[:] - else: - ld_args = self.ldflags_shared[:] - - # Create a temporary exports file for use by the linker - if export_symbols is None: - def_file = '' - else: - head, tail = os.path.split(output_filename) - modname, ext = os.path.splitext(tail) - temp_dir = os.path.dirname(objects[0]) # preserve tree structure - def_file = os.path.join(temp_dir, '%s.def' % modname) - contents = ['EXPORTS'] - for sym in export_symbols or []: - contents.append(' {}=_{}'.format(sym, sym)) - self.execute(write_file, (def_file, contents), "writing %s" % def_file) - - # Borland C++ has problems with '/' in paths - objects2 = map(os.path.normpath, objects) - # split objects in .obj and .res files - # Borland C++ needs them at different positions in the command line - objects = [startup_obj] - resources = [] - for file in objects2: - (base, ext) = os.path.splitext(os.path.normcase(file)) - if ext == '.res': - resources.append(file) - else: - objects.append(file) - - for ell in library_dirs: - ld_args.append("/L%s" % os.path.normpath(ell)) - ld_args.append("/L.") # we sometimes use relative paths - - # list of object files - ld_args.extend(objects) - - # XXX the command-line syntax for Borland C++ is a bit wonky; - # certain filenames are jammed together in one big string, but - # comma-delimited. This doesn't mesh too well with the - # Unix-centric attitude (with a DOS/Windows quoting hack) of - # 'spawn()', so constructing the argument list is a bit - # awkward. Note that doing the obvious thing and jamming all - # the filenames and commas into one argument would be wrong, - # because 'spawn()' would quote any filenames with spaces in - # them. Arghghh!. Apparently it works fine as coded... 
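# --- Editor's illustration (not part of the original distutils source). ---
# The comment above describes the unusual ilink32 argument layout that link()
# assembles: bare comma tokens separate the object files, the output name, the
# (empty) map-file slot, the libraries, the .def file and the .res files.
# A runnable sketch of the list built for a simple shared library; the paths,
# object names and output name are made up:
ld_args = (
    ["/Tpd", "/Gn", "/q", "/x"]        # self.ldflags_shared
    + ["/LC:\\bcc\\lib", "/L."]        # library search paths
    + ["c0d32", "foo.obj"]             # startup object + object files
    + [",", "foo.dll"]                 # comma, then the output file name
    + [",,"]                           # empty map-file slot
    + ["import32", "cw32mt"]           # default libraries appended by link()
    + [",", "foo.def"]                 # comma, then the exports (.def) file
    + [","]                            # comma introducing the resource files
    + ["foo.res"]
)
print("ilink32.exe " + " ".join(ld_args))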
- - # name of dll/exe file - ld_args.extend([',', output_filename]) - # no map file and start libraries - ld_args.append(',,') - - for lib in libraries: - # see if we find it and if there is a bcpp specific lib - # (xxx_bcpp.lib) - libfile = self.find_library_file(library_dirs, lib, debug) - if libfile is None: - ld_args.append(lib) - # probably a BCPP internal library -- don't warn - else: - # full name which prefers bcpp_xxx.lib over xxx.lib - ld_args.append(libfile) - - # some default libraries - ld_args.extend(('import32', 'cw32mt')) - - # def file for export symbols - ld_args.extend([',', def_file]) - # add resource files - ld_args.append(',') - ld_args.extend(resources) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # link () - - # -- Miscellaneous methods ----------------------------------------- - - def find_library_file(self, dirs, lib, debug=0): - # List of effective library names to try, in order of preference: - # xxx_bcpp.lib is better than xxx.lib - # and xxx_d.lib is better than xxx.lib if debug is set - # - # The "_bcpp" suffix is to handle a Python installation for people - # with multiple compilers (primarily Distutils hackers, I suspect - # ;-). The idea is they'd have one static library for each - # compiler they care about, since (almost?) every Windows compiler - # seems to have a different format for static libraries. - if debug: - dlib = lib + "_d" - try_names = (dlib + "_bcpp", lib + "_bcpp", dlib, lib) - else: - try_names = (lib + "_bcpp", lib) - - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.exists(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None - - # overwrite the one from CCompiler to support rc and res-files - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - if output_dir is None: - output_dir = '' - obj_names = [] - for src_name in source_filenames: - # use normcase to make sure '.rc' is really '.rc' and not '.RC' - (base, ext) = os.path.splitext(os.path.normcase(src_name)) - if ext not in (self.src_extensions + ['.rc', '.res']): - raise UnknownFileError( - "unknown file type '{}' (from '{}')".format(ext, src_name) - ) - if strip_dir: - base = os.path.basename(base) - if ext == '.res': - # these can go unchanged - obj_names.append(os.path.join(output_dir, base + ext)) - elif ext == '.rc': - # these need to be compiled to .res-files - obj_names.append(os.path.join(output_dir, base + '.res')) - else: - obj_names.append(os.path.join(output_dir, base + self.obj_extension)) - return obj_names - - # object_filenames () - - def preprocess( - self, - source, - output_file=None, - macros=None, - include_dirs=None, - extra_preargs=None, - extra_postargs=None, - ): - (_, macros, include_dirs) = self._fix_compile_args(None, macros, include_dirs) - pp_opts = gen_preprocess_options(macros, include_dirs) - pp_args = ['cpp32.exe'] + pp_opts - if output_file is not None: - pp_args.append('-o' + output_file) - if extra_preargs: - pp_args[:0] = extra_preargs - if extra_postargs: - pp_args.extend(extra_postargs) - pp_args.append(source) - - # We need to preprocess: either we're being forced to, or the - # source file is newer than the target (or the target 
doesn't - # exist). - if self.force or output_file is None or newer(source, output_file): - if output_file: - self.mkpath(os.path.dirname(output_file)) - try: - self.spawn(pp_args) - except DistutilsExecError as msg: - print(msg) - raise CompileError(msg) - - # preprocess() diff --git a/spaces/plzdontcry/dakubettergpt/src/hooks/useInitialiseNewChat.ts b/spaces/plzdontcry/dakubettergpt/src/hooks/useInitialiseNewChat.ts deleted file mode 100644 index 999754409d6a0e81ae5f22b6ed6be1a162cfe7df..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/hooks/useInitialiseNewChat.ts +++ /dev/null @@ -1,18 +0,0 @@ -import React from 'react'; -import useStore from '@store/store'; -import { MessageInterface } from '@type/chat'; -import { generateDefaultChat } from '@constants/chat'; - -const useInitialiseNewChat = () => { - const setChats = useStore((state) => state.setChats); - const setCurrentChatIndex = useStore((state) => state.setCurrentChatIndex); - - const initialiseNewChat = () => { - setChats([generateDefaultChat()]); - setCurrentChatIndex(0); - }; - - return initialiseNewChat; -}; - -export default useInitialiseNewChat; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/__init__.py deleted file mode 100644 index 156cb232a7aa80eee1526c7598f72043de10473f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Empty __init__.py file to signal Python this directory is a package.""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_simple_templates/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_simple_templates/__init__.py deleted file mode 100644 index 8c61b4e4b15ae686dd6d7370b054835a792ffca4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_simple_templates/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .simpledropdown import SimpleDropdown -from .simpletextbox import SimpleTextbox - -__all__ = ["SimpleDropdown", "SimpleTextbox"] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ee935b0c.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ee935b0c.css deleted file mode 100644 index 259b46e48a07998f806756482a6f76510a64a6be..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ee935b0c.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-h6ogpl{width:var(--size-10);height:var(--size-10)}.table.svelte-h6ogpl{margin:0 auto} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/parser_core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/parser_core.py deleted file mode 100644 index ca5ab2566ba5bd00e654a7af39e3603717ae7194..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/parser_core.py +++ /dev/null @@ -1,45 +0,0 @@ -""" - * class Core - * - * Top-level rules executor. Glues block/inline parsers and does intermediate - * transformations. 
-""" -from __future__ import annotations - -from typing import Callable - -from .ruler import Ruler -from .rules_core import ( - block, - inline, - linkify, - normalize, - replace, - smartquotes, - text_join, -) -from .rules_core.state_core import StateCore - -RuleFuncCoreType = Callable[[StateCore], None] - -_rules: list[tuple[str, RuleFuncCoreType]] = [ - ("normalize", normalize), - ("block", block), - ("inline", inline), - ("linkify", linkify), - ("replacements", replace), - ("smartquotes", smartquotes), - ("text_join", text_join), -] - - -class ParserCore: - def __init__(self) -> None: - self.ruler = Ruler[RuleFuncCoreType]() - for name, rule in _rules: - self.ruler.push(name, rule) - - def process(self, state: StateCore) -> None: - """Executes core chain rules.""" - for rule in self.ruler.getRules(""): - rule(state) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/tests/test_multipart.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/tests/test_multipart.py deleted file mode 100644 index 089f4518c1be6954b38166da39a2026f01c8af72..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/tests/test_multipart.py +++ /dev/null @@ -1,1305 +0,0 @@ -import os -import sys -import glob -import yaml -import base64 -import random -import tempfile -import unittest -from .compat import ( - parametrize, - parametrize_class, - slow_test, -) -from io import BytesIO -from unittest.mock import MagicMock, Mock, patch - -from ..multipart import * - - -# Get the current directory for our later test cases. -curr_dir = os.path.abspath(os.path.dirname(__file__)) - - -def force_bytes(val): - if isinstance(val, str): - val = val.encode(sys.getfilesystemencoding()) - - return val - - -class TestField(unittest.TestCase): - def setUp(self): - self.f = Field('foo') - - def test_name(self): - self.assertEqual(self.f.field_name, 'foo') - - def test_data(self): - self.f.write(b'test123') - self.assertEqual(self.f.value, b'test123') - - def test_cache_expiration(self): - self.f.write(b'test') - self.assertEqual(self.f.value, b'test') - self.f.write(b'123') - self.assertEqual(self.f.value, b'test123') - - def test_finalize(self): - self.f.write(b'test123') - self.f.finalize() - self.assertEqual(self.f.value, b'test123') - - def test_close(self): - self.f.write(b'test123') - self.f.close() - self.assertEqual(self.f.value, b'test123') - - def test_from_value(self): - f = Field.from_value(b'name', b'value') - self.assertEqual(f.field_name, b'name') - self.assertEqual(f.value, b'value') - - f2 = Field.from_value(b'name', None) - self.assertEqual(f2.value, None) - - def test_equality(self): - f1 = Field.from_value(b'name', b'value') - f2 = Field.from_value(b'name', b'value') - - self.assertEqual(f1, f2) - - def test_equality_with_other(self): - f = Field.from_value(b'foo', b'bar') - self.assertFalse(f == b'foo') - self.assertFalse(b'foo' == f) - - def test_set_none(self): - f = Field(b'foo') - self.assertEqual(f.value, b'') - - f.set_none() - self.assertEqual(f.value, None) - - -class TestFile(unittest.TestCase): - def setUp(self): - self.c = {} - self.d = force_bytes(tempfile.mkdtemp()) - self.f = File(b'foo.txt', config=self.c) - - def assert_data(self, data): - f = self.f.file_object - f.seek(0) - self.assertEqual(f.read(), data) - f.seek(0) - f.truncate() - - def assert_exists(self): - full_path = os.path.join(self.d, self.f.actual_file_name) - self.assertTrue(os.path.exists(full_path)) - - def 
test_simple(self): - self.f.write(b'foobar') - self.assert_data(b'foobar') - - def test_invalid_write(self): - m = Mock() - m.write.return_value = 5 - self.f._fileobj = m - v = self.f.write(b'foobar') - self.assertEqual(v, 5) - - def test_file_fallback(self): - self.c['MAX_MEMORY_FILE_SIZE'] = 1 - - self.f.write(b'1') - self.assertTrue(self.f.in_memory) - self.assert_data(b'1') - - self.f.write(b'123') - self.assertFalse(self.f.in_memory) - self.assert_data(b'123') - - # Test flushing too. - old_obj = self.f.file_object - self.f.flush_to_disk() - self.assertFalse(self.f.in_memory) - self.assertIs(self.f.file_object, old_obj) - - def test_file_fallback_with_data(self): - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - self.f.write(b'1' * 10) - self.assertTrue(self.f.in_memory) - - self.f.write(b'2' * 10) - self.assertFalse(self.f.in_memory) - - self.assert_data(b'11111111112222222222') - - def test_file_name(self): - # Write to this dir. - self.c['UPLOAD_DIR'] = self.d - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - # Write. - self.f.write(b'12345678901') - self.assertFalse(self.f.in_memory) - - # Assert that the file exists - self.assertIsNotNone(self.f.actual_file_name) - self.assert_exists() - - def test_file_full_name(self): - # Write to this dir. - self.c['UPLOAD_DIR'] = self.d - self.c['UPLOAD_KEEP_FILENAME'] = True - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - # Write. - self.f.write(b'12345678901') - self.assertFalse(self.f.in_memory) - - # Assert that the file exists - self.assertEqual(self.f.actual_file_name, b'foo') - self.assert_exists() - - def test_file_full_name_with_ext(self): - self.c['UPLOAD_DIR'] = self.d - self.c['UPLOAD_KEEP_FILENAME'] = True - self.c['UPLOAD_KEEP_EXTENSIONS'] = True - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - # Write. - self.f.write(b'12345678901') - self.assertFalse(self.f.in_memory) - - # Assert that the file exists - self.assertEqual(self.f.actual_file_name, b'foo.txt') - self.assert_exists() - - def test_file_full_name_with_ext(self): - self.c['UPLOAD_DIR'] = self.d - self.c['UPLOAD_KEEP_FILENAME'] = True - self.c['UPLOAD_KEEP_EXTENSIONS'] = True - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - # Write. - self.f.write(b'12345678901') - self.assertFalse(self.f.in_memory) - - # Assert that the file exists - self.assertEqual(self.f.actual_file_name, b'foo.txt') - self.assert_exists() - - def test_no_dir_with_extension(self): - self.c['UPLOAD_KEEP_EXTENSIONS'] = True - self.c['MAX_MEMORY_FILE_SIZE'] = 10 - - # Write. - self.f.write(b'12345678901') - self.assertFalse(self.f.in_memory) - - # Assert that the file exists - ext = os.path.splitext(self.f.actual_file_name)[1] - self.assertEqual(ext, b'.txt') - self.assert_exists() - - def test_invalid_dir_with_name(self): - # Write to this dir. - self.c['UPLOAD_DIR'] = force_bytes(os.path.join('/', 'tmp', 'notexisting')) - self.c['UPLOAD_KEEP_FILENAME'] = True - self.c['MAX_MEMORY_FILE_SIZE'] = 5 - - # Write. - with self.assertRaises(FileError): - self.f.write(b'1234567890') - - def test_invalid_dir_no_name(self): - # Write to this dir. - self.c['UPLOAD_DIR'] = force_bytes(os.path.join('/', 'tmp', 'notexisting')) - self.c['UPLOAD_KEEP_FILENAME'] = False - self.c['MAX_MEMORY_FILE_SIZE'] = 5 - - # Write. - with self.assertRaises(FileError): - self.f.write(b'1234567890') - - # TODO: test uploading two files with the same name. 
- - -class TestParseOptionsHeader(unittest.TestCase): - def test_simple(self): - t, p = parse_options_header('application/json') - self.assertEqual(t, b'application/json') - self.assertEqual(p, {}) - - def test_blank(self): - t, p = parse_options_header('') - self.assertEqual(t, b'') - self.assertEqual(p, {}) - - def test_single_param(self): - t, p = parse_options_header('application/json;par=val') - self.assertEqual(t, b'application/json') - self.assertEqual(p, {b'par': b'val'}) - - def test_single_param_with_spaces(self): - t, p = parse_options_header(b'application/json; par=val') - self.assertEqual(t, b'application/json') - self.assertEqual(p, {b'par': b'val'}) - - def test_multiple_params(self): - t, p = parse_options_header(b'application/json;par=val;asdf=foo') - self.assertEqual(t, b'application/json') - self.assertEqual(p, {b'par': b'val', b'asdf': b'foo'}) - - def test_quoted_param(self): - t, p = parse_options_header(b'application/json;param="quoted"') - self.assertEqual(t, b'application/json') - self.assertEqual(p, {b'param': b'quoted'}) - - def test_quoted_param_with_semicolon(self): - t, p = parse_options_header(b'application/json;param="quoted;with;semicolons"') - self.assertEqual(p[b'param'], b'quoted;with;semicolons') - - def test_quoted_param_with_escapes(self): - t, p = parse_options_header(b'application/json;param="This \\" is \\" a \\" quote"') - self.assertEqual(p[b'param'], b'This " is " a " quote') - - def test_handles_ie6_bug(self): - t, p = parse_options_header(b'text/plain; filename="C:\\this\\is\\a\\path\\file.txt"') - - self.assertEqual(p[b'filename'], b'file.txt') - - -class TestBaseParser(unittest.TestCase): - def setUp(self): - self.b = BaseParser() - self.b.callbacks = {} - - def test_callbacks(self): - # The stupid list-ness is to get around lack of nonlocal on py2 - l = [0] - def on_foo(): - l[0] += 1 - - self.b.set_callback('foo', on_foo) - self.b.callback('foo') - self.assertEqual(l[0], 1) - - self.b.set_callback('foo', None) - self.b.callback('foo') - self.assertEqual(l[0], 1) - - -class TestQuerystringParser(unittest.TestCase): - def assert_fields(self, *args, **kwargs): - if kwargs.pop('finalize', True): - self.p.finalize() - - self.assertEqual(self.f, list(args)) - if kwargs.get('reset', True): - self.f = [] - - def setUp(self): - self.reset() - - def reset(self): - self.f = [] - - name_buffer = [] - data_buffer = [] - - def on_field_name(data, start, end): - name_buffer.append(data[start:end]) - - def on_field_data(data, start, end): - data_buffer.append(data[start:end]) - - def on_field_end(): - self.f.append(( - b''.join(name_buffer), - b''.join(data_buffer) - )) - - del name_buffer[:] - del data_buffer[:] - - callbacks = { - 'on_field_name': on_field_name, - 'on_field_data': on_field_data, - 'on_field_end': on_field_end - } - - self.p = QuerystringParser(callbacks) - - def test_simple_querystring(self): - self.p.write(b'foo=bar') - - self.assert_fields((b'foo', b'bar')) - - def test_querystring_blank_beginning(self): - self.p.write(b'&foo=bar') - - self.assert_fields((b'foo', b'bar')) - - def test_querystring_blank_end(self): - self.p.write(b'foo=bar&') - - self.assert_fields((b'foo', b'bar')) - - def test_multiple_querystring(self): - self.p.write(b'foo=bar&asdf=baz') - - self.assert_fields( - (b'foo', b'bar'), - (b'asdf', b'baz') - ) - - def test_streaming_simple(self): - self.p.write(b'foo=bar&') - self.assert_fields( - (b'foo', b'bar'), - finalize=False - ) - - self.p.write(b'asdf=baz') - self.assert_fields( - (b'asdf', b'baz') - ) - - def 
test_streaming_break(self): - self.p.write(b'foo=one') - self.assert_fields(finalize=False) - - self.p.write(b'two') - self.assert_fields(finalize=False) - - self.p.write(b'three') - self.assert_fields(finalize=False) - - self.p.write(b'&asd') - self.assert_fields( - (b'foo', b'onetwothree'), - finalize=False - ) - - self.p.write(b'f=baz') - self.assert_fields( - (b'asdf', b'baz') - ) - - def test_semicolon_separator(self): - self.p.write(b'foo=bar;asdf=baz') - - self.assert_fields( - (b'foo', b'bar'), - (b'asdf', b'baz') - ) - - def test_too_large_field(self): - self.p.max_size = 15 - - # Note: len = 8 - self.p.write(b"foo=bar&") - self.assert_fields((b'foo', b'bar'), finalize=False) - - # Note: len = 8, only 7 bytes processed - self.p.write(b'a=123456') - self.assert_fields((b'a', b'12345')) - - def test_invalid_max_size(self): - with self.assertRaises(ValueError): - p = QuerystringParser(max_size=-100) - - def test_strict_parsing_pass(self): - data = b'foo=bar&another=asdf' - for first, last in split_all(data): - self.reset() - self.p.strict_parsing = True - - print(f"{first!r} / {last!r}") - - self.p.write(first) - self.p.write(last) - self.assert_fields((b'foo', b'bar'), (b'another', b'asdf')) - - def test_strict_parsing_fail_double_sep(self): - data = b'foo=bar&&another=asdf' - for first, last in split_all(data): - self.reset() - self.p.strict_parsing = True - - cnt = 0 - with self.assertRaises(QuerystringParseError) as cm: - cnt += self.p.write(first) - cnt += self.p.write(last) - self.p.finalize() - - # The offset should occur at 8 bytes into the data (as a whole), - # so we calculate the offset into the chunk. - if cm is not None: - self.assertEqual(cm.exception.offset, 8 - cnt) - - def test_double_sep(self): - data = b'foo=bar&&another=asdf' - for first, last in split_all(data): - print(f" {first!r} / {last!r} ") - self.reset() - - cnt = 0 - cnt += self.p.write(first) - cnt += self.p.write(last) - - self.assert_fields((b'foo', b'bar'), (b'another', b'asdf')) - - def test_strict_parsing_fail_no_value(self): - self.p.strict_parsing = True - with self.assertRaises(QuerystringParseError) as cm: - self.p.write(b'foo=bar&blank&another=asdf') - - if cm is not None: - self.assertEqual(cm.exception.offset, 8) - - def test_success_no_value(self): - self.p.write(b'foo=bar&blank&another=asdf') - self.assert_fields( - (b'foo', b'bar'), - (b'blank', b''), - (b'another', b'asdf') - ) - - def test_repr(self): - # Issue #29; verify we don't assert on repr() - _ignored = repr(self.p) - - -class TestOctetStreamParser(unittest.TestCase): - def setUp(self): - self.d = [] - self.started = 0 - self.finished = 0 - - def on_start(): - self.started += 1 - - def on_data(data, start, end): - self.d.append(data[start:end]) - - def on_end(): - self.finished += 1 - - callbacks = { - 'on_start': on_start, - 'on_data': on_data, - 'on_end': on_end - } - - self.p = OctetStreamParser(callbacks) - - def assert_data(self, data, finalize=True): - self.assertEqual(b''.join(self.d), data) - self.d = [] - - def assert_started(self, val=True): - if val: - self.assertEqual(self.started, 1) - else: - self.assertEqual(self.started, 0) - - def assert_finished(self, val=True): - if val: - self.assertEqual(self.finished, 1) - else: - self.assertEqual(self.finished, 0) - - def test_simple(self): - # Assert is not started - self.assert_started(False) - - # Write something, it should then be started + have data - self.p.write(b'foobar') - self.assert_started() - self.assert_data(b'foobar') - - # Finalize, and check - 
self.assert_finished(False) - self.p.finalize() - self.assert_finished() - - def test_multiple_chunks(self): - self.p.write(b'foo') - self.p.write(b'bar') - self.p.write(b'baz') - self.p.finalize() - - self.assert_data(b'foobarbaz') - self.assert_finished() - - def test_max_size(self): - self.p.max_size = 5 - - self.p.write(b'0123456789') - self.p.finalize() - - self.assert_data(b'01234') - self.assert_finished() - - def test_invalid_max_size(self): - with self.assertRaises(ValueError): - q = OctetStreamParser(max_size='foo') - - -class TestBase64Decoder(unittest.TestCase): - # Note: base64('foobar') == 'Zm9vYmFy' - def setUp(self): - self.f = BytesIO() - self.d = Base64Decoder(self.f) - - def assert_data(self, data, finalize=True): - if finalize: - self.d.finalize() - - self.f.seek(0) - self.assertEqual(self.f.read(), data) - self.f.seek(0) - self.f.truncate() - - def test_simple(self): - self.d.write(b'Zm9vYmFy') - self.assert_data(b'foobar') - - def test_bad(self): - with self.assertRaises(DecodeError): - self.d.write(b'Zm9v!mFy') - - def test_split_properly(self): - self.d.write(b'Zm9v') - self.d.write(b'YmFy') - self.assert_data(b'foobar') - - def test_bad_split(self): - buff = b'Zm9v' - for i in range(1, 4): - first, second = buff[:i], buff[i:] - - self.setUp() - self.d.write(first) - self.d.write(second) - self.assert_data(b'foo') - - def test_long_bad_split(self): - buff = b'Zm9vYmFy' - for i in range(5, 8): - first, second = buff[:i], buff[i:] - - self.setUp() - self.d.write(first) - self.d.write(second) - self.assert_data(b'foobar') - - def test_close_and_finalize(self): - parser = Mock() - f = Base64Decoder(parser) - - f.finalize() - parser.finalize.assert_called_once_with() - - f.close() - parser.close.assert_called_once_with() - - def test_bad_length(self): - self.d.write(b'Zm9vYmF') # missing ending 'y' - - with self.assertRaises(DecodeError): - self.d.finalize() - - -class TestQuotedPrintableDecoder(unittest.TestCase): - def setUp(self): - self.f = BytesIO() - self.d = QuotedPrintableDecoder(self.f) - - def assert_data(self, data, finalize=True): - if finalize: - self.d.finalize() - - self.f.seek(0) - self.assertEqual(self.f.read(), data) - self.f.seek(0) - self.f.truncate() - - def test_simple(self): - self.d.write(b'foobar') - self.assert_data(b'foobar') - - def test_with_escape(self): - self.d.write(b'foo=3Dbar') - self.assert_data(b'foo=bar') - - def test_with_newline_escape(self): - self.d.write(b'foo=\r\nbar') - self.assert_data(b'foobar') - - def test_with_only_newline_escape(self): - self.d.write(b'foo=\nbar') - self.assert_data(b'foobar') - - def test_with_split_escape(self): - self.d.write(b'foo=3') - self.d.write(b'Dbar') - self.assert_data(b'foo=bar') - - def test_with_split_newline_escape_1(self): - self.d.write(b'foo=\r') - self.d.write(b'\nbar') - self.assert_data(b'foobar') - - def test_with_split_newline_escape_2(self): - self.d.write(b'foo=') - self.d.write(b'\r\nbar') - self.assert_data(b'foobar') - - def test_close_and_finalize(self): - parser = Mock() - f = QuotedPrintableDecoder(parser) - - f.finalize() - parser.finalize.assert_called_once_with() - - f.close() - parser.close.assert_called_once_with() - - def test_not_aligned(self): - """ - https://github.com/andrew-d/python-multipart/issues/6 - """ - self.d.write(b'=3AX') - self.assert_data(b':X') - - # Additional offset tests - self.d.write(b'=3') - self.d.write(b'AX') - self.assert_data(b':X') - - self.d.write(b'q=3AX') - self.assert_data(b'q:X') - - -# Load our list of HTTP test cases. 
-http_tests_dir = os.path.join(curr_dir, 'test_data', 'http') - -# Read in all test cases and load them. -NON_PARAMETRIZED_TESTS = {'single_field_blocks'} -http_tests = [] -for f in os.listdir(http_tests_dir): - # Only load the HTTP test cases. - fname, ext = os.path.splitext(f) - if fname in NON_PARAMETRIZED_TESTS: - continue - - if ext == '.http': - # Get the YAML file and load it too. - yaml_file = os.path.join(http_tests_dir, fname + '.yaml') - - # Load both. - with open(os.path.join(http_tests_dir, f), 'rb') as f: - test_data = f.read() - - with open(yaml_file, 'rb') as f: - yaml_data = yaml.safe_load(f) - - http_tests.append({ - 'name': fname, - 'test': test_data, - 'result': yaml_data - }) - - -def split_all(val): - """ - This function will split an array all possible ways. For example: - split_all([1,2,3,4]) - will give: - ([1], [2,3,4]), ([1,2], [3,4]), ([1,2,3], [4]) - """ - for i in range(1, len(val) - 1): - yield (val[:i], val[i:]) - - -@parametrize_class -class TestFormParser(unittest.TestCase): - def make(self, boundary, config={}): - self.ended = False - self.files = [] - self.fields = [] - - def on_field(f): - self.fields.append(f) - - def on_file(f): - self.files.append(f) - - def on_end(): - self.ended = True - - # Get a form-parser instance. - self.f = FormParser('multipart/form-data', on_field, on_file, on_end, - boundary=boundary, config=config) - - def assert_file_data(self, f, data): - o = f.file_object - o.seek(0) - file_data = o.read() - self.assertEqual(file_data, data) - - def assert_file(self, field_name, file_name, data): - # Find this file. - found = None - for f in self.files: - if f.field_name == field_name: - found = f - break - - # Assert that we found it. - self.assertIsNotNone(found) - - try: - # Assert about this file. - self.assert_file_data(found, data) - self.assertEqual(found.file_name, file_name) - - # Remove it from our list. - self.files.remove(found) - finally: - # Close our file - found.close() - - def assert_field(self, name, value): - # Find this field in our fields list. - found = None - for f in self.fields: - if f.field_name == name: - found = f - break - - # Assert that it exists and matches. - self.assertIsNotNone(found) - self.assertEqual(value, found.value) - - # Remove it for future iterations. - self.fields.remove(found) - - @parametrize('param', http_tests) - def test_http(self, param): - # Firstly, create our parser with the given boundary. - boundary = param['result']['boundary'] - if isinstance(boundary, str): - boundary = boundary.encode('latin-1') - self.make(boundary) - - # Now, we feed the parser with data. - exc = None - try: - processed = self.f.write(param['test']) - self.f.finalize() - except MultipartParseError as e: - processed = 0 - exc = e - - # print(repr(param)) - # print("") - # print(repr(self.fields)) - # print(repr(self.files)) - - # Do we expect an error? - if 'error' in param['result']['expected']: - self.assertIsNotNone(exc) - self.assertEqual(param['result']['expected']['error'], exc.offset) - return - - # No error! - self.assertEqual(processed, len(param['test'])) - - # Assert that the parser gave us the appropriate fields/files. - for e in param['result']['expected']: - # Get our type and name. 
- type = e['type'] - name = e['name'].encode('latin-1') - - if type == 'field': - self.assert_field(name, e['data']) - - elif type == 'file': - self.assert_file( - name, - e['file_name'].encode('latin-1'), - e['data'] - ) - - else: - assert False - - def test_random_splitting(self): - """ - This test runs a simple multipart body with one field and one file - through every possible split. - """ - # Load test data. - test_file = 'single_field_single_file.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - # We split the file through all cases. - for first, last in split_all(test_data): - # Create form parser. - self.make('boundary') - - # Feed with data in 2 chunks. - i = 0 - i += self.f.write(first) - i += self.f.write(last) - self.f.finalize() - - # Assert we processed everything. - self.assertEqual(i, len(test_data)) - - # Assert that our file and field are here. - self.assert_field(b'field', b'test1') - self.assert_file(b'file', b'file.txt', b'test2') - - def test_feed_single_bytes(self): - """ - This test parses a simple multipart body 1 byte at a time. - """ - # Load test data. - test_file = 'single_field_single_file.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - # Create form parser. - self.make('boundary') - - # Write all bytes. - # NOTE: Can't simply do `for b in test_data`, since that gives - # an integer when iterating over a bytes object on Python 3. - i = 0 - for x in range(len(test_data)): - b = test_data[x:x + 1] - i += self.f.write(b) - - self.f.finalize() - - # Assert we processed everything. - self.assertEqual(i, len(test_data)) - - # Assert that our file and field are here. - self.assert_field(b'field', b'test1') - self.assert_file(b'file', b'file.txt', b'test2') - - def test_feed_blocks(self): - """ - This test parses a simple multipart body 1 byte at a time. - """ - # Load test data. - test_file = 'single_field_blocks.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - for c in range(1, len(test_data) + 1): - # Skip first `d` bytes - not interesting - for d in range(c): - - # Create form parser. - self.make('boundary') - # Skip - i = 0 - self.f.write(test_data[:d]) - i += d - for x in range(d, len(test_data), c): - # Write a chunk to achieve condition - # `i == data_length - 1` - # in boundary search loop (multipatr.py:1302) - b = test_data[x:x + c] - i += self.f.write(b) - - self.f.finalize() - - # Assert we processed everything. - self.assertEqual(i, len(test_data)) - - # Assert that our field is here. - self.assert_field(b'field', - b'0123456789ABCDEFGHIJ0123456789ABCDEFGHIJ') - - @slow_test - def test_request_body_fuzz(self): - """ - This test randomly fuzzes the request body to ensure that no strange - exceptions are raised and we don't end up in a strange state. The - fuzzing consists of randomly doing one of the following: - - Adding a random byte at a random offset - - Randomly deleting a single byte - - Randomly swapping two bytes - """ - # Load test data. - test_file = 'single_field_single_file.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - iterations = 1000 - successes = 0 - failures = 0 - exceptions = 0 - - print("Running %d iterations of fuzz testing:" % (iterations,)) - for i in range(iterations): - # Create a bytearray to mutate. - fuzz_data = bytearray(test_data) - - # Pick what we're supposed to do. 
- choice = random.choice([1, 2, 3]) - if choice == 1: - # Add a random byte. - i = random.randrange(len(test_data)) - b = random.randrange(256) - - fuzz_data.insert(i, b) - msg = "Inserting byte %r at offset %d" % (b, i) - - elif choice == 2: - # Remove a random byte. - i = random.randrange(len(test_data)) - del fuzz_data[i] - - msg = "Deleting byte at offset %d" % (i,) - - elif choice == 3: - # Swap two bytes. - i = random.randrange(len(test_data) - 1) - fuzz_data[i], fuzz_data[i + 1] = fuzz_data[i + 1], fuzz_data[i] - - msg = "Swapping bytes %d and %d" % (i, i + 1) - - # Print message, so if this crashes, we can inspect the output. - print(" " + msg) - - # Create form parser. - self.make('boundary') - - # Feed with data, and ignore form parser exceptions. - i = 0 - try: - i = self.f.write(bytes(fuzz_data)) - self.f.finalize() - except FormParserError: - exceptions += 1 - else: - if i == len(fuzz_data): - successes += 1 - else: - failures += 1 - - print("--------------------------------------------------") - print("Successes: %d" % (successes,)) - print("Failures: %d" % (failures,)) - print("Exceptions: %d" % (exceptions,)) - - @slow_test - def test_request_body_fuzz_random_data(self): - """ - This test will fuzz the multipart parser with some number of iterations - of randomly-generated data. - """ - iterations = 1000 - successes = 0 - failures = 0 - exceptions = 0 - - print("Running %d iterations of fuzz testing:" % (iterations,)) - for i in range(iterations): - data_size = random.randrange(100, 4096) - data = os.urandom(data_size) - print(" Testing with %d random bytes..." % (data_size,)) - - # Create form parser. - self.make('boundary') - - # Feed with data, and ignore form parser exceptions. - i = 0 - try: - i = self.f.write(bytes(data)) - self.f.finalize() - except FormParserError: - exceptions += 1 - else: - if i == len(data): - successes += 1 - else: - failures += 1 - - print("--------------------------------------------------") - print("Successes: %d" % (successes,)) - print("Failures: %d" % (failures,)) - print("Exceptions: %d" % (exceptions,)) - - def test_bad_start_boundary(self): - self.make('boundary') - data = b'--boundary\rfoobar' - with self.assertRaises(MultipartParseError): - self.f.write(data) - - self.make('boundary') - data = b'--boundaryfoobar' - with self.assertRaises(MultipartParseError): - i = self.f.write(data) - - def test_octet_stream(self): - files = [] - def on_file(f): - files.append(f) - on_field = Mock() - on_end = Mock() - - f = FormParser('application/octet-stream', on_field, on_file, on_end=on_end, file_name=b'foo.txt') - self.assertTrue(isinstance(f.parser, OctetStreamParser)) - - f.write(b'test') - f.write(b'1234') - f.finalize() - - # Assert that we only received a single file, with the right data, and that we're done. - self.assertFalse(on_field.called) - self.assertEqual(len(files), 1) - self.assert_file_data(files[0], b'test1234') - self.assertTrue(on_end.called) - - def test_querystring(self): - fields = [] - def on_field(f): - fields.append(f) - on_file = Mock() - on_end = Mock() - - def simple_test(f): - # Reset tracking. - del fields[:] - on_file.reset_mock() - on_end.reset_mock() - - # Write test data. - f.write(b'foo=bar') - f.write(b'&test=asdf') - f.finalize() - - # Assert we only received 2 fields... - self.assertFalse(on_file.called) - self.assertEqual(len(fields), 2) - - # ...assert that we have the correct data... 
- self.assertEqual(fields[0].field_name, b'foo') - self.assertEqual(fields[0].value, b'bar') - - self.assertEqual(fields[1].field_name, b'test') - self.assertEqual(fields[1].value, b'asdf') - - # ... and assert that we've finished. - self.assertTrue(on_end.called) - - f = FormParser('application/x-www-form-urlencoded', on_field, on_file, on_end=on_end) - self.assertTrue(isinstance(f.parser, QuerystringParser)) - simple_test(f) - - f = FormParser('application/x-url-encoded', on_field, on_file, on_end=on_end) - self.assertTrue(isinstance(f.parser, QuerystringParser)) - simple_test(f) - - def test_close_methods(self): - parser = Mock() - f = FormParser('application/x-url-encoded', None, None) - f.parser = parser - - f.finalize() - parser.finalize.assert_called_once_with() - - f.close() - parser.close.assert_called_once_with() - - def test_bad_content_type(self): - # We should raise a ValueError for a bad Content-Type - with self.assertRaises(ValueError): - f = FormParser('application/bad', None, None) - - def test_no_boundary_given(self): - # We should raise a FormParserError when parsing a multipart message - # without a boundary. - with self.assertRaises(FormParserError): - f = FormParser('multipart/form-data', None, None) - - def test_bad_content_transfer_encoding(self): - data = b'----boundary\r\nContent-Disposition: form-data; name="file"; filename="test.txt"\r\nContent-Type: text/plain\r\nContent-Transfer-Encoding: badstuff\r\n\r\nTest\r\n----boundary--\r\n' - - files = [] - def on_file(f): - files.append(f) - on_field = Mock() - on_end = Mock() - - # Test with erroring. - config = {'UPLOAD_ERROR_ON_BAD_CTE': True} - f = FormParser('multipart/form-data', on_field, on_file, - on_end=on_end, boundary='--boundary', config=config) - - with self.assertRaises(FormParserError): - f.write(data) - f.finalize() - - # Test without erroring. - config = {'UPLOAD_ERROR_ON_BAD_CTE': False} - f = FormParser('multipart/form-data', on_field, on_file, - on_end=on_end, boundary='--boundary', config=config) - - f.write(data) - f.finalize() - self.assert_file_data(files[0], b'Test') - - def test_handles_None_fields(self): - fields = [] - def on_field(f): - fields.append(f) - on_file = Mock() - on_end = Mock() - - f = FormParser('application/x-www-form-urlencoded', on_field, on_file, on_end=on_end) - f.write(b'foo=bar&another&baz=asdf') - f.finalize() - - self.assertEqual(fields[0].field_name, b'foo') - self.assertEqual(fields[0].value, b'bar') - - self.assertEqual(fields[1].field_name, b'another') - self.assertEqual(fields[1].value, None) - - self.assertEqual(fields[2].field_name, b'baz') - self.assertEqual(fields[2].value, b'asdf') - - def test_max_size_multipart(self): - # Load test data. - test_file = 'single_field_single_file.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - # Create form parser. - self.make('boundary') - - # Set the maximum length that we can process to be halfway through the - # given data. - self.f.parser.max_size = len(test_data) / 2 - - i = self.f.write(test_data) - self.f.finalize() - - # Assert we processed the correct amount. - self.assertEqual(i, len(test_data) / 2) - - def test_max_size_form_parser(self): - # Load test data. - test_file = 'single_field_single_file.http' - with open(os.path.join(http_tests_dir, test_file), 'rb') as f: - test_data = f.read() - - # Create form parser setting the maximum length that we can process to - # be halfway through the given data. 
- size = len(test_data) / 2 - self.make('boundary', config={'MAX_BODY_SIZE': size}) - - i = self.f.write(test_data) - self.f.finalize() - - # Assert we processed the correct amount. - self.assertEqual(i, len(test_data) / 2) - - def test_octet_stream_max_size(self): - files = [] - def on_file(f): - files.append(f) - on_field = Mock() - on_end = Mock() - - f = FormParser('application/octet-stream', on_field, on_file, - on_end=on_end, file_name=b'foo.txt', - config={'MAX_BODY_SIZE': 10}) - - f.write(b'0123456789012345689') - f.finalize() - - self.assert_file_data(files[0], b'0123456789') - - def test_invalid_max_size_multipart(self): - with self.assertRaises(ValueError): - q = MultipartParser(b'bound', max_size='foo') - - -class TestHelperFunctions(unittest.TestCase): - def test_create_form_parser(self): - r = create_form_parser({'Content-Type': 'application/octet-stream'}, - None, None) - self.assertTrue(isinstance(r, FormParser)) - - def test_create_form_parser_error(self): - headers = {} - with self.assertRaises(ValueError): - create_form_parser(headers, None, None) - - def test_parse_form(self): - on_field = Mock() - on_file = Mock() - - parse_form( - {'Content-Type': 'application/octet-stream', - }, - BytesIO(b'123456789012345'), - on_field, - on_file - ) - - assert on_file.call_count == 1 - - # Assert that the first argument of the call (a File object) has size - # 15 - i.e. all data is written. - self.assertEqual(on_file.call_args[0][0].size, 15) - - def test_parse_form_content_length(self): - files = [] - def on_file(file): - files.append(file) - - parse_form( - {'Content-Type': 'application/octet-stream', - 'Content-Length': '10' - }, - BytesIO(b'123456789012345'), - None, - on_file - ) - - self.assertEqual(len(files), 1) - self.assertEqual(files[0].size, 10) - - - -def suite(): - suite = unittest.TestSuite() - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestFile)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestParseOptionsHeader)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestBaseParser)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestQuerystringParser)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestOctetStreamParser)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestBase64Decoder)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestQuotedPrintableDecoder)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestFormParser)) - suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestHelperFunctions)) - - return suite diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/setup.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/setup.py deleted file mode 100644 index 522756fc9db359002c7208b75094b103323f13c6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/setup.py +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env python3 -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('distutils', parent_package, top_path) - config.add_subpackage('command') - config.add_subpackage('fcompiler') - config.add_subpackage('tests') - config.add_data_files('site.cfg') - config.add_data_files('mingw/gfortran_vs2003_hack.c') - config.add_data_dir('checks') - config.add_data_files('*.pyi') - config.make_config_py() - 
return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_f2py2e.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_f2py2e.py deleted file mode 100644 index 5f7b56a68a9d9642d635ca204fb2795972fa8eb2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_f2py2e.py +++ /dev/null @@ -1,791 +0,0 @@ -import textwrap, re, sys, subprocess, shlex -from pathlib import Path -from collections import namedtuple - -import pytest - -from . import util -from numpy.f2py.f2py2e import main as f2pycli - -######################### -# CLI utils and classes # -######################### - -PPaths = namedtuple("PPaths", "finp, f90inp, pyf, wrap77, wrap90, cmodf") - - -def get_io_paths(fname_inp, mname="untitled"): - """Takes in a temporary file for testing and returns the expected output and input paths - - Here expected output is essentially one of any of the possible generated - files. - - ..note:: - - Since this does not actually run f2py, none of these are guaranteed to - exist, and module names are typically incorrect - - Parameters - ---------- - fname_inp : str - The input filename - mname : str, optional - The name of the module, untitled by default - - Returns - ------- - genp : NamedTuple PPaths - The possible paths which are generated, not all of which exist - """ - bpath = Path(fname_inp) - return PPaths( - finp=bpath.with_suffix(".f"), - f90inp=bpath.with_suffix(".f90"), - pyf=bpath.with_suffix(".pyf"), - wrap77=bpath.with_name(f"{mname}-f2pywrappers.f"), - wrap90=bpath.with_name(f"{mname}-f2pywrappers2.f90"), - cmodf=bpath.with_name(f"{mname}module.c"), - ) - - -############## -# CLI Fixtures and Tests # -############# - - -@pytest.fixture(scope="session") -def hello_world_f90(tmpdir_factory): - """Generates a single f90 file for testing""" - fdat = util.getpath("tests", "src", "cli", "hiworld.f90").read_text() - fn = tmpdir_factory.getbasetemp() / "hello.f90" - fn.write_text(fdat, encoding="ascii") - return fn - - -@pytest.fixture(scope="session") -def gh23598_warn(tmpdir_factory): - """F90 file for testing warnings in gh23598""" - fdat = util.getpath("tests", "src", "crackfortran", "gh23598Warn.f90").read_text() - fn = tmpdir_factory.getbasetemp() / "gh23598Warn.f90" - fn.write_text(fdat, encoding="ascii") - return fn - - -@pytest.fixture(scope="session") -def hello_world_f77(tmpdir_factory): - """Generates a single f77 file for testing""" - fdat = util.getpath("tests", "src", "cli", "hi77.f").read_text() - fn = tmpdir_factory.getbasetemp() / "hello.f" - fn.write_text(fdat, encoding="ascii") - return fn - - -@pytest.fixture(scope="session") -def retreal_f77(tmpdir_factory): - """Generates a single f77 file for testing""" - fdat = util.getpath("tests", "src", "return_real", "foo77.f").read_text() - fn = tmpdir_factory.getbasetemp() / "foo.f" - fn.write_text(fdat, encoding="ascii") - return fn - -@pytest.fixture(scope="session") -def f2cmap_f90(tmpdir_factory): - """Generates a single f90 file for testing""" - fdat = util.getpath("tests", "src", "f2cmap", "isoFortranEnvMap.f90").read_text() - f2cmap = util.getpath("tests", "src", "f2cmap", ".f2py_f2cmap").read_text() - fn = tmpdir_factory.getbasetemp() / "f2cmap.f90" - fmap = tmpdir_factory.getbasetemp() / "mapfile" - fn.write_text(fdat, encoding="ascii") - 
fmap.write_text(f2cmap, encoding="ascii") - return fn - - -def test_gh23598_warn(capfd, gh23598_warn, monkeypatch): - foutl = get_io_paths(gh23598_warn, mname="test") - ipath = foutl.f90inp - monkeypatch.setattr( - sys, "argv", - f'f2py {ipath} -m test'.split()) - - with util.switchdir(ipath.parent): - f2pycli() # Generate files - wrapper = foutl.wrap90.read_text() - assert "intproductf2pywrap, intpr" not in wrapper - - -def test_gen_pyf(capfd, hello_world_f90, monkeypatch): - """Ensures that a signature file is generated via the CLI - CLI :: -h - """ - ipath = Path(hello_world_f90) - opath = Path(hello_world_f90).stem + ".pyf" - monkeypatch.setattr(sys, "argv", f'f2py -h {opath} {ipath}'.split()) - - with util.switchdir(ipath.parent): - f2pycli() # Generate wrappers - out, _ = capfd.readouterr() - assert "Saving signatures to file" in out - assert Path(f'{opath}').exists() - - -def test_gen_pyf_stdout(capfd, hello_world_f90, monkeypatch): - """Ensures that a signature file can be dumped to stdout - CLI :: -h - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr(sys, "argv", f'f2py -h stdout {ipath}'.split()) - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "Saving signatures to file" in out - assert "function hi() ! in " in out - - -def test_gen_pyf_no_overwrite(capfd, hello_world_f90, monkeypatch): - """Ensures that the CLI refuses to overwrite signature files - CLI :: -h without --overwrite-signature - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr(sys, "argv", f'f2py -h faker.pyf {ipath}'.split()) - - with util.switchdir(ipath.parent): - Path("faker.pyf").write_text("Fake news", encoding="ascii") - with pytest.raises(SystemExit): - f2pycli() # Refuse to overwrite - _, err = capfd.readouterr() - assert "Use --overwrite-signature to overwrite" in err - - -@pytest.mark.xfail -def test_f2py_skip(capfd, retreal_f77, monkeypatch): - """Tests that functions can be skipped - CLI :: skip: - """ - foutl = get_io_paths(retreal_f77, mname="test") - ipath = foutl.finp - toskip = "t0 t4 t8 sd s8 s4" - remaining = "td s0" - monkeypatch.setattr( - sys, "argv", - f'f2py {ipath} -m test skip: {toskip}'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, err = capfd.readouterr() - for skey in toskip.split(): - assert ( - f'buildmodule: Could not found the body of interfaced routine "{skey}". Skipping.' - in err) - for rkey in remaining.split(): - assert f'Constructing wrapper function "{rkey}"' in out - - -def test_f2py_only(capfd, retreal_f77, monkeypatch): - """Test that functions can be kept by only: - CLI :: only: - """ - foutl = get_io_paths(retreal_f77, mname="test") - ipath = foutl.finp - toskip = "t0 t4 t8 sd s8 s4" - tokeep = "td s0" - monkeypatch.setattr( - sys, "argv", - f'f2py {ipath} -m test only: {tokeep}'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, err = capfd.readouterr() - for skey in toskip.split(): - assert ( - f'buildmodule: Could not find the body of interfaced routine "{skey}". Skipping.' 
- in err) - for rkey in tokeep.split(): - assert f'Constructing wrapper function "{rkey}"' in out - - -def test_file_processing_switch(capfd, hello_world_f90, retreal_f77, - monkeypatch): - """Tests that it is possible to return to file processing mode - CLI :: : - BUG: numpy-gh #20520 - """ - foutl = get_io_paths(retreal_f77, mname="test") - ipath = foutl.finp - toskip = "t0 t4 t8 sd s8 s4" - ipath2 = Path(hello_world_f90) - tokeep = "td s0 hi" # hi is in ipath2 - mname = "blah" - monkeypatch.setattr( - sys, - "argv", - f'f2py {ipath} -m {mname} only: {tokeep} : {ipath2}'.split( - ), - ) - - with util.switchdir(ipath.parent): - f2pycli() - out, err = capfd.readouterr() - for skey in toskip.split(): - assert ( - f'buildmodule: Could not find the body of interfaced routine "{skey}". Skipping.' - in err) - for rkey in tokeep.split(): - assert f'Constructing wrapper function "{rkey}"' in out - - -def test_mod_gen_f77(capfd, hello_world_f90, monkeypatch): - """Checks the generation of files based on a module name - CLI :: -m - """ - MNAME = "hi" - foutl = get_io_paths(hello_world_f90, mname=MNAME) - ipath = foutl.f90inp - monkeypatch.setattr(sys, "argv", f'f2py {ipath} -m {MNAME}'.split()) - with util.switchdir(ipath.parent): - f2pycli() - - # Always generate C module - assert Path.exists(foutl.cmodf) - # File contains a function, check for F77 wrappers - assert Path.exists(foutl.wrap77) - - -def test_lower_cmod(capfd, hello_world_f77, monkeypatch): - """Lowers cases by flag or when -h is present - - CLI :: --[no-]lower - """ - foutl = get_io_paths(hello_world_f77, mname="test") - ipath = foutl.finp - capshi = re.compile(r"HI\(\)") - capslo = re.compile(r"hi\(\)") - # Case I: --lower is passed - monkeypatch.setattr(sys, "argv", f'f2py {ipath} -m test --lower'.split()) - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert capslo.search(out) is not None - assert capshi.search(out) is None - # Case II: --no-lower is passed - monkeypatch.setattr(sys, "argv", - f'f2py {ipath} -m test --no-lower'.split()) - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert capslo.search(out) is None - assert capshi.search(out) is not None - - -def test_lower_sig(capfd, hello_world_f77, monkeypatch): - """Lowers cases in signature files by flag or when -h is present - - CLI :: --[no-]lower -h - """ - foutl = get_io_paths(hello_world_f77, mname="test") - ipath = foutl.finp - # Signature files - capshi = re.compile(r"Block: HI") - capslo = re.compile(r"Block: hi") - # Case I: --lower is implied by -h - # TODO: Clean up to prevent passing --overwrite-signature - monkeypatch.setattr( - sys, - "argv", - f'f2py {ipath} -h {foutl.pyf} -m test --overwrite-signature'.split(), - ) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert capslo.search(out) is not None - assert capshi.search(out) is None - - # Case II: --no-lower overrides -h - monkeypatch.setattr( - sys, - "argv", - f'f2py {ipath} -h {foutl.pyf} -m test --overwrite-signature --no-lower' - .split(), - ) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert capslo.search(out) is None - assert capshi.search(out) is not None - - -def test_build_dir(capfd, hello_world_f90, monkeypatch): - """Ensures that the build directory can be specified - - CLI :: --build-dir - """ - ipath = Path(hello_world_f90) - mname = "blah" - odir = "tttmp" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --build-dir {odir}'.split()) - - 
with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert f"Wrote C/API module \"{mname}\"" in out - - -def test_overwrite(capfd, hello_world_f90, monkeypatch): - """Ensures that the build directory can be specified - - CLI :: --overwrite-signature - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr( - sys, "argv", - f'f2py -h faker.pyf {ipath} --overwrite-signature'.split()) - - with util.switchdir(ipath.parent): - Path("faker.pyf").write_text("Fake news", encoding="ascii") - f2pycli() - out, _ = capfd.readouterr() - assert "Saving signatures to file" in out - - -def test_latexdoc(capfd, hello_world_f90, monkeypatch): - """Ensures that TeX documentation is written out - - CLI :: --latex-doc - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --latex-doc'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "Documentation is saved to file" in out - with Path(f"{mname}module.tex").open() as otex: - assert "\\documentclass" in otex.read() - - -def test_nolatexdoc(capfd, hello_world_f90, monkeypatch): - """Ensures that TeX documentation is written out - - CLI :: --no-latex-doc - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --no-latex-doc'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "Documentation is saved to file" not in out - - -def test_shortlatex(capfd, hello_world_f90, monkeypatch): - """Ensures that truncated documentation is written out - - TODO: Test to ensure this has no effect without --latex-doc - CLI :: --latex-doc --short-latex - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr( - sys, - "argv", - f'f2py -m {mname} {ipath} --latex-doc --short-latex'.split(), - ) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "Documentation is saved to file" in out - with Path(f"./{mname}module.tex").open() as otex: - assert "\\documentclass" not in otex.read() - - -def test_restdoc(capfd, hello_world_f90, monkeypatch): - """Ensures that RsT documentation is written out - - CLI :: --rest-doc - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --rest-doc'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "ReST Documentation is saved to file" in out - with Path(f"./{mname}module.rest").open() as orst: - assert r".. 
-*- rest -*-" in orst.read() - - -def test_norestexdoc(capfd, hello_world_f90, monkeypatch): - """Ensures that TeX documentation is written out - - CLI :: --no-rest-doc - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --no-rest-doc'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "ReST Documentation is saved to file" not in out - - -def test_debugcapi(capfd, hello_world_f90, monkeypatch): - """Ensures that debugging wrappers are written - - CLI :: --debug-capi - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --debug-capi'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - with Path(f"./{mname}module.c").open() as ocmod: - assert r"#define DEBUGCFUNCS" in ocmod.read() - - -@pytest.mark.xfail(reason="Consistently fails on CI.") -def test_debugcapi_bld(hello_world_f90, monkeypatch): - """Ensures that debugging wrappers work - - CLI :: --debug-capi -c - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} -c --debug-capi'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - cmd_run = shlex.split("python3 -c \"import blah; blah.hi()\"") - rout = subprocess.run(cmd_run, capture_output=True, encoding='UTF-8') - eout = ' Hello World\n' - eerr = textwrap.dedent("""\ -debug-capi:Python C/API function blah.hi() -debug-capi:float hi=:output,hidden,scalar -debug-capi:hi=0 -debug-capi:Fortran subroutine `f2pywraphi(&hi)' -debug-capi:hi=0 -debug-capi:Building return value. -debug-capi:Python C/API function blah.hi: successful. -debug-capi:Freeing memory. - """) - assert rout.stdout == eout - assert rout.stderr == eerr - - -def test_wrapfunc_def(capfd, hello_world_f90, monkeypatch): - """Ensures that fortran subroutine wrappers for F77 are included by default - - CLI :: --[no]-wrap-functions - """ - # Implied - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", f'f2py -m {mname} {ipath}'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert r"Fortran 77 wrappers are saved to" in out - - # Explicit - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --wrap-functions'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert r"Fortran 77 wrappers are saved to" in out - - -def test_nowrapfunc(capfd, hello_world_f90, monkeypatch): - """Ensures that fortran subroutine wrappers for F77 can be disabled - - CLI :: --no-wrap-functions - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr(sys, "argv", - f'f2py -m {mname} {ipath} --no-wrap-functions'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert r"Fortran 77 wrappers are saved to" not in out - - -def test_inclheader(capfd, hello_world_f90, monkeypatch): - """Add to the include directories - - CLI :: -include - TODO: Document this in the help string - """ - ipath = Path(hello_world_f90) - mname = "blah" - monkeypatch.setattr( - sys, - "argv", - f'f2py -m {mname} {ipath} -include -include '. 
- split(), - ) - - with util.switchdir(ipath.parent): - f2pycli() - with Path(f"./{mname}module.c").open() as ocmod: - ocmr = ocmod.read() - assert "#include " in ocmr - assert "#include " in ocmr - - -def test_inclpath(): - """Add to the include directories - - CLI :: --include-paths - """ - # TODO: populate - pass - - -def test_hlink(): - """Add to the include directories - - CLI :: --help-link - """ - # TODO: populate - pass - - -def test_f2cmap(capfd, f2cmap_f90, monkeypatch): - """Check that Fortran-to-Python KIND specs can be passed - - CLI :: --f2cmap - """ - ipath = Path(f2cmap_f90) - monkeypatch.setattr(sys, "argv", f'f2py -m blah {ipath} --f2cmap mapfile'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "Reading f2cmap from 'mapfile' ..." in out - assert "Mapping \"real(kind=real32)\" to \"float\"" in out - assert "Mapping \"real(kind=real64)\" to \"double\"" in out - assert "Mapping \"integer(kind=int64)\" to \"long_long\"" in out - assert "Successfully applied user defined f2cmap changes" in out - - -def test_quiet(capfd, hello_world_f90, monkeypatch): - """Reduce verbosity - - CLI :: --quiet - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr(sys, "argv", f'f2py -m blah {ipath} --quiet'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert len(out) == 0 - - -def test_verbose(capfd, hello_world_f90, monkeypatch): - """Increase verbosity - - CLI :: --verbose - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr(sys, "argv", f'f2py -m blah {ipath} --verbose'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - out, _ = capfd.readouterr() - assert "analyzeline" in out - - -def test_version(capfd, monkeypatch): - """Ensure version - - CLI :: -v - """ - monkeypatch.setattr(sys, "argv", 'f2py -v'.split()) - # TODO: f2py2e should not call sys.exit() after printing the version - with pytest.raises(SystemExit): - f2pycli() - out, _ = capfd.readouterr() - import numpy as np - assert np.__version__ == out.strip() - - -@pytest.mark.xfail(reason="Consistently fails on CI.") -def test_npdistop(hello_world_f90, monkeypatch): - """ - CLI :: -c - """ - ipath = Path(hello_world_f90) - monkeypatch.setattr(sys, "argv", f'f2py -m blah {ipath} -c'.split()) - - with util.switchdir(ipath.parent): - f2pycli() - cmd_run = shlex.split("python -c \"import blah; blah.hi()\"") - rout = subprocess.run(cmd_run, capture_output=True, encoding='UTF-8') - eout = ' Hello World\n' - assert rout.stdout == eout - - -# Numpy distutils flags -# TODO: These should be tested separately - - -def test_npd_fcompiler(): - """ - CLI :: -c --fcompiler - """ - # TODO: populate - pass - - -def test_npd_compiler(): - """ - CLI :: -c --compiler - """ - # TODO: populate - pass - - -def test_npd_help_fcompiler(): - """ - CLI :: -c --help-fcompiler - """ - # TODO: populate - pass - - -def test_npd_f77exec(): - """ - CLI :: -c --f77exec - """ - # TODO: populate - pass - - -def test_npd_f90exec(): - """ - CLI :: -c --f90exec - """ - # TODO: populate - pass - - -def test_npd_f77flags(): - """ - CLI :: -c --f77flags - """ - # TODO: populate - pass - - -def test_npd_f90flags(): - """ - CLI :: -c --f90flags - """ - # TODO: populate - pass - - -def test_npd_opt(): - """ - CLI :: -c --opt - """ - # TODO: populate - pass - - -def test_npd_arch(): - """ - CLI :: -c --arch - """ - # TODO: populate - pass - - -def test_npd_noopt(): - """ - CLI :: -c --noopt - """ - # TODO: populate - pass - - -def test_npd_noarch(): - """ - CLI 
:: -c --noarch - """ - # TODO: populate - pass - - -def test_npd_debug(): - """ - CLI :: -c --debug - """ - # TODO: populate - pass - - -def test_npd_link_auto(): - """ - CLI :: -c --link- - """ - # TODO: populate - pass - - -def test_npd_lib(): - """ - CLI :: -c -L/path/to/lib/ -l - """ - # TODO: populate - pass - - -def test_npd_define(): - """ - CLI :: -D - """ - # TODO: populate - pass - - -def test_npd_undefine(): - """ - CLI :: -U - """ - # TODO: populate - pass - - -def test_npd_incl(): - """ - CLI :: -I/path/to/include/ - """ - # TODO: populate - pass - - -def test_npd_linker(): - """ - CLI :: .o .so .a - """ - # TODO: populate - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/array_ops.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/array_ops.py deleted file mode 100644 index b39930da9f711a86de16bf0ae511b0d3b94666ab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/array_ops.py +++ /dev/null @@ -1,600 +0,0 @@ -""" -Functions for arithmetic and comparison operations on NumPy arrays and -ExtensionArrays. -""" -from __future__ import annotations - -import datetime -from functools import partial -import operator -from typing import ( - TYPE_CHECKING, - Any, -) -import warnings - -import numpy as np - -from pandas._libs import ( - NaT, - Timedelta, - Timestamp, - lib, - ops as libops, -) -from pandas._libs.tslibs import ( - BaseOffset, - get_supported_reso, - get_unit_from_dtype, - is_supported_unit, - is_unitless, - npy_unit_to_abbrev, -) -from pandas.util._exceptions import find_stack_level - -from pandas.core.dtypes.cast import ( - construct_1d_object_array_from_listlike, - find_common_type, -) -from pandas.core.dtypes.common import ( - ensure_object, - is_bool_dtype, - is_list_like, - is_numeric_v_string_like, - is_object_dtype, - is_scalar, -) -from pandas.core.dtypes.generic import ( - ABCExtensionArray, - ABCIndex, - ABCSeries, -) -from pandas.core.dtypes.missing import ( - isna, - notna, -) - -from pandas.core import roperator -from pandas.core.computation import expressions -from pandas.core.construction import ensure_wrapped_if_datetimelike -from pandas.core.ops import missing -from pandas.core.ops.dispatch import should_extension_dispatch -from pandas.core.ops.invalid import invalid_comparison - -if TYPE_CHECKING: - from pandas._typing import ( - ArrayLike, - Shape, - ) - -# ----------------------------------------------------------------------------- -# Masking NA values and fallbacks for operations numpy does not support - - -def fill_binop(left, right, fill_value): - """ - If a non-None fill_value is given, replace null entries in left and right - with this value, but only in positions where _one_ of left/right is null, - not both. - - Parameters - ---------- - left : array-like - right : array-like - fill_value : object - - Returns - ------- - left : array-like - right : array-like - - Notes - ----- - Makes copies if fill_value is not None and NAs are present. 
- """ - if fill_value is not None: - left_mask = isna(left) - right_mask = isna(right) - - # one but not both - mask = left_mask ^ right_mask - - if left_mask.any(): - # Avoid making a copy if we can - left = left.copy() - left[left_mask & mask] = fill_value - - if right_mask.any(): - # Avoid making a copy if we can - right = right.copy() - right[right_mask & mask] = fill_value - - return left, right - - -def comp_method_OBJECT_ARRAY(op, x, y): - if isinstance(y, list): - # e.g. test_tuple_categories - y = construct_1d_object_array_from_listlike(y) - - if isinstance(y, (np.ndarray, ABCSeries, ABCIndex)): - if not is_object_dtype(y.dtype): - y = y.astype(np.object_) - - if isinstance(y, (ABCSeries, ABCIndex)): - y = y._values - - if x.shape != y.shape: - raise ValueError("Shapes must match", x.shape, y.shape) - result = libops.vec_compare(x.ravel(), y.ravel(), op) - else: - result = libops.scalar_compare(x.ravel(), y, op) - return result.reshape(x.shape) - - -def _masked_arith_op(x: np.ndarray, y, op): - """ - If the given arithmetic operation fails, attempt it again on - only the non-null elements of the input array(s). - - Parameters - ---------- - x : np.ndarray - y : np.ndarray, Series, Index - op : binary operator - """ - # For Series `x` is 1D so ravel() is a no-op; calling it anyway makes - # the logic valid for both Series and DataFrame ops. - xrav = x.ravel() - - if isinstance(y, np.ndarray): - dtype = find_common_type([x.dtype, y.dtype]) - result = np.empty(x.size, dtype=dtype) - - if len(x) != len(y): - raise ValueError(x.shape, y.shape) - ymask = notna(y) - - # NB: ravel() is only safe since y is ndarray; for e.g. PeriodIndex - # we would get int64 dtype, see GH#19956 - yrav = y.ravel() - mask = notna(xrav) & ymask.ravel() - - # See GH#5284, GH#5035, GH#19448 for historical reference - if mask.any(): - result[mask] = op(xrav[mask], yrav[mask]) - - else: - if not is_scalar(y): - raise TypeError( - f"Cannot broadcast np.ndarray with operand of type { type(y) }" - ) - - # mask is only meaningful for x - result = np.empty(x.size, dtype=x.dtype) - mask = notna(xrav) - - # 1 ** np.nan is 1. So we have to unmask those. - if op is pow: - mask = np.where(x == 1, False, mask) - elif op is roperator.rpow: - mask = np.where(y == 1, False, mask) - - if mask.any(): - result[mask] = op(xrav[mask], y) - - np.putmask(result, ~mask, np.nan) - result = result.reshape(x.shape) # 2D compat - return result - - -def _na_arithmetic_op(left: np.ndarray, right, op, is_cmp: bool = False): - """ - Return the result of evaluating op on the passed in values. - - If native types are not compatible, try coercion to object dtype. - - Parameters - ---------- - left : np.ndarray - right : np.ndarray or scalar - Excludes DataFrame, Series, Index, ExtensionArray. - is_cmp : bool, default False - If this a comparison operation. 
- - Returns - ------- - array-like - - Raises - ------ - TypeError : invalid operation - """ - if isinstance(right, str): - # can never use numexpr - func = op - else: - func = partial(expressions.evaluate, op) - - try: - result = func(left, right) - except TypeError: - if not is_cmp and ( - left.dtype == object or getattr(right, "dtype", None) == object - ): - # For object dtype, fallback to a masked operation (only operating - # on the non-missing values) - # Don't do this for comparisons, as that will handle complex numbers - # incorrectly, see GH#32047 - result = _masked_arith_op(left, right, op) - else: - raise - - if is_cmp and (is_scalar(result) or result is NotImplemented): - # numpy returned a scalar instead of operating element-wise - # e.g. numeric array vs str - # TODO: can remove this after dropping some future numpy version? - return invalid_comparison(left, right, op) - - return missing.dispatch_fill_zeros(op, left, right, result) - - -def arithmetic_op(left: ArrayLike, right: Any, op): - """ - Evaluate an arithmetic operation `+`, `-`, `*`, `/`, `//`, `%`, `**`, ... - - Note: the caller is responsible for ensuring that numpy warnings are - suppressed (with np.errstate(all="ignore")) if needed. - - Parameters - ---------- - left : np.ndarray or ExtensionArray - right : object - Cannot be a DataFrame or Index. Series is *not* excluded. - op : {operator.add, operator.sub, ...} - Or one of the reversed variants from roperator. - - Returns - ------- - ndarray or ExtensionArray - Or a 2-tuple of these in the case of divmod or rdivmod. - """ - # NB: We assume that extract_array and ensure_wrapped_if_datetimelike - # have already been called on `left` and `right`, - # and `maybe_prepare_scalar_for_op` has already been called on `right` - # We need to special-case datetime64/timedelta64 dtypes (e.g. because numpy - # casts integer dtypes to timedelta64 when operating with timedelta64 - GH#22390) - - if ( - should_extension_dispatch(left, right) - or isinstance(right, (Timedelta, BaseOffset, Timestamp)) - or right is NaT - ): - # Timedelta/Timestamp and other custom scalars are included in the check - # because numexpr will fail on it, see GH#31457 - res_values = op(left, right) - else: - # TODO we should handle EAs consistently and move this check before the if/else - # (https://github.com/pandas-dev/pandas/issues/41165) - # error: Argument 2 to "_bool_arith_check" has incompatible type - # "Union[ExtensionArray, ndarray[Any, Any]]"; expected "ndarray[Any, Any]" - _bool_arith_check(op, left, right) # type: ignore[arg-type] - - # error: Argument 1 to "_na_arithmetic_op" has incompatible type - # "Union[ExtensionArray, ndarray[Any, Any]]"; expected "ndarray[Any, Any]" - res_values = _na_arithmetic_op(left, right, op) # type: ignore[arg-type] - - return res_values - - -def comparison_op(left: ArrayLike, right: Any, op) -> ArrayLike: - """ - Evaluate a comparison operation `=`, `!=`, `>=`, `>`, `<=`, or `<`. - - Note: the caller is responsible for ensuring that numpy warnings are - suppressed (with np.errstate(all="ignore")) if needed. - - Parameters - ---------- - left : np.ndarray or ExtensionArray - right : object - Cannot be a DataFrame, Series, or Index. 
- op : {operator.eq, operator.ne, operator.gt, operator.ge, operator.lt, operator.le} - - Returns - ------- - ndarray or ExtensionArray - """ - # NB: We assume extract_array has already been called on left and right - lvalues = ensure_wrapped_if_datetimelike(left) - rvalues = ensure_wrapped_if_datetimelike(right) - - rvalues = lib.item_from_zerodim(rvalues) - if isinstance(rvalues, list): - # We don't catch tuple here bc we may be comparing e.g. MultiIndex - # to a tuple that represents a single entry, see test_compare_tuple_strs - rvalues = np.asarray(rvalues) - - if isinstance(rvalues, (np.ndarray, ABCExtensionArray)): - # TODO: make this treatment consistent across ops and classes. - # We are not catching all listlikes here (e.g. frozenset, tuple) - # The ambiguous case is object-dtype. See GH#27803 - if len(lvalues) != len(rvalues): - raise ValueError( - "Lengths must match to compare", lvalues.shape, rvalues.shape - ) - - if should_extension_dispatch(lvalues, rvalues) or ( - (isinstance(rvalues, (Timedelta, BaseOffset, Timestamp)) or right is NaT) - and lvalues.dtype != object - ): - # Call the method on lvalues - res_values = op(lvalues, rvalues) - - elif is_scalar(rvalues) and isna(rvalues): # TODO: but not pd.NA? - # numpy does not like comparisons vs None - if op is operator.ne: - res_values = np.ones(lvalues.shape, dtype=bool) - else: - res_values = np.zeros(lvalues.shape, dtype=bool) - - elif is_numeric_v_string_like(lvalues, rvalues): - # GH#36377 going through the numexpr path would incorrectly raise - return invalid_comparison(lvalues, rvalues, op) - - elif lvalues.dtype == object or isinstance(rvalues, str): - res_values = comp_method_OBJECT_ARRAY(op, lvalues, rvalues) - - else: - res_values = _na_arithmetic_op(lvalues, rvalues, op, is_cmp=True) - - return res_values - - -def na_logical_op(x: np.ndarray, y, op): - try: - # For exposition, write: - # yarr = isinstance(y, np.ndarray) - # yint = is_integer(y) or (yarr and y.dtype.kind == "i") - # ybool = is_bool(y) or (yarr and y.dtype.kind == "b") - # xint = x.dtype.kind == "i" - # xbool = x.dtype.kind == "b" - # Then Cases where this goes through without raising include: - # (xint or xbool) and (yint or bool) - result = op(x, y) - except TypeError: - if isinstance(y, np.ndarray): - # bool-bool dtype operations should be OK, should not get here - assert not (x.dtype.kind == "b" and y.dtype.kind == "b") - x = ensure_object(x) - y = ensure_object(y) - result = libops.vec_binop(x.ravel(), y.ravel(), op) - else: - # let null fall thru - assert lib.is_scalar(y) - if not isna(y): - y = bool(y) - try: - result = libops.scalar_binop(x, y, op) - except ( - TypeError, - ValueError, - AttributeError, - OverflowError, - NotImplementedError, - ) as err: - typ = type(y).__name__ - raise TypeError( - f"Cannot perform '{op.__name__}' with a dtyped [{x.dtype}] array " - f"and scalar of type [{typ}]" - ) from err - - return result.reshape(x.shape) - - -def logical_op(left: ArrayLike, right: Any, op) -> ArrayLike: - """ - Evaluate a logical operation `|`, `&`, or `^`. - - Parameters - ---------- - left : np.ndarray or ExtensionArray - right : object - Cannot be a DataFrame, Series, or Index. - op : {operator.and_, operator.or_, operator.xor} - Or one of the reversed variants from roperator. 
- - Returns - ------- - ndarray or ExtensionArray - """ - - def fill_bool(x, left=None): - # if `left` is specifically not-boolean, we do not cast to bool - if x.dtype.kind in "cfO": - # dtypes that can hold NA - mask = isna(x) - if mask.any(): - x = x.astype(object) - x[mask] = False - - if left is None or left.dtype.kind == "b": - x = x.astype(bool) - return x - - right = lib.item_from_zerodim(right) - if is_list_like(right) and not hasattr(right, "dtype"): - # e.g. list, tuple - warnings.warn( - "Logical ops (and, or, xor) between Pandas objects and dtype-less " - "sequences (e.g. list, tuple) are deprecated and will raise in a " - "future version. Wrap the object in a Series, Index, or np.array " - "before operating instead.", - FutureWarning, - stacklevel=find_stack_level(), - ) - right = construct_1d_object_array_from_listlike(right) - - # NB: We assume extract_array has already been called on left and right - lvalues = ensure_wrapped_if_datetimelike(left) - rvalues = right - - if should_extension_dispatch(lvalues, rvalues): - # Call the method on lvalues - res_values = op(lvalues, rvalues) - - else: - if isinstance(rvalues, np.ndarray): - is_other_int_dtype = rvalues.dtype.kind in "iu" - if not is_other_int_dtype: - rvalues = fill_bool(rvalues, lvalues) - - else: - # i.e. scalar - is_other_int_dtype = lib.is_integer(rvalues) - - res_values = na_logical_op(lvalues, rvalues, op) - - # For int vs int `^`, `|`, `&` are bitwise operators and return - # integer dtypes. Otherwise these are boolean ops - if not (left.dtype.kind in "iu" and is_other_int_dtype): - res_values = fill_bool(res_values) - - return res_values - - -def get_array_op(op): - """ - Return a binary array operation corresponding to the given operator op. - - Parameters - ---------- - op : function - Binary operator from operator or roperator module. - - Returns - ------- - functools.partial - """ - if isinstance(op, partial): - # We get here via dispatch_to_series in DataFrame case - # e.g. test_rolling_consistency_var_debiasing_factors - return op - - op_name = op.__name__.strip("_").lstrip("r") - if op_name == "arith_op": - # Reached via DataFrame._combine_frame i.e. flex methods - # e.g. test_df_add_flex_filled_mixed_dtypes - return op - - if op_name in {"eq", "ne", "lt", "le", "gt", "ge"}: - return partial(comparison_op, op=op) - elif op_name in {"and", "or", "xor", "rand", "ror", "rxor"}: - return partial(logical_op, op=op) - elif op_name in { - "add", - "sub", - "mul", - "truediv", - "floordiv", - "mod", - "divmod", - "pow", - }: - return partial(arithmetic_op, op=op) - else: - raise NotImplementedError(op_name) - - -def maybe_prepare_scalar_for_op(obj, shape: Shape): - """ - Cast non-pandas objects to pandas types to unify behavior of arithmetic - and comparison operations. - - Parameters - ---------- - obj: object - shape : tuple[int] - - Returns - ------- - out : object - - Notes - ----- - Be careful to call this *after* determining the `name` attribute to be - attached to the result of the arithmetic operation. 
- """ - if type(obj) is datetime.timedelta: - # GH#22390 cast up to Timedelta to rely on Timedelta - # implementation; otherwise operation against numeric-dtype - # raises TypeError - return Timedelta(obj) - elif type(obj) is datetime.datetime: - # cast up to Timestamp to rely on Timestamp implementation, see Timedelta above - return Timestamp(obj) - elif isinstance(obj, np.datetime64): - # GH#28080 numpy casts integer-dtype to datetime64 when doing - # array[int] + datetime64, which we do not allow - if isna(obj): - from pandas.core.arrays import DatetimeArray - - # Avoid possible ambiguities with pd.NaT - # GH 52295 - if is_unitless(obj.dtype): - obj = obj.astype("datetime64[ns]") - elif not is_supported_unit(get_unit_from_dtype(obj.dtype)): - unit = get_unit_from_dtype(obj.dtype) - closest_unit = npy_unit_to_abbrev(get_supported_reso(unit)) - obj = obj.astype(f"datetime64[{closest_unit}]") - right = np.broadcast_to(obj, shape) - return DatetimeArray(right) - - return Timestamp(obj) - - elif isinstance(obj, np.timedelta64): - if isna(obj): - from pandas.core.arrays import TimedeltaArray - - # wrapping timedelta64("NaT") in Timedelta returns NaT, - # which would incorrectly be treated as a datetime-NaT, so - # we broadcast and wrap in a TimedeltaArray - # GH 52295 - if is_unitless(obj.dtype): - obj = obj.astype("timedelta64[ns]") - elif not is_supported_unit(get_unit_from_dtype(obj.dtype)): - unit = get_unit_from_dtype(obj.dtype) - closest_unit = npy_unit_to_abbrev(get_supported_reso(unit)) - obj = obj.astype(f"timedelta64[{closest_unit}]") - right = np.broadcast_to(obj, shape) - return TimedeltaArray(right) - - # In particular non-nanosecond timedelta64 needs to be cast to - # nanoseconds, or else we get undesired behavior like - # np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D') - return Timedelta(obj) - - return obj - - -_BOOL_OP_NOT_ALLOWED = { - operator.truediv, - roperator.rtruediv, - operator.floordiv, - roperator.rfloordiv, - operator.pow, - roperator.rpow, -} - - -def _bool_arith_check(op, a: np.ndarray, b): - """ - In contrast to numpy, pandas raises an error for certain operations - with booleans. 
- """ - if op in _BOOL_OP_NOT_ALLOWED: - if a.dtype.kind == "b" and (is_bool_dtype(b) or lib.is_bool(b)): - op_name = op.__name__.strip("_").lstrip("r") - raise NotImplementedError( - f"operator '{op_name}' not implemented for bool dtypes" - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_to_offset.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_to_offset.py deleted file mode 100644 index 27ddbb82f49a9383b4864c5404c2c5d7d7030cef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_to_offset.py +++ /dev/null @@ -1,174 +0,0 @@ -import re - -import pytest - -from pandas._libs.tslibs import ( - Timedelta, - offsets, - to_offset, -) - - -@pytest.mark.parametrize( - "freq_input,expected", - [ - (to_offset("10us"), offsets.Micro(10)), - (offsets.Hour(), offsets.Hour()), - ("2h30min", offsets.Minute(150)), - ("2h 30min", offsets.Minute(150)), - ("2h30min15s", offsets.Second(150 * 60 + 15)), - ("2h 60min", offsets.Hour(3)), - ("2h 20.5min", offsets.Second(8430)), - ("1.5min", offsets.Second(90)), - ("0.5S", offsets.Milli(500)), - ("15l500u", offsets.Micro(15500)), - ("10s75L", offsets.Milli(10075)), - ("1s0.25ms", offsets.Micro(1000250)), - ("1s0.25L", offsets.Micro(1000250)), - ("2800N", offsets.Nano(2800)), - ("2SM", offsets.SemiMonthEnd(2)), - ("2SM-16", offsets.SemiMonthEnd(2, day_of_month=16)), - ("2SMS-14", offsets.SemiMonthBegin(2, day_of_month=14)), - ("2SMS-15", offsets.SemiMonthBegin(2)), - ], -) -def test_to_offset(freq_input, expected): - result = to_offset(freq_input) - assert result == expected - - -@pytest.mark.parametrize( - "freqstr,expected", [("-1S", -1), ("-2SM", -2), ("-1SMS", -1), ("-5min10s", -310)] -) -def test_to_offset_negative(freqstr, expected): - result = to_offset(freqstr) - assert result.n == expected - - -@pytest.mark.parametrize( - "freqstr", - [ - "2h20m", - "U1", - "-U", - "3U1", - "-2-3U", - "-2D:3H", - "1.5.0S", - "2SMS-15-15", - "2SMS-15D", - "100foo", - # Invalid leading +/- signs. - "+-1d", - "-+1h", - "+1", - "-7", - "+d", - "-m", - # Invalid shortcut anchors. - "SM-0", - "SM-28", - "SM-29", - "SM-FOO", - "BSM", - "SM--1", - "SMS-1", - "SMS-28", - "SMS-30", - "SMS-BAR", - "SMS-BYR", - "BSMS", - "SMS--2", - ], -) -def test_to_offset_invalid(freqstr): - # see gh-13930 - - # We escape string because some of our - # inputs contain regex special characters. 
- msg = re.escape(f"Invalid frequency: {freqstr}") - with pytest.raises(ValueError, match=msg): - to_offset(freqstr) - - -def test_to_offset_no_evaluate(): - msg = str(("", "")) - with pytest.raises(TypeError, match=msg): - to_offset(("", "")) - - -def test_to_offset_tuple_unsupported(): - with pytest.raises(TypeError, match="pass as a string instead"): - to_offset((5, "T")) - - -@pytest.mark.parametrize( - "freqstr,expected", - [ - ("2D 3H", offsets.Hour(51)), - ("2 D3 H", offsets.Hour(51)), - ("2 D 3 H", offsets.Hour(51)), - (" 2 D 3 H ", offsets.Hour(51)), - (" H ", offsets.Hour()), - (" 3 H ", offsets.Hour(3)), - ], -) -def test_to_offset_whitespace(freqstr, expected): - result = to_offset(freqstr) - assert result == expected - - -@pytest.mark.parametrize( - "freqstr,expected", [("00H 00T 01S", 1), ("-00H 03T 14S", -194)] -) -def test_to_offset_leading_zero(freqstr, expected): - result = to_offset(freqstr) - assert result.n == expected - - -@pytest.mark.parametrize("freqstr,expected", [("+1d", 1), ("+2h30min", 150)]) -def test_to_offset_leading_plus(freqstr, expected): - result = to_offset(freqstr) - assert result.n == expected - - -@pytest.mark.parametrize( - "kwargs,expected", - [ - ({"days": 1, "seconds": 1}, offsets.Second(86401)), - ({"days": -1, "seconds": 1}, offsets.Second(-86399)), - ({"hours": 1, "minutes": 10}, offsets.Minute(70)), - ({"hours": 1, "minutes": -10}, offsets.Minute(50)), - ({"weeks": 1}, offsets.Day(7)), - ({"hours": 1}, offsets.Hour(1)), - ({"hours": 1}, to_offset("60min")), - ({"microseconds": 1}, offsets.Micro(1)), - ({"microseconds": 0}, offsets.Nano(0)), - ], -) -def test_to_offset_pd_timedelta(kwargs, expected): - # see gh-9064 - td = Timedelta(**kwargs) - result = to_offset(td) - assert result == expected - - -@pytest.mark.parametrize( - "shortcut,expected", - [ - ("W", offsets.Week(weekday=6)), - ("W-SUN", offsets.Week(weekday=6)), - ("Q", offsets.QuarterEnd(startingMonth=12)), - ("Q-DEC", offsets.QuarterEnd(startingMonth=12)), - ("Q-MAY", offsets.QuarterEnd(startingMonth=5)), - ("SM", offsets.SemiMonthEnd(day_of_month=15)), - ("SM-15", offsets.SemiMonthEnd(day_of_month=15)), - ("SM-1", offsets.SemiMonthEnd(day_of_month=1)), - ("SM-27", offsets.SemiMonthEnd(day_of_month=27)), - ("SMS-2", offsets.SemiMonthBegin(day_of_month=2)), - ("SMS-27", offsets.SemiMonthBegin(day_of_month=27)), - ], -) -def test_anchored_shortcuts(shortcut, expected): - result = to_offset(shortcut) - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_win_type.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_win_type.py deleted file mode 100644 index 5052019ddb7264c4f81e99ccdd79d88d86865ec4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_win_type.py +++ /dev/null @@ -1,688 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Series, - Timedelta, - concat, - date_range, -) -import pandas._testing as tm -from pandas.api.indexers import BaseIndexer - - -@pytest.fixture( - params=[ - "triang", - "blackman", - "hamming", - "bartlett", - "bohman", - "blackmanharris", - "nuttall", - "barthann", - ] -) -def win_types(request): - return request.param - - -@pytest.fixture(params=["kaiser", "gaussian", "general_gaussian", "exponential"]) -def win_types_special(request): - return request.param - - -def test_constructor(frame_or_series): - # GH 12669 - 
pytest.importorskip("scipy") - c = frame_or_series(range(5)).rolling - - # valid - c(win_type="boxcar", window=2, min_periods=1) - c(win_type="boxcar", window=2, min_periods=1, center=True) - c(win_type="boxcar", window=2, min_periods=1, center=False) - - -@pytest.mark.parametrize("w", [2.0, "foo", np.array([2])]) -def test_invalid_constructor(frame_or_series, w): - # not valid - pytest.importorskip("scipy") - c = frame_or_series(range(5)).rolling - with pytest.raises(ValueError, match="min_periods must be an integer"): - c(win_type="boxcar", window=2, min_periods=w) - with pytest.raises(ValueError, match="center must be a boolean"): - c(win_type="boxcar", window=2, min_periods=1, center=w) - - -@pytest.mark.parametrize("wt", ["foobar", 1]) -def test_invalid_constructor_wintype(frame_or_series, wt): - pytest.importorskip("scipy") - c = frame_or_series(range(5)).rolling - with pytest.raises(ValueError, match="Invalid win_type"): - c(win_type=wt, window=2) - - -def test_constructor_with_win_type(frame_or_series, win_types): - # GH 12669 - pytest.importorskip("scipy") - c = frame_or_series(range(5)).rolling - c(win_type=win_types, window=2) - - -@pytest.mark.parametrize("arg", ["median", "kurt", "skew"]) -def test_agg_function_support(arg): - pytest.importorskip("scipy") - df = DataFrame({"A": np.arange(5)}) - roll = df.rolling(2, win_type="triang") - - msg = f"'{arg}' is not a valid function for 'Window' object" - with pytest.raises(AttributeError, match=msg): - roll.agg(arg) - - with pytest.raises(AttributeError, match=msg): - roll.agg([arg]) - - with pytest.raises(AttributeError, match=msg): - roll.agg({"A": arg}) - - -def test_invalid_scipy_arg(): - # This error is raised by scipy - pytest.importorskip("scipy") - msg = r"boxcar\(\) got an unexpected" - with pytest.raises(TypeError, match=msg): - Series(range(3)).rolling(1, win_type="boxcar").mean(foo="bar") - - -def test_constructor_with_win_type_invalid(frame_or_series): - # GH 13383 - pytest.importorskip("scipy") - c = frame_or_series(range(5)).rolling - - msg = "window must be an integer 0 or greater" - - with pytest.raises(ValueError, match=msg): - c(-1, win_type="boxcar") - - -def test_window_with_args(step): - # make sure that we are aggregating window functions correctly with arg - pytest.importorskip("scipy") - r = Series(np.random.default_rng(2).standard_normal(100)).rolling( - window=10, min_periods=1, win_type="gaussian", step=step - ) - expected = concat([r.mean(std=10), r.mean(std=0.01)], axis=1) - expected.columns = ["", ""] - result = r.aggregate([lambda x: x.mean(std=10), lambda x: x.mean(std=0.01)]) - tm.assert_frame_equal(result, expected) - - def a(x): - return x.mean(std=10) - - def b(x): - return x.mean(std=0.01) - - expected = concat([r.mean(std=10), r.mean(std=0.01)], axis=1) - expected.columns = ["a", "b"] - result = r.aggregate([a, b]) - tm.assert_frame_equal(result, expected) - - -def test_win_type_with_method_invalid(): - pytest.importorskip("scipy") - with pytest.raises( - NotImplementedError, match="'single' is the only supported method type." 
- ): - Series(range(1)).rolling(1, win_type="triang", method="table") - - -@pytest.mark.parametrize("arg", [2000000000, "2s", Timedelta("2s")]) -def test_consistent_win_type_freq(arg): - # GH 15969 - pytest.importorskip("scipy") - s = Series(range(1)) - with pytest.raises(ValueError, match="Invalid win_type freq"): - s.rolling(arg, win_type="freq") - - -def test_win_type_freq_return_none(): - # GH 48838 - freq_roll = Series(range(2), index=date_range("2020", periods=2)).rolling("2s") - assert freq_roll.win_type is None - - -def test_win_type_not_implemented(): - pytest.importorskip("scipy") - - class CustomIndexer(BaseIndexer): - def get_window_bounds(self, num_values, min_periods, center, closed, step): - return np.array([0, 1]), np.array([1, 2]) - - df = DataFrame({"values": range(2)}) - indexer = CustomIndexer() - with pytest.raises(NotImplementedError, match="BaseIndexer subclasses not"): - df.rolling(indexer, win_type="boxcar") - - -def test_cmov_mean(step): - # GH 8238 - pytest.importorskip("scipy") - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, 10.63, 14.48]) - result = Series(vals).rolling(5, center=True, step=step).mean() - expected_values = [ - np.nan, - np.nan, - 9.962, - 11.27, - 11.564, - 12.516, - 12.818, - 12.952, - np.nan, - np.nan, - ] - expected = Series(expected_values)[::step] - tm.assert_series_equal(expected, result) - - -def test_cmov_window(step): - # GH 8238 - pytest.importorskip("scipy") - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, 10.63, 14.48]) - result = Series(vals).rolling(5, win_type="boxcar", center=True, step=step).mean() - expected_values = [ - np.nan, - np.nan, - 9.962, - 11.27, - 11.564, - 12.516, - 12.818, - 12.952, - np.nan, - np.nan, - ] - expected = Series(expected_values)[::step] - tm.assert_series_equal(expected, result) - - -def test_cmov_window_corner(step): - # GH 8238 - # all nan - pytest.importorskip("scipy") - vals = Series([np.nan] * 10) - result = vals.rolling(5, center=True, win_type="boxcar", step=step).mean() - assert np.isnan(result).all() - - # empty - vals = Series([], dtype=object) - result = vals.rolling(5, center=True, win_type="boxcar", step=step).mean() - assert len(result) == 0 - - # shorter than window - vals = Series(np.random.default_rng(2).standard_normal(5)) - result = vals.rolling(10, win_type="boxcar", step=step).mean() - assert np.isnan(result).all() - assert len(result) == len(range(0, 5, step or 1)) - - -@pytest.mark.parametrize( - "f,xp", - [ - ( - "mean", - [ - [np.nan, np.nan], - [np.nan, np.nan], - [9.252, 9.392], - [8.644, 9.906], - [8.87, 10.208], - [6.81, 8.588], - [7.792, 8.644], - [9.05, 7.824], - [np.nan, np.nan], - [np.nan, np.nan], - ], - ), - ( - "std", - [ - [np.nan, np.nan], - [np.nan, np.nan], - [3.789706, 4.068313], - [3.429232, 3.237411], - [3.589269, 3.220810], - [3.405195, 2.380655], - [3.281839, 2.369869], - [3.676846, 1.801799], - [np.nan, np.nan], - [np.nan, np.nan], - ], - ), - ( - "var", - [ - [np.nan, np.nan], - [np.nan, np.nan], - [14.36187, 16.55117], - [11.75963, 10.48083], - [12.88285, 10.37362], - [11.59535, 5.66752], - [10.77047, 5.61628], - [13.51920, 3.24648], - [np.nan, np.nan], - [np.nan, np.nan], - ], - ), - ( - "sum", - [ - [np.nan, np.nan], - [np.nan, np.nan], - [46.26, 46.96], - [43.22, 49.53], - [44.35, 51.04], - [34.05, 42.94], - [38.96, 43.22], - [45.25, 39.12], - [np.nan, np.nan], - [np.nan, np.nan], - ], - ), - ], -) -def test_cmov_window_frame(f, xp, step): - # Gh 8238 - pytest.importorskip("scipy") - df = DataFrame( - 
np.array( - [ - [12.18, 3.64], - [10.18, 9.16], - [13.24, 14.61], - [4.51, 8.11], - [6.15, 11.44], - [9.14, 6.21], - [11.31, 10.67], - [2.94, 6.51], - [9.42, 8.39], - [12.44, 7.34], - ] - ) - ) - xp = DataFrame(np.array(xp))[::step] - - roll = df.rolling(5, win_type="boxcar", center=True, step=step) - rs = getattr(roll, f)() - - tm.assert_frame_equal(xp, rs) - - -@pytest.mark.parametrize("min_periods", [0, 1, 2, 3, 4, 5]) -def test_cmov_window_na_min_periods(step, min_periods): - pytest.importorskip("scipy") - vals = Series(np.random.default_rng(2).standard_normal(10)) - vals[4] = np.nan - vals[8] = np.nan - - xp = vals.rolling(5, min_periods=min_periods, center=True, step=step).mean() - rs = vals.rolling( - 5, win_type="boxcar", min_periods=min_periods, center=True, step=step - ).mean() - tm.assert_series_equal(xp, rs) - - -def test_cmov_window_regular(win_types, step): - # GH 8238 - pytest.importorskip("scipy") - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, 10.63, 14.48]) - xps = { - "hamming": [ - np.nan, - np.nan, - 8.71384, - 9.56348, - 12.38009, - 14.03687, - 13.8567, - 11.81473, - np.nan, - np.nan, - ], - "triang": [ - np.nan, - np.nan, - 9.28667, - 10.34667, - 12.00556, - 13.33889, - 13.38, - 12.33667, - np.nan, - np.nan, - ], - "barthann": [ - np.nan, - np.nan, - 8.4425, - 9.1925, - 12.5575, - 14.3675, - 14.0825, - 11.5675, - np.nan, - np.nan, - ], - "bohman": [ - np.nan, - np.nan, - 7.61599, - 9.1764, - 12.83559, - 14.17267, - 14.65923, - 11.10401, - np.nan, - np.nan, - ], - "blackmanharris": [ - np.nan, - np.nan, - 6.97691, - 9.16438, - 13.05052, - 14.02156, - 15.10512, - 10.74574, - np.nan, - np.nan, - ], - "nuttall": [ - np.nan, - np.nan, - 7.04618, - 9.16786, - 13.02671, - 14.03559, - 15.05657, - 10.78514, - np.nan, - np.nan, - ], - "blackman": [ - np.nan, - np.nan, - 7.73345, - 9.17869, - 12.79607, - 14.20036, - 14.57726, - 11.16988, - np.nan, - np.nan, - ], - "bartlett": [ - np.nan, - np.nan, - 8.4425, - 9.1925, - 12.5575, - 14.3675, - 14.0825, - 11.5675, - np.nan, - np.nan, - ], - } - - xp = Series(xps[win_types])[::step] - rs = Series(vals).rolling(5, win_type=win_types, center=True, step=step).mean() - tm.assert_series_equal(xp, rs) - - -def test_cmov_window_regular_linear_range(win_types, step): - # GH 8238 - pytest.importorskip("scipy") - vals = np.array(range(10), dtype=float) - xp = vals.copy() - xp[:2] = np.nan - xp[-2:] = np.nan - xp = Series(xp)[::step] - - rs = Series(vals).rolling(5, win_type=win_types, center=True, step=step).mean() - tm.assert_series_equal(xp, rs) - - -def test_cmov_window_regular_missing_data(win_types, step): - # GH 8238 - pytest.importorskip("scipy") - vals = np.array( - [6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, np.nan, 10.63, 14.48] - ) - xps = { - "bartlett": [ - np.nan, - np.nan, - 9.70333, - 10.5225, - 8.4425, - 9.1925, - 12.5575, - 14.3675, - 15.61667, - 13.655, - ], - "blackman": [ - np.nan, - np.nan, - 9.04582, - 11.41536, - 7.73345, - 9.17869, - 12.79607, - 14.20036, - 15.8706, - 13.655, - ], - "barthann": [ - np.nan, - np.nan, - 9.70333, - 10.5225, - 8.4425, - 9.1925, - 12.5575, - 14.3675, - 15.61667, - 13.655, - ], - "bohman": [ - np.nan, - np.nan, - 8.9444, - 11.56327, - 7.61599, - 9.1764, - 12.83559, - 14.17267, - 15.90976, - 13.655, - ], - "hamming": [ - np.nan, - np.nan, - 9.59321, - 10.29694, - 8.71384, - 9.56348, - 12.38009, - 14.20565, - 15.24694, - 13.69758, - ], - "nuttall": [ - np.nan, - np.nan, - 8.47693, - 12.2821, - 7.04618, - 9.16786, - 13.02671, - 14.03673, - 16.08759, - 13.65553, - ], - 
"triang": [ - np.nan, - np.nan, - 9.33167, - 9.76125, - 9.28667, - 10.34667, - 12.00556, - 13.82125, - 14.49429, - 13.765, - ], - "blackmanharris": [ - np.nan, - np.nan, - 8.42526, - 12.36824, - 6.97691, - 9.16438, - 13.05052, - 14.02175, - 16.1098, - 13.65509, - ], - } - - xp = Series(xps[win_types])[::step] - rs = Series(vals).rolling(5, win_type=win_types, min_periods=3, step=step).mean() - tm.assert_series_equal(xp, rs) - - -def test_cmov_window_special(win_types_special, step): - # GH 8238 - pytest.importorskip("scipy") - kwds = { - "kaiser": {"beta": 1.0}, - "gaussian": {"std": 1.0}, - "general_gaussian": {"p": 2.0, "sig": 2.0}, - "exponential": {"tau": 10}, - } - - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, 10.63, 14.48]) - - xps = { - "gaussian": [ - np.nan, - np.nan, - 8.97297, - 9.76077, - 12.24763, - 13.89053, - 13.65671, - 12.01002, - np.nan, - np.nan, - ], - "general_gaussian": [ - np.nan, - np.nan, - 9.85011, - 10.71589, - 11.73161, - 13.08516, - 12.95111, - 12.74577, - np.nan, - np.nan, - ], - "kaiser": [ - np.nan, - np.nan, - 9.86851, - 11.02969, - 11.65161, - 12.75129, - 12.90702, - 12.83757, - np.nan, - np.nan, - ], - "exponential": [ - np.nan, - np.nan, - 9.83364, - 11.10472, - 11.64551, - 12.66138, - 12.92379, - 12.83770, - np.nan, - np.nan, - ], - } - - xp = Series(xps[win_types_special])[::step] - rs = ( - Series(vals) - .rolling(5, win_type=win_types_special, center=True, step=step) - .mean(**kwds[win_types_special]) - ) - tm.assert_series_equal(xp, rs) - - -def test_cmov_window_special_linear_range(win_types_special, step): - # GH 8238 - pytest.importorskip("scipy") - kwds = { - "kaiser": {"beta": 1.0}, - "gaussian": {"std": 1.0}, - "general_gaussian": {"p": 2.0, "sig": 2.0}, - "slepian": {"width": 0.5}, - "exponential": {"tau": 10}, - } - - vals = np.array(range(10), dtype=float) - xp = vals.copy() - xp[:2] = np.nan - xp[-2:] = np.nan - xp = Series(xp)[::step] - - rs = ( - Series(vals) - .rolling(5, win_type=win_types_special, center=True, step=step) - .mean(**kwds[win_types_special]) - ) - tm.assert_series_equal(xp, rs) - - -def test_weighted_var_big_window_no_segfault(win_types, center): - # GitHub Issue #46772 - pytest.importorskip("scipy") - x = Series(0) - result = x.rolling(window=16, center=center, win_type=win_types).var() - expected = Series(np.nan) - - tm.assert_series_equal(result, expected) - - -def test_rolling_center_axis_1(): - pytest.importorskip("scipy") - df = DataFrame( - {"a": [1, 1, 0, 0, 0, 1], "b": [1, 0, 0, 1, 0, 0], "c": [1, 0, 0, 1, 0, 1]} - ) - - msg = "Support for axis=1 in DataFrame.rolling is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = df.rolling(window=3, axis=1, win_type="boxcar", center=True).sum() - - expected = DataFrame( - {"a": [np.nan] * 6, "b": [3.0, 1.0, 0.0, 2.0, 0.0, 2.0], "c": [np.nan] * 6} - ) - - tm.assert_frame_equal(result, expected, check_dtype=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/class_validators.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/class_validators.py deleted file mode 100644 index 71e66509398510a493286d4a23aa112b73cbe6c6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/class_validators.py +++ /dev/null @@ -1,361 +0,0 @@ -import warnings -from collections import ChainMap -from functools import partial, partialmethod, wraps -from itertools import chain -from types import 
FunctionType -from typing import TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, Type, Union, overload - -from .errors import ConfigError -from .typing import AnyCallable -from .utils import ROOT_KEY, in_ipython - -if TYPE_CHECKING: - from .typing import AnyClassMethod - - -class Validator: - __slots__ = 'func', 'pre', 'each_item', 'always', 'check_fields', 'skip_on_failure' - - def __init__( - self, - func: AnyCallable, - pre: bool = False, - each_item: bool = False, - always: bool = False, - check_fields: bool = False, - skip_on_failure: bool = False, - ): - self.func = func - self.pre = pre - self.each_item = each_item - self.always = always - self.check_fields = check_fields - self.skip_on_failure = skip_on_failure - - -if TYPE_CHECKING: - from inspect import Signature - - from .config import BaseConfig - from .fields import ModelField - from .types import ModelOrDc - - ValidatorCallable = Callable[[Optional[ModelOrDc], Any, Dict[str, Any], ModelField, Type[BaseConfig]], Any] - ValidatorsList = List[ValidatorCallable] - ValidatorListDict = Dict[str, List[Validator]] - -_FUNCS: Set[str] = set() -VALIDATOR_CONFIG_KEY = '__validator_config__' -ROOT_VALIDATOR_CONFIG_KEY = '__root_validator_config__' - - -def validator( - *fields: str, - pre: bool = False, - each_item: bool = False, - always: bool = False, - check_fields: bool = True, - whole: Optional[bool] = None, - allow_reuse: bool = False, -) -> Callable[[AnyCallable], 'AnyClassMethod']: - """ - Decorate methods on the class indicating that they should be used to validate fields - :param fields: which field(s) the method should be called on - :param pre: whether or not this validator should be called before the standard validators (else after) - :param each_item: for complex objects (sets, lists etc.) whether to validate individual elements rather than the - whole object - :param always: whether this method and other validators should be called even if the value is missing - :param check_fields: whether to check that the fields actually exist on the model - :param allow_reuse: whether to track and raise an error if another validator refers to the decorated function - """ - if not fields: - raise ConfigError('validator with no fields specified') - elif isinstance(fields[0], FunctionType): - raise ConfigError( - "validators should be used with fields and keyword arguments, not bare. " # noqa: Q000 - "E.g. usage should be `@validator('', ...)`" - ) - elif not all(isinstance(field, str) for field in fields): - raise ConfigError( - "validator fields should be passed as separate string args. " # noqa: Q000 - "E.g. usage should be `@validator('', '', ...)`" - ) - - if whole is not None: - warnings.warn( - 'The "whole" keyword argument is deprecated, use "each_item" (inverse meaning, default False) instead', - DeprecationWarning, - ) - assert each_item is False, '"each_item" and "whole" conflict, remove "whole"' - each_item = not whole - - def dec(f: AnyCallable) -> 'AnyClassMethod': - f_cls = _prepare_validator(f, allow_reuse) - setattr( - f_cls, - VALIDATOR_CONFIG_KEY, - ( - fields, - Validator(func=f_cls.__func__, pre=pre, each_item=each_item, always=always, check_fields=check_fields), - ), - ) - return f_cls - - return dec - - -@overload -def root_validator(_func: AnyCallable) -> 'AnyClassMethod': - ... - - -@overload -def root_validator( - *, pre: bool = False, allow_reuse: bool = False, skip_on_failure: bool = False -) -> Callable[[AnyCallable], 'AnyClassMethod']: - ... 
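# --- Editor's sketch (not part of the deleted file above): how the decorators
# defined in this module are meant to be used. This assumes the pydantic v1 API
# (on pydantic 2 the same names live under `pydantic.v1`, which is the package
# this file belongs to). The model and field names below are hypothetical.
from pydantic import BaseModel, ValidationError, root_validator, validator  # or: from pydantic.v1 import ...


class Account(BaseModel):
    name: str
    password: str
    password_confirm: str

    @validator("name")
    def name_not_blank(cls, v):
        # Field validator: called with the parsed value; must return the
        # (possibly transformed) value or raise ValueError/TypeError.
        if not v.strip():
            raise ValueError("name must not be blank")
        return v.strip()

    @root_validator  # bare usage is what the @overload stubs above allow
    def passwords_match(cls, values):
        # Root validator: runs after field validation with the dict of all
        # successfully parsed fields; must return the values dict.
        if values.get("password") != values.get("password_confirm"):
            raise ValueError("passwords do not match")
        return values


try:
    Account(name=" ", password="a", password_confirm="b")
except ValidationError as exc:
    print(exc)  # reports both the blank name and the password mismatch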
- - -def root_validator( - _func: Optional[AnyCallable] = None, *, pre: bool = False, allow_reuse: bool = False, skip_on_failure: bool = False -) -> Union['AnyClassMethod', Callable[[AnyCallable], 'AnyClassMethod']]: - """ - Decorate methods on a model indicating that they should be used to validate (and perhaps modify) data either - before or after standard model parsing/validation is performed. - """ - if _func: - f_cls = _prepare_validator(_func, allow_reuse) - setattr( - f_cls, ROOT_VALIDATOR_CONFIG_KEY, Validator(func=f_cls.__func__, pre=pre, skip_on_failure=skip_on_failure) - ) - return f_cls - - def dec(f: AnyCallable) -> 'AnyClassMethod': - f_cls = _prepare_validator(f, allow_reuse) - setattr( - f_cls, ROOT_VALIDATOR_CONFIG_KEY, Validator(func=f_cls.__func__, pre=pre, skip_on_failure=skip_on_failure) - ) - return f_cls - - return dec - - -def _prepare_validator(function: AnyCallable, allow_reuse: bool) -> 'AnyClassMethod': - """ - Avoid validators with duplicated names since without this, validators can be overwritten silently - which generally isn't the intended behaviour, don't run in ipython (see #312) or if allow_reuse is False. - """ - f_cls = function if isinstance(function, classmethod) else classmethod(function) - if not in_ipython() and not allow_reuse: - ref = ( - getattr(f_cls.__func__, '__module__', '') - + '.' - + getattr(f_cls.__func__, '__qualname__', f'') - ) - if ref in _FUNCS: - raise ConfigError(f'duplicate validator function "{ref}"; if this is intended, set `allow_reuse=True`') - _FUNCS.add(ref) - return f_cls - - -class ValidatorGroup: - def __init__(self, validators: 'ValidatorListDict') -> None: - self.validators = validators - self.used_validators = {'*'} - - def get_validators(self, name: str) -> Optional[Dict[str, Validator]]: - self.used_validators.add(name) - validators = self.validators.get(name, []) - if name != ROOT_KEY: - validators += self.validators.get('*', []) - if validators: - return {getattr(v.func, '__name__', f''): v for v in validators} - else: - return None - - def check_for_unused(self) -> None: - unused_validators = set( - chain.from_iterable( - ( - getattr(v.func, '__name__', f'') - for v in self.validators[f] - if v.check_fields - ) - for f in (self.validators.keys() - self.used_validators) - ) - ) - if unused_validators: - fn = ', '.join(unused_validators) - raise ConfigError( - f"Validators defined with incorrect fields: {fn} " # noqa: Q000 - f"(use check_fields=False if you're inheriting from the model and intended this)" - ) - - -def extract_validators(namespace: Dict[str, Any]) -> Dict[str, List[Validator]]: - validators: Dict[str, List[Validator]] = {} - for var_name, value in namespace.items(): - validator_config = getattr(value, VALIDATOR_CONFIG_KEY, None) - if validator_config: - fields, v = validator_config - for field in fields: - if field in validators: - validators[field].append(v) - else: - validators[field] = [v] - return validators - - -def extract_root_validators(namespace: Dict[str, Any]) -> Tuple[List[AnyCallable], List[Tuple[bool, AnyCallable]]]: - from inspect import signature - - pre_validators: List[AnyCallable] = [] - post_validators: List[Tuple[bool, AnyCallable]] = [] - for name, value in namespace.items(): - validator_config: Optional[Validator] = getattr(value, ROOT_VALIDATOR_CONFIG_KEY, None) - if validator_config: - sig = signature(validator_config.func) - args = list(sig.parameters.keys()) - if args[0] == 'self': - raise ConfigError( - f'Invalid signature for root validator {name}: {sig}, "self" not 
permitted as first argument, ' - f'should be: (cls, values).' - ) - if len(args) != 2: - raise ConfigError(f'Invalid signature for root validator {name}: {sig}, should be: (cls, values).') - # check function signature - if validator_config.pre: - pre_validators.append(validator_config.func) - else: - post_validators.append((validator_config.skip_on_failure, validator_config.func)) - return pre_validators, post_validators - - -def inherit_validators(base_validators: 'ValidatorListDict', validators: 'ValidatorListDict') -> 'ValidatorListDict': - for field, field_validators in base_validators.items(): - if field not in validators: - validators[field] = [] - validators[field] += field_validators - return validators - - -def make_generic_validator(validator: AnyCallable) -> 'ValidatorCallable': - """ - Make a generic function which calls a validator with the right arguments. - - Unfortunately other approaches (eg. return a partial of a function that builds the arguments) is slow, - hence this laborious way of doing things. - - It's done like this so validators don't all need **kwargs in their signature, eg. any combination of - the arguments "values", "fields" and/or "config" are permitted. - """ - from inspect import signature - - if not isinstance(validator, (partial, partialmethod)): - # This should be the default case, so overhead is reduced - sig = signature(validator) - args = list(sig.parameters.keys()) - else: - # Fix the generated argument lists of partial methods - sig = signature(validator.func) - args = [ - k - for k in signature(validator.func).parameters.keys() - if k not in validator.args | validator.keywords.keys() - ] - - first_arg = args.pop(0) - if first_arg == 'self': - raise ConfigError( - f'Invalid signature for validator {validator}: {sig}, "self" not permitted as first argument, ' - f'should be: (cls, value, values, config, field), "values", "config" and "field" are all optional.' - ) - elif first_arg == 'cls': - # assume the second argument is value - return wraps(validator)(_generic_validator_cls(validator, sig, set(args[1:]))) - else: - # assume the first argument was value which has already been removed - return wraps(validator)(_generic_validator_basic(validator, sig, set(args))) - - -def prep_validators(v_funcs: Iterable[AnyCallable]) -> 'ValidatorsList': - return [make_generic_validator(f) for f in v_funcs if f] - - -all_kwargs = {'values', 'field', 'config'} - - -def _generic_validator_cls(validator: AnyCallable, sig: 'Signature', args: Set[str]) -> 'ValidatorCallable': - # assume the first argument is value - has_kwargs = False - if 'kwargs' in args: - has_kwargs = True - args -= {'kwargs'} - - if not args.issubset(all_kwargs): - raise ConfigError( - f'Invalid signature for validator {validator}: {sig}, should be: ' - f'(cls, value, values, config, field), "values", "config" and "field" are all optional.' 
- ) - - if has_kwargs: - return lambda cls, v, values, field, config: validator(cls, v, values=values, field=field, config=config) - elif args == set(): - return lambda cls, v, values, field, config: validator(cls, v) - elif args == {'values'}: - return lambda cls, v, values, field, config: validator(cls, v, values=values) - elif args == {'field'}: - return lambda cls, v, values, field, config: validator(cls, v, field=field) - elif args == {'config'}: - return lambda cls, v, values, field, config: validator(cls, v, config=config) - elif args == {'values', 'field'}: - return lambda cls, v, values, field, config: validator(cls, v, values=values, field=field) - elif args == {'values', 'config'}: - return lambda cls, v, values, field, config: validator(cls, v, values=values, config=config) - elif args == {'field', 'config'}: - return lambda cls, v, values, field, config: validator(cls, v, field=field, config=config) - else: - # args == {'values', 'field', 'config'} - return lambda cls, v, values, field, config: validator(cls, v, values=values, field=field, config=config) - - -def _generic_validator_basic(validator: AnyCallable, sig: 'Signature', args: Set[str]) -> 'ValidatorCallable': - has_kwargs = False - if 'kwargs' in args: - has_kwargs = True - args -= {'kwargs'} - - if not args.issubset(all_kwargs): - raise ConfigError( - f'Invalid signature for validator {validator}: {sig}, should be: ' - f'(value, values, config, field), "values", "config" and "field" are all optional.' - ) - - if has_kwargs: - return lambda cls, v, values, field, config: validator(v, values=values, field=field, config=config) - elif args == set(): - return lambda cls, v, values, field, config: validator(v) - elif args == {'values'}: - return lambda cls, v, values, field, config: validator(v, values=values) - elif args == {'field'}: - return lambda cls, v, values, field, config: validator(v, field=field) - elif args == {'config'}: - return lambda cls, v, values, field, config: validator(v, config=config) - elif args == {'values', 'field'}: - return lambda cls, v, values, field, config: validator(v, values=values, field=field) - elif args == {'values', 'config'}: - return lambda cls, v, values, field, config: validator(v, values=values, config=config) - elif args == {'field', 'config'}: - return lambda cls, v, values, field, config: validator(v, field=field, config=config) - else: - # args == {'values', 'field', 'config'} - return lambda cls, v, values, field, config: validator(v, values=values, field=field, config=config) - - -def gather_all_validators(type_: 'ModelOrDc') -> Dict[str, 'AnyClassMethod']: - all_attributes = ChainMap(*[cls.__dict__ for cls in type_.__mro__]) # type: ignore[arg-type,var-annotated] - return { - k: v - for k, v in all_attributes.items() - if hasattr(v, VALIDATOR_CONFIG_KEY) or hasattr(v, ROOT_VALIDATOR_CONFIG_KEY) - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/tzfile.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/tzfile.py deleted file mode 100644 index 99e74489b859e21fcaa68e93089035c3d81a73c8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pytz/tzfile.py +++ /dev/null @@ -1,133 +0,0 @@ -''' -$Id: tzfile.py,v 1.8 2004/06/03 00:15:24 zenzen Exp $ -''' - -from datetime import datetime -from struct import unpack, calcsize - -from pytz.tzinfo import StaticTzInfo, DstTzInfo, memorized_ttinfo -from pytz.tzinfo import memorized_datetime, memorized_timedelta - - -def 
_byte_string(s): - """Cast a string or byte string to an ASCII byte string.""" - return s.encode('ASCII') - -_NULL = _byte_string('\0') - - -def _std_string(s): - """Cast a string or byte string to an ASCII string.""" - return str(s.decode('ASCII')) - - -def build_tzinfo(zone, fp): - head_fmt = '>4s c 15x 6l' - head_size = calcsize(head_fmt) - (magic, format, ttisgmtcnt, ttisstdcnt, leapcnt, timecnt, - typecnt, charcnt) = unpack(head_fmt, fp.read(head_size)) - - # Make sure it is a tzfile(5) file - assert magic == _byte_string('TZif'), 'Got magic %s' % repr(magic) - - # Read out the transition times, localtime indices and ttinfo structures. - data_fmt = '>%(timecnt)dl %(timecnt)dB %(ttinfo)s %(charcnt)ds' % dict( - timecnt=timecnt, ttinfo='lBB' * typecnt, charcnt=charcnt) - data_size = calcsize(data_fmt) - data = unpack(data_fmt, fp.read(data_size)) - - # make sure we unpacked the right number of values - assert len(data) == 2 * timecnt + 3 * typecnt + 1 - transitions = [memorized_datetime(trans) - for trans in data[:timecnt]] - lindexes = list(data[timecnt:2 * timecnt]) - ttinfo_raw = data[2 * timecnt:-1] - tznames_raw = data[-1] - del data - - # Process ttinfo into separate structs - ttinfo = [] - tznames = {} - i = 0 - while i < len(ttinfo_raw): - # have we looked up this timezone name yet? - tzname_offset = ttinfo_raw[i + 2] - if tzname_offset not in tznames: - nul = tznames_raw.find(_NULL, tzname_offset) - if nul < 0: - nul = len(tznames_raw) - tznames[tzname_offset] = _std_string( - tznames_raw[tzname_offset:nul]) - ttinfo.append((ttinfo_raw[i], - bool(ttinfo_raw[i + 1]), - tznames[tzname_offset])) - i += 3 - - # Now build the timezone object - if len(ttinfo) == 1 or len(transitions) == 0: - ttinfo[0][0], ttinfo[0][2] - cls = type(zone, (StaticTzInfo,), dict( - zone=zone, - _utcoffset=memorized_timedelta(ttinfo[0][0]), - _tzname=ttinfo[0][2])) - else: - # Early dates use the first standard time ttinfo - i = 0 - while ttinfo[i][1]: - i += 1 - if ttinfo[i] == ttinfo[lindexes[0]]: - transitions[0] = datetime.min - else: - transitions.insert(0, datetime.min) - lindexes.insert(0, i) - - # calculate transition info - transition_info = [] - for i in range(len(transitions)): - inf = ttinfo[lindexes[i]] - utcoffset = inf[0] - if not inf[1]: - dst = 0 - else: - for j in range(i - 1, -1, -1): - prev_inf = ttinfo[lindexes[j]] - if not prev_inf[1]: - break - dst = inf[0] - prev_inf[0] # dst offset - - # Bad dst? Look further. DST > 24 hours happens when - # a timzone has moved across the international dateline. - if dst <= 0 or dst > 3600 * 3: - for j in range(i + 1, len(transitions)): - stdinf = ttinfo[lindexes[j]] - if not stdinf[1]: - dst = inf[0] - stdinf[0] - if dst > 0: - break # Found a useful std time. - - tzname = inf[2] - - # Round utcoffset and dst to the nearest minute or the - # datetime library will complain. Conversions to these timezones - # might be up to plus or minus 30 seconds out, but it is - # the best we can do. 
- utcoffset = int((utcoffset + 30) // 60) * 60 - dst = int((dst + 30) // 60) * 60 - transition_info.append(memorized_ttinfo(utcoffset, dst, tzname)) - - cls = type(zone, (DstTzInfo,), dict( - zone=zone, - _utc_transition_times=transitions, - _transition_info=transition_info)) - - return cls() - -if __name__ == '__main__': - import os.path - from pprint import pprint - base = os.path.join(os.path.dirname(__file__), 'zoneinfo') - tz = build_tzinfo('Australia/Melbourne', - open(os.path.join(base, 'Australia', 'Melbourne'), 'rb')) - tz = build_tzinfo('US/Eastern', - open(os.path.join(base, 'US', 'Eastern'), 'rb')) - pprint(tz._utc_transition_times) diff --git a/spaces/pseudolab/Balanced-News-Reading/README.md b/spaces/pseudolab/Balanced-News-Reading/README.md deleted file mode 100644 index 318dfd17ebef9b3b7f8ddabad848a854889fb05e..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/Balanced-News-Reading/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Balanced News Reading -emoji: 🐠 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -기사 정보를 가져와서 요약하고 해당기사의 감정평가를 보여줍니다. -기사 요약과 감정평가로 기사를 균형있게 볼 수 있습니다. - -# Space Name -- Balanced-News-Reading [효과적인 뉴스읽기] -# Member -1. ID: gabrielyang -2. ID: Hyeonseo -3. ID: diff --git a/spaces/pytorch/NTSNET/README.md b/spaces/pytorch/NTSNET/README.md deleted file mode 100644 index 9cb1f2965e6e2749dbbdda853dc4007808ed9dd9..0000000000000000000000000000000000000000 --- a/spaces/pytorch/NTSNET/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: NTSNET -emoji: 🚀 -colorFrom: green -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pytorch/PGAN/app.py b/spaces/pytorch/PGAN/app.py deleted file mode 100644 index 3731a37f9b99d66e317f997a07ac27b0ea587697..0000000000000000000000000000000000000000 --- a/spaces/pytorch/PGAN/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import torch -import matplotlib.pyplot as plt -import torchvision -import gradio as gr -use_gpu = True if torch.cuda.is_available() else False - -model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', - 'PGAN', model_name='celebAHQ-512', - pretrained=True, useGPU=use_gpu) - - - -def pggan(num_images): - noise, _ = model.buildNoiseData(int(num_images)) - with torch.no_grad(): - generated_images = model.test(noise) - - grid = torchvision.utils.make_grid(generated_images.clamp(min=-1, max=1), scale_each=True, normalize=True) - plt.axis("off") - plt.imshow(grid.permute(1, 2, 0).cpu().numpy()) - return plt - - -inputs = gr.inputs.Number(label="number of images") -outputs = gr.outputs.Image(label="Output Image") - -title = "Progressive Growing of GANs" -description = "Gradio demo for Progressive Growing of GANs (PGAN). To use it, simply add the number of images to generate or click on the examples. Read more below." -article = "

        Progressive Growing of GANs for Improved Quality, Stability, and Variation | Github Repo

        " -examples = [ - [1], - [2], - [3], - [4] -] - - -gr.Interface(pggan, inputs, outputs, title=title, description=description, article=article, analytics_enabled=False, examples=examples).launch(debug=True) \ No newline at end of file diff --git a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/longcode/jpge.cpp b/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/longcode/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/longcode/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. 
-enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. 
-template inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); } - -const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329; -static inline uint8 clamp(int i) { if (static_cast(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast(i); } - -static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels) -{ - for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; } -} - -// Forward DCT - DCT derived from jfdctint. 
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast(var) * static_cast(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
-static void calculate_minimum_redundancy(sym_freq *A, int n) -{ - int root, leaf, next, avbl, used, dpth; - if (n==0) return; else if (n==1) { A[0].m_key = 1; return; } - A[0].m_key += A[1].m_key; root = 0; leaf = 2; - for (next=1; next < n-1; next++) - { - if (leaf>=n || A[root].m_key=n || (root=0; next--) A[next].m_key = A[A[next].m_key].m_key+1; - avbl = 1; used = dpth = 0; root = n-2; next = n-1; - while (avbl>0) - { - while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; } - while (avbl>used) { A[next--].m_key = dpth; avbl--; } - avbl = 2*used; dpth++; used = 0; - } -} - -// Limits canonical Huffman code table's max code size to max_code_size. -static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size) -{ - if (code_list_len <= 1) return; - - for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i]; - - uint32 total = 0; - for (int i = max_code_size; i > 0; i--) - total += (((uint32)pNum_codes[i]) << (max_code_size - i)); - - while (total != (1UL << max_code_size)) - { - pNum_codes[max_code_size]--; - for (int i = max_code_size - 1; i > 0; i--) - { - if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; } - } - total--; - } -} - -// Generates an optimized offman table. -void jpeg_encoder::optimize_huffman_table(int table_num, int table_len) -{ - sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS]; - syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's - int num_used_syms = 1; - const uint32 *pSym_count = &m_huff_count[table_num][0]; - for (int i = 0; i < table_len; i++) - if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; } - sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1); - calculate_minimum_redundancy(pSyms, num_used_syms); - - // Count the # of symbols of each code size. - int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes); - for (int i = 0; i < num_used_syms; i++) - num_codes[pSyms[i].m_key]++; - - const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol) - huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT); - - // Compute m_huff_bits array, which contains the # of symbols per code size. - clear_obj(m_huff_bits[table_num]); - for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++) - m_huff_bits[table_num][i] = static_cast(num_codes[i]); - - // Remove the dummy symbol added above, which must be in largest bucket. - for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--) - { - if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; } - } - - // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest). - for (int i = num_used_syms - 1; i >= 1; i--) - m_huff_val[table_num][num_used_syms - 1 - i] = static_cast(pSyms[i].m_sym_index - 1); -} - -// JPEG marker generation. 
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Google Translate Client 6.0.612 Pro Key Serial Number [Extra Quality] Crack.143.md b/spaces/quidiaMuxgu/Expedit-SAM/Google Translate Client 6.0.612 Pro Key Serial Number [Extra Quality] Crack.143.md deleted file mode 100644 index 73ce638fff49c0b672cf79e29a1665826133c86b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Google Translate Client 6.0.612 Pro Key Serial Number [Extra Quality] Crack.143.md +++ /dev/null @@ -1,8 +0,0 @@ -

        Google Translate Client 6.0.612 pro key serial number crack.143


        Download File ✵✵✵ https://geags.com/2uCq1V



        -
                  -Google Translate Client 6.0.612 Pro Key Serial Number Crack.143. Download | Watch -Google Translate Client 6.0 8a78ff9644
          
        -
        -
        -

        diff --git a/spaces/rachana219/MODT2/track.py b/spaces/rachana219/MODT2/track.py deleted file mode 100644 index 9cb1c94af8e41743c07085dbf892de512f44f13d..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/track.py +++ /dev/null @@ -1,399 +0,0 @@ -import argparse -import cv2 -import os -# limit the number of cpus used by high performance libraries -os.environ["OMP_NUM_THREADS"] = "1" -os.environ["OPENBLAS_NUM_THREADS"] = "1" -os.environ["MKL_NUM_THREADS"] = "1" -os.environ["VECLIB_MAXIMUM_THREADS"] = "1" -os.environ["NUMEXPR_NUM_THREADS"] = "1" - -import sys -import platform -import numpy as np -from pathlib import Path -import torch -import torch.backends.cudnn as cudnn - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # yolov5 strongsort root directory -WEIGHTS = ROOT / 'weights' - -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -if str(ROOT / 'yolov8') not in sys.path: - sys.path.append(str(ROOT / 'yolov8')) # add yolov5 ROOT to PATH -if str(ROOT / 'trackers' / 'strongsort') not in sys.path: - sys.path.append(str(ROOT / 'trackers' / 'strongsort')) # add strong_sort ROOT to PATH - -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -import logging - -from ultralytics.nn.autobackend import AutoBackend -from ultralytics.yolo.data.dataloaders.stream_loaders import LoadImages, LoadStreams -from ultralytics.yolo.data.utils import IMG_FORMATS, VID_FORMATS -from ultralytics.yolo.utils import DEFAULT_CFG, LOGGER, SETTINGS, callbacks, colorstr, ops -from ultralytics.yolo.utils.checks import check_file, check_imgsz, check_imshow, print_args, check_requirements -from ultralytics.yolo.utils.files import increment_path -from ultralytics.yolo.utils.torch_utils import select_device -from ultralytics.yolo.utils.ops import Profile, non_max_suppression, scale_boxes, process_mask, process_mask_native -from ultralytics.yolo.utils.plotting import Annotator, colors, save_one_box - - -from trackers.multi_tracker_zoo import create_tracker - - -@torch.no_grad() -def run( - source='0', - yolo_weights=WEIGHTS / 'yolov5m.pt', # model.pt path(s), - reid_weights=WEIGHTS / 'osnet_x0_25_msmt17.pt', # model.pt path, - tracking_method='strongsort', - tracking_config=None, - imgsz=(640, 640), # inference size (height, width) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - show_vid=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - save_trajectories=False, # save trajectories for each track - save_vid=True, # save confidences in --save-txt labels - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - #project=ROOT / 'runs' / 'track', # save results to project/name - project=ROOT ,# save results to project/name - name='exp', # save results to project/name - exist_ok=True, # existing project/name ok, do not increment - line_thickness=2, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - hide_class=False, # hide IDs - half=False, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - vid_stride=1, # video frame-rate stride - retina_masks=False, -): - #print the inputs - print(f"model used : {yolo_weights}, tracking method : {tracking_method}") - - source = str(source) - save_img = not nosave and not source.endswith('.txt') # save inference images - is_file = Path(source).suffix[1:] in (VID_FORMATS) - is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file) - if is_url and is_file: - source = check_file(source) # download - - # Directories - if not isinstance(yolo_weights, list): # single yolo model - exp_name = yolo_weights.stem - elif type(yolo_weights) is list and len(yolo_weights) == 1: # single models after --yolo_weights - exp_name = Path(yolo_weights[0]).stem - else: # multiple models after --yolo_weights - exp_name = 'ensemble' - exp_name = name if name else exp_name + "_" + reid_weights.stem - save_dir = increment_path(Path(project) / exp_name, exist_ok=exist_ok) # increment run - (save_dir / 'tracks' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - device = select_device(device) - is_seg = '-seg' in str(yolo_weights) - - - model = AutoBackend(yolo_weights, device=device, dnn=dnn, fp16=half) - stride, names, pt = model.stride, model.names, model.pt - imgsz = check_imgsz(imgsz, stride=stride) # check image size - - # Dataloader - bs = 1 - if webcam: - show_vid = check_imshow(warn=True) - dataset = LoadStreams( - source, - imgsz=imgsz, - stride=stride, - auto=pt, - transforms=getattr(model.model, 'transforms', None), - vid_stride=vid_stride - ) - bs = len(dataset) - else: - dataset = LoadImages( - source, - imgsz=imgsz, - stride=stride, - auto=pt, - transforms=getattr(model.model, 'transforms', None), - vid_stride=vid_stride - ) - vid_path, vid_writer, txt_path = [None] * bs, [None] * bs, [None] * bs - model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup - - # Create as many strong sort instances as there are video sources - tracker_list = [] - for i in range(bs): - tracker = create_tracker(tracking_method, tracking_config, reid_weights, device, half) - tracker_list.append(tracker, ) - if hasattr(tracker_list[i], 'model'): - if hasattr(tracker_list[i].model, 'warmup'): - tracker_list[i].model.warmup() - outputs = [None] * bs - - # Run tracking - #model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup - seen, windows, dt = 0, [], (Profile(), 
Profile(), Profile(), Profile()) - curr_frames, prev_frames = [None] * bs, [None] * bs - for frame_idx, batch in enumerate(dataset): - path, im, im0s, vid_cap, s = batch - visualize = increment_path(save_dir / Path(path[0]).stem, mkdir=True) if visualize else False - with dt[0]: - im = torch.from_numpy(im).to(device) - im = im.half() if half else im.float() # uint8 to fp16/32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - if len(im.shape) == 3: - im = im[None] # expand for batch dim - - # Inference - with dt[1]: - preds = model(im, augment=augment, visualize=visualize) - - # Apply NMS - with dt[2]: - if is_seg: - masks = [] - p = non_max_suppression(preds[0], conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det, nm=32) - proto = preds[1][-1] - else: - p = non_max_suppression(preds, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) - - # Process detections - filename = 'out.mp4' - for i, det in enumerate(p): # detections per image - seen += 1 - if webcam: # bs >= 1 - p, im0, _ = path[i], im0s[i].copy(), dataset.count - p = Path(p) # to Path - s += f'{i}: ' - txt_file_name = p.name - save_path = str(save_dir / filename) # im.jpg, vid.mp4, ... - - else: - p, im0, _ = path, im0s.copy(), getattr(dataset, 'frame', 0) - p = Path(p) # to Path - # video file - if source.endswith(VID_FORMATS): - txt_file_name = p.stem - save_path = str(save_dir / filename) # im.jpg, vid.mp4, ... - LOGGER.info(f"p.name is {p.name}, save_path value is {save_path}") - # folder with imgs - else: - txt_file_name = p.parent.name # get folder name containing current img - save_path = str(save_dir / p.parent.name) # im.jpg, vid.mp4, ... - curr_frames[i] = im0 - - txt_path = str(save_dir / 'tracks' / txt_file_name) # im.txt - s += '%gx%g ' % im.shape[2:] # print string - imc = im0.copy() if save_crop else im0 # for save_crop - - annotator = Annotator(im0, line_width=line_thickness, example=str(names)) - - if hasattr(tracker_list[i], 'tracker') and hasattr(tracker_list[i].tracker, 'camera_update'): - if prev_frames[i] is not None and curr_frames[i] is not None: # camera motion compensation - tracker_list[i].tracker.camera_update(prev_frames[i], curr_frames[i]) - - if det is not None and len(det): - if is_seg: - shape = im0.shape - # scale bbox first the crop masks - if retina_masks: - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], shape).round() # rescale boxes to im0 size - masks.append(process_mask_native(proto[i], det[:, 6:], det[:, :4], im0.shape[:2])) # HWC - else: - masks.append(process_mask(proto[i], det[:, 6:], det[:, :4], im.shape[2:], upsample=True)) # HWC - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], shape).round() # rescale boxes to im0 size - else: - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() # rescale boxes to im0 size - - # Print results - for c in det[:, 5].unique(): - n = (det[:, 5] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # pass detections to strongsort - with dt[3]: - outputs[i] = tracker_list[i].update(det.cpu(), im0) - - # draw boxes for visualization - if len(outputs[i]) > 0: - - if is_seg: - # Mask plotting - annotator.masks( - masks[i], - colors=[colors(x, True) for x in det[:, 5]], - im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() / - 255 if retina_masks else im[i] - ) - - for j, (output) in enumerate(outputs[i]): - - bbox = output[0:4] - id = output[4] - cls = output[5] - conf = output[6] - - if save_txt: - # to MOT format - bbox_left = 
output[0] - bbox_top = output[1] - bbox_w = output[2] - output[0] - bbox_h = output[3] - output[1] - # Write MOT compliant results to file - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left, # MOT format - bbox_top, bbox_w, bbox_h, -1, -1, -1, i)) - - if save_vid or save_crop or show_vid: # Add bbox/seg to image - c = int(cls) # integer class - id = int(id) # integer id - label = None if hide_labels else (f'{id} {names[c]}' if hide_conf else \ - (f'{id} {conf:.2f}' if hide_class else f'{id} {names[c]} {conf:.2f}')) - color = colors(c, True) - annotator.box_label(bbox, label, color=color) - - if save_trajectories and tracking_method == 'strongsort': - q = output[7] - tracker_list[i].trajectory(im0, q, color=color) - if save_crop: - txt_file_name = txt_file_name if (isinstance(path, list) and len(path) > 1) else '' - save_one_box(np.array(bbox, dtype=np.int16), imc, file=save_dir / 'crops' / txt_file_name / names[c] / f'{id}' / f'{p.stem}.jpg', BGR=True) - - else: - pass - #tracker_list[i].tracker.pred_n_update_all_tracks() - - # Stream results - im0 = annotator.result() - if show_vid: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) - cv2.imshow(str(p), im0) - if cv2.waitKey(1) == ord('q'): # 1 millisecond - exit() - - # Save results (image with detections) - if save_vid: - LOGGER.info(f"vid_path, save_path {vid_path[i]}{save_path}") - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[i].release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos - LOGGER.info(f"test Results saved to {colorstr('bold', save_path)}") - vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer[i].write(im0) - - prev_frames[i] = curr_frames[i] - - # Print total time (preprocessing + inference + NMS + tracking) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{sum([dt.dt for dt in dt if hasattr(dt, 'dt')]) * 1E3:.1f}ms") - - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS, %.1fms {tracking_method} update per image at shape {(1, 3, *imgsz)}' % t) - if save_txt or save_vid: - s = f"\n{len(list((save_dir / 'tracks').glob('*.txt')))} tracks saved to {save_dir / 'tracks'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - if update: - strip_optimizer(yolo_weights) # update model (to fix SourceChangeWarning) - - -def parse_opt(): - parser = argparse.ArgumentParser() - #parser.add_argument('--yolo-weights', nargs='+', type=Path, default=WEIGHTS / 'yolov8s-seg.pt', help='model.pt path(s)') - parser.add_argument('--reid-weights', type=Path, default=WEIGHTS / 'osnet_x0_25_msmt17.pt') - #parser.add_argument('--tracking-method', type=str, default='bytetrack', help='strongsort, ocsort, bytetrack') - parser.add_argument('--tracking-config', type=Path, default=None) - #parser.add_argument('--source', type=str, default='0', 
help='file/dir/URL/glob, 0 for webcam') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--conf-thres', type=float, default=0.5, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.5, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--show-vid', action='store_true', help='display tracking video results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') - parser.add_argument('--save-trajectories', action='store_true', help='save trajectories for each track') - parser.add_argument('--save-vid', action='store_true',default=True, help='save video tracking results') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - # class 0 is person, 1 is bycicle, 2 is car... 79 is oven - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--visualize', action='store_true', help='visualize features') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default=ROOT , help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to ROOT') - parser.add_argument('--exist-ok', default='True', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--line-thickness', default=2, type=int, help='bounding box thickness (pixels)') - parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') - parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') - parser.add_argument('--hide-class', default=False, action='store_true', help='hide IDs') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') - parser.add_argument('--retina-masks', action='store_true', help='whether to plot masks in native resolution') - #opt = parser.parse_args() - #opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - #opt.tracking_config = ROOT / 'trackers' / opt.tracking_method / 'configs' / (opt.tracking_method + '.yaml') - #print_args(vars(opt)) - #return opt - return parser - - -def main(opt): - check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop')) - run(**vars(opt)) - - -#if __name__ == "__main__": -# opt = parse_opt() -# main(opt) - -def MOT(yoloweights, trackingmethod, sourceVideo): - parser = parse_opt() - parser.add_argument('--yolo-weights', nargs='+', type=Path, default= yoloweights, help='model.pt path(s)') - parser.add_argument('--tracking-method', type=str, default= trackingmethod, help='strongsort, ocsort, bytetrack') - 
parser.add_argument('--source', type=str, default=sourceVideo, help='file/dir/URL/glob, 0 for webcam') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - opt.tracking_config = ROOT / 'trackers' / opt.tracking_method / 'configs' / (opt.tracking_method + '.yaml') - print_args(vars(opt)) - main(opt) - save_dir = increment_path('exp', exist_ok=True) - input = os.path.join(save_dir,'out.mp4') - outpath = 'output.mp4' #'output/'+ 'output.mp4' - if os.path.exists(outpath): - os.remove(outpath) - - command = f"ffmpeg -i {input} -vf fps=30 -vcodec libx264 {outpath}" - print(command) - os.system(command) - return outpath \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions What you can do with the versatile and expressive instrument.md b/spaces/raedeXanto/academic-chatgpt-beta/Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions What you can do with the versatile and expressive instrument.md deleted file mode 100644 index 1219224aab0c8cce5b457c0251b9c65cdc7ca19f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions What you can do with the versatile and expressive instrument.md +++ /dev/null @@ -1,121 +0,0 @@ - -

        Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions

        -

        Introduction

        -

        If you are a fan of analog synthesizers, you probably know about the Oberheim SEM, one of the most iconic and influential synths ever made. The SEM stands for Synthesizer Expander Module, and it was designed by Tom Oberheim in the late 1970s as a companion module for other synths and sequencers. The SEM had a unique sound thanks to its two oscillators, multimode filter, and dual envelope generators. It was also one of the first synths to offer polyphony by combining multiple SEMs into 2, 4, or 8-voice systems.

        -

        Today, you can experience the legendary sound of the Oberheim SEM in a software format, thanks to Arturia, a leading company in the field of music software and hardware. Arturia has recreated the SEM in stunning detail using their True Analog Emulation technology, which captures the warmth and character of the original hardware. Arturia has also added some modern features and effects to make the SEM even more versatile and powerful.

        -

        Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions


                  Download https://tinourl.com/2uL0Kw
          



        -

                  However, there is a catch: Arturia Oberheim SEM V is not free software. You have to pay $149 to buy it from Arturia's official website, or subscribe to their V Collection bundle for $199 per year. That's quite a lot of money for some people, especially if you are on a tight budget or just want to try it out for fun.
          

        -

        That's why some people resort to using cracked versions of Arturia Oberheim SEM V, which are illegally distributed on various websites and torrent platforms. These cracked versions claim to offer the same functionality as the original software, but without any cost or registration required.

        -

        But are these cracked versions really worth it? How do they work? And what are the risks involved? In this article, we will answer these questions and more. We will also show you how to download and install Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions, which are the latest available versions as of May 2023.

        -

        What is Arturia Oberheim SEM V?

        -

        Arturia Oberheim SEM V is a software emulation of the Oberheim SEM synthesizer, which was released by Arturia in 2011 as part of their Analog Classics series. It is compatible with both Windows and Mac operating systems, and it can run as a standalone application or as a plug-in in most DAWs (Digital Audio Workstations).

        -

          

        -

        Arturia Oberheim SEM V faithfully reproduces all the original parameters of the Oberheim SEM, such as:

        -
          -
        • Two oscillators, each offering sawtooth wave and variable-width pulse wave with PWM (Pulse Width Modulation)
        • -
        • Sine wave LFO (Low Frequency Oscillator) with sync option
        • -
        • 12dB/oct multimode filter with low-pass, high-pass, band-pass, and notch modes
        • -
        • Two ADS (Attack Decay Sustain) envelope generators
        • -
        • Mixer section with volume control for each oscillator, sub-oscillator, and noise source
        • -
        -

        In addition, Arturia Oberheim SEM V adds some new features and enhancements that make it more flexible and expressive than the original hardware, such as:

        -
          -
        • Polyphony up to 32 voices and 8-voice multitimbrality
        • -
        • Sub-oscillator one or two octaves below the main oscillators for extra low-end
        • -
        • Extra LFO with multiple waveforms and destinations
        • -
        • Modulation matrix with 8 slots for routing various sources to various destinations
        • -
        • Built-in arpeggiator with random mode, hold function, and up to four-octave range
        • -
        • Built-in effects section with overdrive, chorus, and delay
        • -
        • Over 500 presets created by professional sound designers
        • -
        -

        With these features, Arturia Oberheim SEM V can create a wide range of sounds, from classic analog leads, basses, pads, brass, strings, and FX to modern digital sounds that take advantage of the polyphony and modulation possibilities.

        -

        Why use a cracked version?

        -

        A cracked version is a modified version of a piece of software that bypasses its copy protection or activation mechanism. This means that you can use the software without paying for it or registering it with the developer.

        -

        The main reason why some people use cracked versions of software is obvious: they want to save money. Buying software can be expensive, especially if you want to use multiple products or upgrade them regularly. Cracked versions offer a way to access software for free or at a very low cost.

        -

        Another reason why some people use cracked versions of software is curiosity. They want to try out different products or features without committing to them or spending too much time on them. Cracked versions offer a way to experiment with software without any limitations or obligations.

        -

        How to download and install Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions?

        -

        If you are interested in using a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC, here are the steps you need to follow:

        -
          -
        1. Go to one of the websites that offer cracked versions of Arturia Oberheim SEM V v1.1.2 PC and MAC, such as crackzsoft.me, vstcrack.net, or plugincrack.com. Be careful when visiting these websites as they may contain malware or viruses that can harm your computer or data.
        2. -
        3. Download the file that contains the cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC. The file may be compressed in a ZIP or RAR format, so you will need a program like WinRAR or 7-Zip to extract it.
        4. -
        5. Extract the file to a folder on your computer.
        6. -
        7. Run the setup.exe file (for Windows) or the install.pkg file (for Mac) to install Arturia Oberheim SEM V v1.1.2 PC and MAC on your computer.
        8. -
        9. Follow the instructions on the screen to complete the installation process.
        10. -
        11. Copy the crack file (usually named arturiasemv.dll or arturiasemv.component) from the crack folder to the installation folder where you installed Arturia Oberheim SEM V v1.1.2 PC and MAC.
        12. -
        13. Paste and replace the original file with the crack file.
        14. -
        15. You have successfully installed Arturia Oberheim SEM V v1.1.2 PC and MAC cracked version on your computer.
        16. -
        17. You can now launch Arturia Oberheim SEM V v1.1.2 PC and MAC as a standalone application or as a plug-in in your DAW.
        18. -
        -

        Features of Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions

        -

        The features of the Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions are the same as those of the original software, as they are based on the same code and files. However, there may be some differences or issues that arise from using a cracked version, such as:

        -
          -
        • Lack of updates or technical support from Arturia. If you use a cracked version, you will not be able to receive any updates or bug fixes from Arturia, which may affect the performance or compatibility of the software. You will also not be able to contact Arturia for any questions or problems you may have with the software.
        • -
        • Risk of malware or viruses. As mentioned earlier, the websites that offer cracked versions of software may contain malicious software that can infect your computer or data. You should always scan the files you download with an antivirus program before opening them.
        • -
        • Risk of legal consequences. Using a cracked version of software is illegal and unethical, as it violates the intellectual property rights of the developer. You may face legal action or penalties if you are caught using a cracked version of software.
        • -
        -

        Pros and cons of Arturia Oberheim SEM V v1.1.2 PC and MAC cracked versions

        -

        Pros

        -
          -
        • Free to use. You don't have to pay anything to use a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC.
        • -
        • Full access to all features and functions. You can use all the features and functions of Arturia Oberheim SEM V v1.1.2 PC and MAC without any restrictions or limitations.
        • -
        • Compatible with most DAWs and MIDI controllers. You can use Arturia Oberheim SEM V v1.1.2 PC and MAC as a plug-in in most DAWs, such as Ableton Live, FL Studio, Logic Pro, Cubase, etc. You can also use it with most MIDI controllers, such as keyboards, pads, knobs, etc.
        • -
        -

        Cons

        -
          -
        • Illegal and unethical. Using a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC is against the law and the moral principles of the music industry. You are stealing from the developer who spent time and money to create the software.
        • -
        • Risky for your computer and data security. Using a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC may expose your computer or data to malware or viruses that can damage them or compromise your privacy.
        • -
        • No updates or technical support from Arturia. Using a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC means that you will not receive any updates or technical support from Arturia, which may affect the quality or compatibility of the software.
        • -
        -

        Conclusion

        -

        Arturia Oberheim SEM V v1.1.2 PC and MAC is a great software emulation of the Oberheim SEM synthesizer, which offers authentic analog sound and modern features and effects. However, using a cracked version of Arturia Oberheim SEM V v1.1.2 PC and MAC is not recommended, as it is illegal, unethical, risky, and unsupported.

        -

        If you want to use Arturia Oberheim SEM V v1.1.2 PC and MAC, you should buy it from their official website or subscribe to their V Collection bundle, which will give you access to many other amazing software instruments from Arturia.

        -

        By doing so, you will support the developer who created the software, enjoy a better user experience with updates and technical support, and avoid any legal or security issues that may arise from using a cracked version.

        -

        FAQs

        -

        Q: What is the difference between Arturia Oberheim SEM V and SEM V2?

        -

        A: Arturia Oberheim SEM V is the original version of the software emulation of the Oberheim SEM synthesizer, which was released in 2011. Arturia Oberheim SEM V2 is an updated version of the software emulation of the Oberheim SEM synthesizer, which was released in 2017 as part of their V Collection 6 bundle.

        -

        The main difference between Arturia Oberheim SEM V and SEM V2 is that SEM V2 has a new user interface that is more intuitive and user-friendly than SEM V's interface. SEM V2 also has some minor improvements in sound quality and performance.

        -

        Q: How can I get a free trial of Arturia Oberheim SEM V?

        -

        A: You can get a free trial of Arturia Oberheim SEM V by visiting their official website and creating an account with them. Then, you can download and install Arturia Oberheim SEM V on your computer and use it for 20 minutes per session for 15 days.

        -

        Q: How can I update my Arturia Oberheim SEM V?

        -

        A: You can update your Arturia Oberheim SEM V by using their Arturia Software Center application, which you can download from their official website. The application will automatically detect any updates available for your Arturia products and let you install them with one click.

        -

        Q: How can I contact Arturia for technical support?

        -

        A: You can contact Arturia for technical support by visiting their official website and filling out their support form with your details and issue description. You can also check their online manuals and tutorials for more information on how to use their products.

        -

        Q: How can I uninstall Arturia Oberheim SEM V?

        -

        A: You can uninstall Arturia Oberheim SEM V by using their uninstaller program, which you can find in the installation folder where you installed Arturia Oberheim SEM V on your computer.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bigfile.002.tiger How to Optimize Your PC Performance for These Games.md b/spaces/raedeXanto/academic-chatgpt-beta/Bigfile.002.tiger How to Optimize Your PC Performance for These Games.md deleted file mode 100644 index 4e5890ef1059b68cac0fc1c8f85a6be12b4468c9..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Bigfile.002.tiger How to Optimize Your PC Performance for These Games.md +++ /dev/null @@ -1,99 +0,0 @@ - -

        Hand2note Poker Office 5 Crack: What You Need to Know

        -

        If you are a serious poker player who wants to improve your game and gain an edge over your opponents, you might have heard of Hand2note, a powerful and advanced piece of poker software that can help you analyze your own and your opponents' play, customize your HUD, and make better decisions at the table. But you might also wonder if you can get Hand2note for free or if there is a crack for the full version. In this article, we will answer these questions and give you some useful information about Hand2note.

        -

        How to Get Hand2note for Free

        -

        The good news is that you can easily use Hand2note without paying anything. The software is completely free at any stakes, with just some limitations on functionality, and the paid features are not really important at the beginning. So all you need to do to start using Hand2note for free is go to the official website and download and install it.

        -

        Hand2note Poker Office 5 Crack


        Downloadhttps://tinourl.com/2uL3Lu



        -

        Please note that you can also get a free HUD. Thus, you don’t have to spend any money to start using the most advanced poker software.

        -

        What are the Limitations of the Free Version?

        -

        The most advanced features of Hand2note such as positional and dynamic HUD, Range Research, Decision Analysis, and Extended popup on stat are not available on a free version. But even with these restrictions, Hand2note is way better than Holdem Manager and Poker Tracker. So if you can get the best tracker for free, you should definitely do it, right?

        -

        Hand2note Poker Office 5 license key
        -Hand2note Poker Office 5 activation code
        -Hand2note Poker Office 5 serial number
        -Hand2note Poker Office 5 free download
        -Hand2note Poker Office 5 full version
        -Hand2note Poker Office 5 torrent
        -Hand2note Poker Office 5 patch
        -Hand2note Poker Office 5 keygen
        -Hand2note Poker Office 5 cracked software
        -Hand2note Poker Office 5 registration code
        -Hand2note Poker Office 5 product key
        -Hand2note Poker Office 5 crack download
        -Hand2note Poker Office 5 crack file
        -Hand2note Poker Office 5 crack only
        -Hand2note Poker Office 5 crack mac
        -Hand2note Poker Office 5 crack windows
        -Hand2note Poker Office 5 crack reddit
        -Hand2note Poker Office 5 crack forum
        -Hand2note Poker Office 5 crack online
        -Hand2note Poker Office 5 crack generator
        -Hand2note Poker Office 5 crack no survey
        -Hand2note Poker Office 5 crack no password
        -Hand2note Poker Office 5 crack no virus
        -Hand2note Poker Office 5 crack working
        -Hand2note Poker Office 5 crack latest version
        -How to crack Hand2note Poker Office 5
        -How to install Hand2note Poker Office 5 crack
        -How to use Hand2note Poker Office 5 crack
        -How to get Hand2note Poker Office 5 crack
        -How to download Hand2note Poker Office 5 crack
        -Where to find Hand2note Poker Office 5 crack
        -Where to download Hand2note Poker Office 5 crack
        -Where to buy Hand2note Poker Office 5 crack
        -Is Hand2note Poker Office 5 crack safe
        -Is Hand2note Poker Office 5 crack legal
        -Is Hand2note Poker Office 5 crack legit
        -Is Hand2note Poker Office 5 crack worth it
        -Is Hand2note Poker Office 5 crack real
        -Does Hand2note Poker Office 5 crack work
        -Does Hand2note Poker Office 5 crack exist
        -What is Hand2note Poker Office 5 crack
        -What does Hand2note Poker Office 5 crack do
        -What are the benefits of using Hand2note Poker Office 5 crack
        -What are the risks of using Hand2note Poker Office 5 crack
        -What are the alternatives to using Hand2note Poker Office 5 crack
        -Why use Hand2note Poker Office 5 crack
        -Why not use Hand2note Poker Office 5 crack
        -When to use Hand2note Poker Office 5 crack
        -When not to use Hand2note Poker Office 5 crack

        -

        What are the Benefits of the Paid Version?

        -

        However, it is worth noting that the paid version of Hand2note provides you with the most advanced features, so after you have used the free version, you can think about buying it. With the help of these tools, you can significantly improve your game and gain an advantage over your opponents. Here are some of the benefits of the paid version:

        -
          -
        • You can customize your HUD according to different positions, stack sizes, bet sizes, board textures, etc.
        • -
        • You can use Range Research to analyze any range of hands in any situation and find optimal strategies.
        • -
        • You can use Decision Analysis to review your own decisions and find leaks in your game.
        • -
        • You can use Extended popup on stat to see detailed statistics on any player or situation.
        • -
        -

        How to Adjust to the New Software?

        -

        The Hand2note software can be pretty complicated, but it is well worth studying. First of all, take a look at the official manual. It is also completely free and contains a description of all the functions. If you find it too long and complicated, consider poker training videos and personal coaching offers. They will help you learn Hand2note quickly and efficiently.

        -

        Is There a Crack for the Full Version?

        -

        The bad news is that there is no crack for the full version of Hand2note. And even if there were one, we would not recommend using it. Why? Because using a crack is risky, illegal, and unethical. Let us explain.

        -

        The Risks of Using a Crack

        -

        Using a crack means downloading an unofficial version of Hand2note from an untrusted source. This can expose you to several risks:

        -
          -
        • You can get infected by malware that can harm your computer or steal your personal information.
        • -
        • You can lose your data or damage your database if the crack is incompatible with your system or has bugs.
        • -
        • You can get banned by poker sites or authorities if they detect that you are using illegal software.
        • -
        • You can miss out on updates, support, and security patches that are only available to licensed users.
        • -
        -

        The Benefits of Buying a License

        -

        Buying a license means supporting the developers who have invested their time and money into creating Hand2note. This can bring you several benefits:

        -
          -
        • You can enjoy all the features of Hand2note without any limitations or restrictions.
        • -
        • You can get access to updates, support, and security patches that are only available to licensed users.
        • -
        • You can avoid legal issues or ethical concerns that come with using an illegal software.
        • -
        • You can feel satisfied that you have contributed to the development of Hand2note and helped it grow.
        • -
        -

        Conclusion

        -

        Hand2note is the most powerful and advanced poker software and can help you improve your game and gain an edge over your opponents. You can use it for free at any stakes with some limitations on functionality, or buy a license to unlock all its features. There is no crack for the full version, and we do not recommend using one because it is risky, illegal, and unethical. Instead, we suggest you try out Hand2note for yourself and see how it works for you. You will not regret it!

        -

        FAQs

        -
          -
        1. What is Hand2note?
          Hand2note is a powerful and advanced piece of poker software that can help you analyze your own and your opponents' play, customize your HUD, and make better decisions at the table.
        2. -
        3. How much does Hand2note cost?
          Hand2note is free for any stakes with some limitations on functionality. You can also buy a license for $29.95 (PRO) or $69.95 (EDGE) per month or get discounts for longer periods.
        4. -
        5. How do I install Hand2note?
          You can download and install Hand2note from the official website. You will need Windows 7 or higher and .NET Framework 4.6 or higher.
        6. -
        7. How do I learn Hand2note?
          You can learn Hand2note by reading the official manual, watching training videos, or getting personal coaching.
        8. -
        9. Is there a crack for Hand2note?
          No, there is no crack for Hand2note and we do not recommend using one because it is risky, illegal, and unethical.
        10. -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/BikeCAD Pro 4shared.zip Full UPD.md b/spaces/raedeXanto/academic-chatgpt-beta/BikeCAD Pro 4shared.zip Full UPD.md deleted file mode 100644 index 836e971cfd5d6acecc72b4cb68fceb15be54ab24..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/BikeCAD Pro 4shared.zip Full UPD.md +++ /dev/null @@ -1,85 +0,0 @@ -
        - - -
        -

        BikeCAD Pro 4shared.zip Full: What Is It and How to Use It

        -

        Introduction

        -

        If you are a bicycle enthusiast, a frame builder, or a fit specialist, you might have heard of BikeCAD Pro, a powerful piece of software that allows you to design and customize any diamond frame bike. BikeCAD Pro is a standalone application that runs on Windows, Mac, and Linux, and has many features that are not available in the free version of BikeCAD, such as importing photos, generating multi-model PDFs, exporting miter templates, and more.

        -

        BikeCAD Pro 4shared.zip Full


        Download Filehttps://tinourl.com/2uL4jo



        -

        But what if you don't want to pay $750 for BikeCAD Pro? What if you want to try it out for free or share it with your friends? That's where 4shared comes in. 4shared is a popular file sharing and storage service that allows you to upload, download, and share files without any hassle. You can access 4shared from your web browser or from your mobile device, and enjoy 15 GB of free storage space.

        -

        One of the files that you can find on 4shared is BikeCAD Pro 4shared.zip full, which is a compressed file that contains the full version of BikeCAD Pro. This file has been uploaded by someone who has bought BikeCAD Pro and decided to share it with others. By downloading this file, you can install BikeCAD Pro on your computer without paying anything.

        -

        But how do you download BikeCAD Pro 4shared.zip full? How do you make sure that it is safe and reliable? And how do you use BikeCAD Pro once you have installed it? In this article, we will answer these questions and more. We will show you how to download BikeCAD Pro 4shared.zip full from 4shared.com, how to unzip it and install it on your computer, and how to use it to design your own bike frame. Let's get started! -

        How to Download BikeCAD Pro 4shared.zip Full

        -

        The first step is to find and access the file on 4shared.com. There are two ways to do this: either by searching for it on the website or by following a direct link provided by someone who has already downloaded it. Here are the steps for both methods:

        -
          -
        • Searching for the file on 4shared.com: Go to www.4shared.com and type "BikeCAD Pro" in the search box. You will see a list of results that match your query. Look for the file that has the name "BikeCAD Pro 4shared.zip full" and the size of 18.9 MB. Click on the file name to open its page.
        • -
        • Following a direct link to the file: If someone has already downloaded the file and shared the link with you, you can simply click on the link to open the file page on 4shared.com. For example, here is a link that we have found online: https://www.4shared.com/zip/8QZlZbL7iq/BikeCAD_Pro_4sharedzip_full.html. Note that this link may not work in the future, as the file owner may delete it or change its settings.
        • -
        -

        Once you have opened the file page on 4shared.com, you will see some information about the file, such as its name, size, type, date, and description. You will also see a preview of the file contents, which should show you a folder named "BikeCAD Pro" and some files inside it. Here is a screenshot of what it looks like:

        - BikeCAD Pro 4shared.zip full preview -

        Before you download the file, you should verify its size, quality, and safety. Here are some tips to do that:

        -
          -
        • Check the file size: The file size should be 18.9 MB, as shown on the file page. If it is much smaller or larger than that, it may be a fake or corrupted file. You can also compare the file size with the original BikeCAD Pro installer, which is 19.1 MB.
        • -
        • Check the file quality: The file quality should be high, as it contains the full version of BikeCAD Pro. You can check the file quality by looking at the preview of the file contents and seeing if they match with the original BikeCAD Pro files. You can also read the comments and ratings from other users who have downloaded the file and see if they have any complaints or issues.
        • -
        • Check the file safety: The file safety should be high, as it should not contain any viruses or malware. You can check the file safety by scanning it with your antivirus software before opening it. You can also use online tools such as VirusTotal or Metadefender to scan the file URL and see if it has any malicious detections.
        • -
        -

        After you have verified the file size, quality, and safety, you are ready to download it. There are two ways to download the file from 4shared.com: either by using your web browser or by using a download manager. Here are the steps for both methods:

        -
          -
        • Downloading the file using your web browser: Click on the green "Download" button on the top right corner of the file page. You will see a pop-up window that asks you to sign in or create an account on 4shared.com. You can either do that or skip it by clicking on "Download for free with Ads". You will then see another pop-up window that shows you a countdown timer and some ads. Wait for a few seconds until the timer reaches zero and then click on "Download". You will then see a dialog box that asks you to save the file on your computer. Choose a location where you want to save the file and click on "Save". The download will start and you can monitor its progress on your browser.
        • -
        • Downloading the file using a download manager: A download manager is a software that helps you download files faster and easier. Some examples of download managers are Internet Download Manager, Free Download Manager, and JDownloader. To use a download manager, you need to install it on your computer first and then integrate it with your web browser. Then, when you click on the green "Download" button on the file page, you will see a dialog box that asks you to confirm the download with your download manager. Click on "OK" and the download will start and you can monitor its progress on your download manager.
        • -
        -

        After you have downloaded the file, you need to unzip it and install BikeCAD Pro on your computer. Here are the steps to do that:

        -

        -
          -
        • Unzipping the file: To unzip the file, you need a software that can extract compressed files, such as WinZip, WinRAR, or 7-Zip. To use one of these software, you need to install it on your computer first and then associate it with the ZIP file type. Then, when you right-click on the file, you will see an option to extract it. Choose a location where you want to extract the file and click on "Extract". You will then see a folder named "BikeCAD Pro" and some files inside it.
        • -
        • Installing BikeCAD Pro: To install BikeCAD Pro, you need to run the file named "BikeCADPro.exe" inside the folder. Double-click on the file and you will see a welcome screen that asks you to agree to the terms and conditions of BikeCAD Pro. Click on "I Agree" and then click on "Next". You will then see a screen that asks you to choose a destination folder for BikeCAD Pro. You can either keep the default folder or browse for another one. Click on "Next" and then click on "Install". The installation will start and you will see a progress bar. When the installation is complete, click on "Finish". You will then see a shortcut icon for BikeCAD Pro on your desktop.
        • -
        -

        How to Use BikeCAD Pro

        -

        Now that you have installed BikeCAD Pro on your computer, you can start using it to design and customize your own bike frame. Here are the steps to do that:

        -
          -
        • Launching and configuring BikeCAD Pro: To launch BikeCAD Pro, double-click on the shortcut icon on your desktop or go to the destination folder and run the file named "BikeCADPro.exe". You will see a splash screen that shows you the version number and the license information of BikeCAD Pro. Then, you will see the main window of BikeCAD Pro, which looks like this:

          - BikeCAD Pro main window -

          The main window of BikeCAD Pro consists of four parts: the menu bar, the tool bar, the design area, and the status bar. The menu bar contains various options for file, edit, view, model, dimensions, fit advisor, paint scheme, components, analysis, tools, help, and more. The tool bar contains buttons for zooming, panning, rotating, measuring, selecting, moving, copying, deleting, undoing, redoing, and more. The design area shows you a 3D view of your bike frame and its dimensions. The status bar shows you some information about your bike frame, such as its weight, center of mass, trail, wheelbase, etc.

          -

          Before you start designing your bike frame, you may want to configure some settings for BikeCAD Pro. To do that, go to the menu bar and click on "Tools" and then "Options". You will see a dialog box that allows you to change some preferences for BikeCAD Pro, such as language, units, display, graphics, sound, and more. You can also customize the keyboard shortcuts for BikeCAD Pro by clicking on "Tools" and then "Keyboard". After you have made your changes, click on "OK" to save them.

        • -
        • Designing and customizing your own bike frame using BikeCAD Pro: To design your own bike frame using BikeCAD Pro, you need to specify some dimensions and parameters for your bike frame, such as the top tube length, the head tube angle, the seat tube angle, the chainstay length, the fork rake, and more. You can do that by using the menu bar, the tool bar, or the dimension dialog boxes. You can also use the fit advisor feature to get some suggestions for your bike frame dimensions based on your body measurements and riding style. To access the fit advisor feature, go to the menu bar and click on "Dimensions" and then "Fit Advisor". You will see a dialog box that asks you to enter some information about yourself and your preferences. After you have entered your information, click on "OK" and you will see some recommended dimensions for your bike frame.

          -

          As you change the dimensions and parameters of your bike frame, you will see the changes reflected in the design area. You can also change the view of your bike frame by using the zoom, pan, and rotate buttons on the tool bar. You can also switch between different views, such as front view, side view, top view, isometric view, etc. by using the view menu on the menu bar. You can also toggle between wireframe mode and solid mode by using the wireframe button on the tool bar.

          -

          Besides changing the dimensions and parameters of your bike frame, you can also customize its appearance and components using BikeCAD Pro. You can change the color and texture of your bike frame by using the paint scheme feature. To access the paint scheme feature, go to the menu bar and click on "Paint Scheme". You will see a dialog box that allows you to choose from different colors, patterns, logos, decals, etc. for your bike frame. You can also create your own custom paint scheme by using the advanced options. After you have made your choices, click on "OK" and you will see your bike frame painted according to your preferences.

          -

          You can also change the components of your bike frame by using the components feature. To access the components feature, go to the menu bar and click on "Components". You will see a dialog box that allows you to choose from different components for your bike frame, such as wheels, tires, brakes, gears, handlebars, saddle, pedals, etc. You can also adjust some settings for each component, such as size, position, angle, etc. After you have made your choices, click on "OK" and you will see your bike frame equipped with your chosen components.

          -
        • Exporting and sharing your bike frame design using BikeCAD Pro: After you have designed and customized your bike frame using BikeCAD Pro, you may want to export and share it with others. You can do that by using the export feature of BikeCAD Pro. To access the export feature, go to the menu bar and click on "File" and then "Export". You will see a dialog box that allows you to choose from different formats and options for exporting your bike frame design. Some of the formats that you can export are PDF, DXF, SVG, PNG, JPG, GIF, etc. Some of the options that you can choose are resolution, orientation, scale, etc. After you have made your choices, click on "OK" and you will see a dialog box that asks you to save the exported file on your computer. Choose a location where you want to save the file and click on "Save". The export will start and you will see a progress bar. When the export is complete, you will have a file that contains your bike frame design in the chosen format.

          -

          Once you have exported your bike frame design, you can share it with others by using different methods, such as email, social media, cloud storage, etc. You can also upload your bike frame design to the BikeCAD website and showcase it to other BikeCAD users. To do that, go to the menu bar and click on "File" and then "Upload to BikeCAD.ca". You will see a dialog box that asks you to enter some information about your bike frame design, such as title, description, tags, etc. You will also need to create an account or sign in to the BikeCAD website. After you have entered your information, click on "OK" and you will see a confirmation message that your bike frame design has been uploaded to the BikeCAD website. You can then view your bike frame design on the website and see how other users rate and comment on it.

          -

          Conclusion

          -

          In this article, we have shown you how to download BikeCAD Pro 4shared.zip full from 4shared.com, how to unzip it and install it on your computer, and how to use it to design and customize your own bike frame. We have also shown you how to export and share your bike frame design using BikeCAD Pro. We hope that you have found this article useful and informative, and that you have enjoyed using BikeCAD Pro.

          -

          BikeCAD Pro is a powerful piece of software that allows you to design and customize any diamond frame bike. It has many features that are not available in the free version of BikeCAD, such as importing photos, generating multi-model PDFs, exporting miter templates, and more. However, it also costs $750, which may be too expensive for some users. That's why some users may want to download BikeCAD Pro 4shared.zip full, which is a compressed file that contains the full version of BikeCAD Pro for free.

          -

          However, downloading BikeCAD Pro 4shared.zip full also comes with some risks and challenges. You need to make sure that the file is safe and reliable before downloading it. You also need to unzip it and install it on your computer properly. And you need to use it ethically and responsibly, without violating the terms and conditions of BikeCAD Pro or infringing the rights of its developers.

          -

          Here are some tips and tricks for using BikeCAD Pro effectively:

          -
            -
          • Learn from the tutorials and examples: BikeCAD Pro has a comprehensive help system that provides tutorials and examples for using its features. You can access the help system by clicking on "Help" on the menu bar or by pressing F1 on your keyboard. You can also visit the BikeCAD website and watch some video tutorials and read some articles about BikeCAD Pro.
          • -
          • Use the dimension dialog boxes: BikeCAD Pro has a lot of dimension dialog boxes that allow you to change the dimensions and parameters of your bike frame easily. You can access the dimension dialog boxes by clicking on "Dimensions" on the menu bar or by double-clicking on any dimension label on the design area. You can also use the arrow keys or the mouse wheel to adjust the values of each dimension.
          • -
          • Use the fit advisor feature: BikeCAD Pro has a fit advisor feature that helps you get some suggestions for your bike frame dimensions based on your body measurements and riding style. You can access the fit advisor feature by clicking on "Dimensions" on the menu bar and then "Fit Advisor". You can also use the Bike Fit Calculator on the BikeCAD website and enter your body measurements and preferences to get some fit suggestions.
          • -
          • Use the paint scheme feature: BikeCAD Pro has a paint scheme feature that allows you to change the color and texture of your bike frame. You can access the paint scheme feature by clicking on "Paint Scheme" on the menu bar. You can also use the Bike Paint Job Generator on the BikeCAD website and create your own custom paint scheme by uploading your own images and logos.
          • -
          • Use the components feature: BikeCAD Pro has a components feature that allows you to change the components of your bike frame, such as wheels, tires, brakes, gears, handlebars, saddle, pedals, etc. You can access the components feature by clicking on "Components" on the menu bar. You can also use the Bike Parts Database on the BikeCAD website and browse through thousands of bike parts from different brands and models.
          • -
          • Use the analysis feature: BikeCAD Pro has an analysis feature that allows you to perform some calculations and simulations on your bike frame, such as weight distribution, center of mass, trail, wheel flop, steering torque, etc. You can access the analysis feature by clicking on "Analysis" on the menu bar. You can also use the Bike Geometry Calculator on the BikeCAD website and compare different bike geometries and see how they affect the handling and performance of your bike.
          • -
          -

          FAQs

          -

          Here are some frequently asked questions about BikeCAD Pro and BikeCAD Pro 4shared.zip full:

          -
            -
          1. What are the system requirements for BikeCAD Pro?
          2. -

            BikeCAD Pro is a standalone application that runs on Windows, Mac, and Linux. It requires Java Runtime Environment (JRE) version 8 or higher to run. It also requires a minimum of 512 MB of RAM and 100 MB of disk space. It works best with a screen resolution of 1024 x 768 pixels or higher and a graphics card that supports OpenGL.

            -
          3. How much does BikeCAD Pro cost and how can I buy it?
          4. -

            BikeCAD Pro costs $750 USD for a single user license. You can buy it online from the BikeCAD website by using PayPal or credit card. You will receive an email with a download link and a license key after you have completed your purchase. You can also buy BikeCAD Pro from some authorized resellers in different countries.

            -
          5. What are the differences between BikeCAD Pro and the free version of BikeCAD?
          6. -

            BikeCAD Pro has many features that are not available in the free version of BikeCAD, such as importing photos, generating multi-model PDFs, exporting miter templates, importing and exporting DXF files, creating custom components, applying custom paint schemes, performing advanced analysis, and more. You can see a full comparison of BikeCAD Pro and the free version of BikeCAD on the BikeCAD website.

            -
          7. How can I update BikeCAD Pro to the latest version?
          8. -

            BikeCAD Pro is updated regularly with new features and bug fixes. You can check for updates by clicking on "Help" on the menu bar and then "Check for Updates". You will see a dialog box that tells you if there is a new version available or not. If there is a new version available, you can download it by clicking on "Download". You will then see a dialog box that asks you to save the updated file on your computer. Choose a location where you want to save the file and click on "Save". The download will start and you will see a progress bar. When the download is complete, you can run the updated file and install it over your existing version of BikeCAD Pro.

            -
          9. How can I get help and support for BikeCAD Pro?
          10. -

            BikeCAD Pro has a comprehensive help system that provides tutorials and examples for using its features. You can access the help system by clicking on "Help" on the menu bar or by pressing F1 on your keyboard. You can also visit the BikeCAD website and watch some video tutorials and read some articles about BikeCAD Pro. You can also contact the developer of BikeCAD Pro, Brent Curry, by email at brent@bikeforest.com or by phone at +1 519 760 9849. You can also join the BikeCAD forum and interact with other BikeCAD users and experts.

            -

            -

            This is the end of the article. I hope you have enjoyed reading it and learned something new. If you have any questions or comments, please feel free to leave them below. Thank you for your attention and have a great day!

            Sources:
            - https://www.bikecad.ca/bikecad_pro
            - https://www.bikecad.ca/bike_fit_calculator
            - https://www.bikecad.ca/bike_paint_job_generator
            - https://www.bikecad.ca/bike_parts_database
            - https://www.bikecad.ca/bike_geometry_calculator
            - https://www.4shared.com/

            -
            -
            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Film Megaloman Full 45 LUltima Battaglia Contro Capitan Delitto.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Film Megaloman Full 45 LUltima Battaglia Contro Capitan Delitto.md deleted file mode 100644 index 6733cbaf258a5ad09937b6126cff60f3f4c6f9b0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Film Megaloman Full 45 LUltima Battaglia Contro Capitan Delitto.md +++ /dev/null @@ -1,103 +0,0 @@ -
            -

            Download Film Megaloman Full 45

            -

            Are you a fan of classic Japanese superheroes? Do you love watching action-packed scenes with amazing special effects? If so, you might want to download film Megaloman full 45, one of the most popular episodes of the legendary tokusatsu series from the 1970s. In this article, we will tell you what Megaloman is, why you should watch it, and how to download it online in a few easy steps.

            -

            Download Film Megaloman Full 45


            Download File ===== https://tinourl.com/2uL2tE



            -

            What is Megaloman?

            -

            Megaloman (メガロマン) is a Japanese tokusatsu series that aired from May to December 1979 on Fuji TV. Tokusatsu is a genre of live-action television and film that features superheroes, monsters, robots, and special effects. Megaloman is one of the many tokusatsu series that were produced in Japan during the 1970s and 1980s, along with other famous titles such as Ultraman, Kamen Rider, and Super Sentai.

            -

            The story of Megaloman follows Takeshi Shishido (獅子堂武), a young man who lives in a peaceful island called Green Star. One day, he witnesses a UFO crash-landing on his island, and discovers that it contains five aliens from the planet Rosetta. The aliens give him a bracelet that allows him to transform into Megaloman (メガロマン), a giant red-and-silver superhero with incredible powers. Together with his four friends who also receive bracelets, he fights against the evil alien invaders from the planet Megas (メガス), who want to conquer Earth.

            -

            Why you should watch Megaloman?

            -

            There are many reasons why you should watch Megaloman, especially if you are a fan of Japanese culture and entertainment. Here are some of them:

            -

            How to download film Megaloman full 45 for free
            -Watch film Megaloman full 45 online streaming
            -Film Megaloman full 45 subtitle Indonesia download
            -Film Megaloman full 45 HD quality download
            -Film Megaloman full 45 review and synopsis
            -Film Megaloman full 45 cast and crew
            -Film Megaloman full 45 trailer and teaser
            -Film Megaloman full 45 behind the scenes and making of
            -Film Megaloman full 45 soundtrack and theme song
            -Film Megaloman full 45 DVD and Blu-ray release date
            -Film Megaloman full 45 torrent and magnet link download
            -Film Megaloman full 45 Google Drive and Mega link download
            -Film Megaloman full 45 best scenes and moments
            -Film Megaloman full 45 trivia and facts
            -Film Megaloman full 45 fan art and cosplay
            -Film Megaloman full 45 merchandise and collectibles
            -Film Megaloman full 45 ratings and awards
            -Film Megaloman full 45 box office and budget
            -Film Megaloman full 45 sequel and spin-off
            -Film Megaloman full 45 crossover and parody
            -Download film Megaloman series complete collection
            -Download film Megaloman episode 1 to episode 44
            -Download film Megaloman special edition and director's cut
            -Download film Megaloman remastered and restored version
            -Download film Megaloman original and dubbed version
            -Download film Megaloman in English, Japanese, or other languages
            -Download film Megaloman with commentary and bonus features
            -Download film Megaloman in MP4, MKV, AVI, or other formats
            -Download film Megaloman in 720p, 1080p, or other resolutions
            -Download film Megaloman with no ads and no registration required
            -Where to download film Megaloman full 45 legally and safely
            -Alternatives to download film Megaloman full 45 online
            -Tips and tricks to download film Megaloman full 45 faster and easier
            -Problems and solutions to download film Megaloman full 45 successfully
            -Reviews and recommendations of sites to download film Megaloman full 45
            -Comparison of sites to download film Megaloman full 45 by price, quality, and features
            -Coupons and discounts for sites to download film Megaloman full 45
            -Best VPNs for sites to download film Megaloman full 45 anonymously and securely
            -How to watch film Megaloman full 45 offline on PC, laptop, or mobile devices
            -How to watch film Megaloman full 45 on smart TV, Roku, Firestick, or Chromecast
            -How to watch film Megaloman full 45 with friends and family online
            -How to watch film Megaloman full 45 with subtitles or captions
            -How to watch film Megaloman full 45 in different regions or countries
            -How to watch film Megaloman full 45 without buffering or lagging
            -How to watch film Megaloman full 45 in high definition or ultra high definition
            -How to watch film Megaloman full 45 in virtual reality or augmented reality
            -How to watch film Megaloman full 45 in 3D or IMAX
            -How to watch film Megaloman full 45 with surround sound or Dolby Atmos
            -How to watch film Megaloman full 45 on Netflix, Hulu, Amazon Prime Video, or other streaming platforms

            -
              -
            • Nostalgia: If you grew up watching Megaloman or other tokusatsu shows on TV, you might want to relive your childhood memories and enjoy the retro style and charm of these shows.
            • -
            • Entertainment: If you are looking for some fun and excitement, you will love watching Megaloman's thrilling battles against various monsters and villains. You will also appreciate the creativity and craftsmanship of the costumes, props, and special effects that were used in these shows.
            • -
            • Inspiration: If you are looking for some motivation and inspiration, you will admire Megaloman's courage, justice, and friendship. You will also learn some valuable lessons about teamwork, loyalty, and responsibility from his adventures.
            • -
            -

            How to download film Megaloman full 45

            -

            Now that you know what Megaloman is and why you should watch it, you might be wondering how to download it online. Don't worry, we have got you covered. Here are the steps and tips to download film Megaloman full 45 in a safe and legal way.

            -

            Step 1: Find a reliable website that offers the film

            -

            The first step is to find a website that has the film available for download. You can use any search engine such as Google or Bing to look for keywords like "download film Megaloman full 45" or "Megaloman episode 45 download". However, be careful not to click on any suspicious or malicious links that might harm your device or steal your personal information. Here are some tips to find a reliable website:

            -
              -
            • Check the reviews and ratings of the website from other users. You can use sites like Trustpilot or Sitejabber to see what other people think about the website.
            • -
            • Check the domain name and extension of the website. Avoid websites that have strange or unfamiliar domain names or extensions such as .ru or .tk.
            • -
            • Check the security and privacy policies of the website. Look for signs that indicate that the website is secure and encrypted such as HTTPS or a padlock icon in the address bar.
            • -
            • Check the quality and quantity of the content on the website. Look for websites that have a large collection of films and shows in various genres and languages. Also look for websites that offer high-quality videos in HD or Full HD resolution.
            • -
            -

            Step 2: Check the quality and format of the film

            -

            Step 3: Download the film safely and legally

            -

            The third step is to download the film from the website you have chosen. However, before you do that, make sure that you are doing it in a safe and legal way. Here are some tips to download the film safely and legally:

            -
              -
            • Use a VPN or a proxy service to hide your IP address and location. This will protect your privacy and security from hackers and trackers. It will also help you bypass any geo-restrictions or censorship that might prevent you from accessing the website.
            • -
            • Use an antivirus or a malware scanner to scan the file before you open it. This will prevent any viruses, malware, or spyware from infecting your device or stealing your data.
            • -
            • Use a download manager or a torrent client to speed up the download process and resume it if it gets interrupted. This will save you time and bandwidth. It will also allow you to pause and resume the download whenever you want.
            • -
            • Use a legal streaming service or a subscription platform to watch the film online instead of downloading it. This will avoid any copyright infringement or piracy issues that might get you in trouble with the law. It will also support the creators and producers of the film.
            • -
            -

            Step 4: Enjoy the film on your preferred device

            -

            The final step is to enjoy the film on your preferred device. You can transfer, play, and share the film with others using various methods and tools. Here are some tips to enjoy the film on your preferred device:

            -
              -
            • Use a USB cable, a Bluetooth connection, or a cloud service to transfer the film from your computer to your smartphone, tablet, or TV. This will allow you to watch the film on a bigger screen or on the go.
            • -
            • Use a media player or a video converter to play and convert the film to different formats and resolutions. This will ensure that the film is compatible with your device and that it has the best quality possible.
            • -
            • Use a social media platform or a messaging app to share the film with your friends and family. This will allow you to enjoy the film together and discuss it with others.
            • -
            -

            Conclusion

            -

            In conclusion, downloading film Megaloman full 45 is a great way to enjoy one of the most popular episodes of the classic Japanese tokusatsu series from the 1970s. Megaloman is a thrilling and inspiring show that features a young man who transforms into a giant superhero and fights against evil aliens. You can download the film online in four easy steps: find a reliable website that offers the film, check the quality and format of the film, download the film safely and legally, and enjoy the film on your preferred device. We hope that this article has helped you learn how to download film Megaloman full 45 and that you have fun watching it.

            -

            FAQs

            -

            Here are some frequently asked questions and answers about downloading film Megaloman full 45:

            -
              -
            1. How many episodes are there in Megaloman?
              Megaloman has 31 episodes in total, each lasting about 25 minutes. Episode 45 is actually a special compilation episode that features clips from previous episodes.
            2. -
            3. Where can I watch Megaloman online legally?
              You can watch Megaloman online legally on some streaming services or subscription platforms that have licensed the show. Some examples are Amazon Prime Video, Hulu, Netflix, or Crunchyroll.
            4. -
            5. What are some other tokusatsu shows that I can watch?
              There are many other tokusatsu shows that you can watch, depending on your preferences and interests. Some examples are Ultraman, Kamen Rider, Super Sentai, Godzilla, Gamera, or Power Rangers.
            6. -
            7. What are some benefits of watching tokusatsu shows?
              Some benefits of watching tokusatsu shows are that they can stimulate your imagination, creativity, and curiosity. They can also teach you some values such as courage, justice, and friendship. They can also entertain you and make you laugh.
            8. -
            9. What are some challenges of downloading tokusatsu shows?
              Some challenges of downloading tokusatsu shows are that they might be hard to find online, especially if they are old or obscure. They might also have low quality or poor subtitles. They might also be illegal or unsafe to download.
            10. -
            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download xforce keygen Civil 3D 2019 and Create Stunning Designs with Autodesk Software.md b/spaces/raedeXanto/academic-chatgpt-beta/Download xforce keygen Civil 3D 2019 and Create Stunning Designs with Autodesk Software.md deleted file mode 100644 index 65aad2cd0c7a36cd479c7f4806decd92c9319191..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download xforce keygen Civil 3D 2019 and Create Stunning Designs with Autodesk Software.md +++ /dev/null @@ -1,124 +0,0 @@ - -

            Download Xforce Keygen Civil 3D 2019 Download: A Complete Guide

            -

            If you are looking for a way to activate Autodesk Civil 3D 2019, one of the most popular and powerful software packages for civil engineering and design, you may have heard of Xforce Keygen. Xforce Keygen is a tool that can generate activation codes for any Autodesk product, including Civil 3D 2019. But what is Xforce Keygen, how does it work, and how can you use it to download and activate Civil 3D 2019? In this article, we will answer all these questions and more. We will also discuss the benefits and risks of using Xforce Keygen, and provide some tips and precautions to ensure a safe and successful experience.

            -

            What is Xforce Keygen?

            -

            Xforce Keygen is a software that can generate serial numbers and activation codes for various Autodesk products, such as AutoCAD, Revit, Maya, Inventor, and Civil 3D. It is created by a group of hackers who call themselves X-Force. They claim to have cracked the algorithm that Autodesk uses to generate its activation codes, and thus can produce unlimited codes for any Autodesk product.

            -

            download xforce keygen Civil 3D 2019 download


            Download ✔✔✔ https://tinourl.com/2uL1yo



            -

            How Xforce Keygen works

            -

            Xforce Keygen works by mimicking the process that Autodesk uses to verify its products. When you install an Autodesk product, such as Civil 3D 2019, you need to enter a serial number and a product key. These are provided by Autodesk when you purchase the product or sign up for a trial. Then, you need to activate the product online or offline. This is where Xforce Keygen comes in. It can generate a serial number and a product key that match the product you want to activate. Then, it can generate an activation code that can bypass the online or offline verification process. This way, you can activate your Autodesk product without paying or registering.

            -

            Why use Xforce Keygen for Civil 3D 2019

            -

            Civil 3D 2019 is one of the most advanced and comprehensive software for civil engineering and design. It allows you to create, edit, analyze, and document various types of civil projects, such as roads, bridges, tunnels, railways, land development, water resources, and more. It also integrates with other Autodesk products, such as AutoCAD, Revit, InfraWorks, Navisworks, and BIM 360. However, Civil 3D 2019 is also very expensive and requires a subscription or a perpetual license to use. Depending on the plan you choose, you may have to pay hundreds or thousands of dollars per year or per month to access Civil 3D 2019. This can be a huge burden for students, freelancers, hobbyists, or small businesses who want to use Civil 3D 2019 for their projects. That's why some people may resort to using Xforce Keygen to download and activate Civil 3D 2019 for free.

            -

            How to download Xforce Keygen Civil 3D 2019

            If you want to download Xforce Keygen Civil 3D 2019, you need to follow these steps:

            -

            Step 1: Find a reliable source

            -

            The first step is to find a reliable source to download Xforce Keygen Civil 3D 2019. There are many websites and blogs that claim to offer Xforce Keygen for various Autodesk products, but not all of them are trustworthy. Some of them may contain fake or outdated files, or worse, virus or malware that can harm your computer or steal your data. Therefore, you need to be careful and do some research before downloading anything from the internet. You can use some criteria to evaluate the credibility of a source, such as:

            -
              -
            • The reputation and popularity of the website or blog
            • -
            • The reviews and feedback from other users
            • -
            • The date and version of the file
            • -
            • The size and format of the file
            • -
            • The presence and quality of screenshots or videos
            • -
            • The availability and responsiveness of customer support
            • -
            -

            Based on these criteria, you can narrow down your options and choose the best source for downloading Xforce Keygen Civil 3D 2019.

            -

            How to download xforce keygen Civil 3D 2019 for free
            -Download xforce keygen Civil 3D 2019 crack full version
            -Xforce keygen Civil 3D 2019 download link and installation guide
            -Download xforce keygen Civil 3D 2019 offline installer
            -Xforce keygen Civil 3D 2019 activation code generator download
            -Download xforce keygen Civil 3D 2019 patch and serial number
            -Xforce keygen Civil 3D 2019 torrent download with magnet link
            -Download xforce keygen Civil 3D 2019 from official website
            -Xforce keygen Civil 3D 2019 license key and product key download
            -Download xforce keygen Civil 3D 2019 for Windows 10/8/7
            -Xforce keygen Civil 3D 2019 for Mac OS download
            -Download xforce keygen Civil 3D 2019 for Linux
            -Xforce keygen Civil 3D 2019 for Android download
            -Download xforce keygen Civil 3D 2019 for iOS
            -Xforce keygen Civil 3D 2019 online generator download
            -Download xforce keygen Civil 3D 2019 without survey or password
            -Xforce keygen Civil 3D 2019 no virus or malware download
            -Download xforce keygen Civil 3D 2019 safe and secure
            -Xforce keygen Civil 3D 2019 latest version download
            -Download xforce keygen Civil 3D 2019 updated and working
            -Xforce keygen Civil 3D 2019 reviews and ratings download
            -Download xforce keygen Civil 3D 2019 testimonials and feedbacks
            -Xforce keygen Civil 3D 2019 features and benefits download
            -Download xforce keygen Civil 3D 2019 comparison and alternatives
            -Xforce keygen Civil 3D 2019 pros and cons download
            -Download xforce keygen Civil 3D 2019 tips and tricks
            -Xforce keygen Civil 3D 2019 tutorials and videos download
            -Download xforce keygen Civil 3D 2019 FAQs and answers
            -Xforce keygen Civil 3D 2019 support and help download
            -Download xforce keygen Civil 3D 2019 refund policy and guarantee
            -Xforce keygen Civil 3D 2019 discount and coupon code download
            -Download xforce keygen Civil 3D 2019 bonus and freebies
            -Xforce keygen Civil

            -

            Step 2: Download the file

            -

            The next step is to download the file from the source you have chosen. Usually, the file will be in a compressed format, such as ZIP or RAR. You may need to complete some surveys or offers to access the download link, or use a password to unlock the file. Be careful not to click on any ads or pop-ups that may redirect you to other websites or download unwanted programs. Also, make sure you have enough space on your hard drive to store the file.

            -

            Step 3: Extract the file

            -

            The final step is to extract the file using a software such as WinRAR or 7-Zip. You will need to locate the file on your computer and right-click on it. Then, choose the option to extract it to a folder of your choice. You may need to enter a password if the file is encrypted. After extracting the file, you will see a folder containing Xforce Keygen Civil 3D 2019 and some instructions on how to use it.

            -

            How to use Xforce Keygen Civil 3D 2019

            -

            Now that you have downloaded and extracted Xforce Keygen Civil 3D 2019, you can use it to activate Civil 3D 2019 on your computer. Here are the steps you need to follow:

            -

            Step 1: Install Civil 3D 2019

            -

            Before using Xforce Keygen, you need to install Civil 3D 2019 on your computer. You can download Civil 3D 2019 from the official Autodesk website or from any other source you trust. You can choose either a trial version or a full version, depending on your preference. After downloading Civil 3D 2019, run the installer and follow the instructions on the screen. You will need to enter a serial number and a product key during the installation process. You can use any random numbers for these fields, as they will be replaced by Xforce Keygen later.

            -

            Step 2: Run Xforce Keygen as administrator

            -

            After installing Civil 3D 2019, you need to run Xforce Keygen as administrator. To do this, locate Xforce Keygen in the folder where you extracted it and right-click on it. Then, choose the option to run as administrator. This will open Xforce Keygen in a new window.

            -

            Step 3: Generate the activation code

            The next step is to use Xforce Keygen to generate the activation code for Civil 3D 2019. Here are the steps you need to follow:

            -
              -
            • Select the product you want to activate from the drop-down menu. In this case, choose Civil 3D 2019.
            • -
            • Click on the Patch button. This will patch the Autodesk product and allow Xforce Keygen to access it.
            • -
            • Copy the Request Code from Civil 3D 2019. To get the Request Code, you need to launch Civil 3D 2019 and click on the Activate button. This will open a window where you will see the Request Code.
            • -
            • Paste the Request Code into Xforce Keygen and click on the Generate button. This will generate an Activation Code that matches the Request Code.
            • -
            • Copy the Activation Code from Xforce Keygen.
            • -
            -

            Step 4: Enter the activation code in Civil 3D 2019

            -

            The final step is to enter the activation code in Civil 3D 2019 and complete the activation process. Here are the steps you need to follow:

            -
              -
            • Paste the Activation Code into Civil 3D 2019 and click on the Next button. This will verify the Activation Code and activate Civil 3D 2019.
            • -
            • Click on the Finish button to close the window and enjoy Civil 3D 2019.
            • -
            -

            Congratulations! You have successfully downloaded and activated Civil 3D 2019 using Xforce Keygen. You can now use all the features and tools of Civil 3D 2019 for your civil engineering and design projects.

            -

            Benefits of using Xforce Keygen Civil 3D 2019

            -

            Using Xforce Keygen to download and activate Civil 3D 2019 can have some benefits for some users. Here are some of them:

            -

            Access to all features and tools of Civil 3D 2019

            -

            Civil 3D 2019 is a powerful and comprehensive software that offers a range of features and tools for civil engineering and design. It allows you to create, edit, analyze, and document various types of civil projects, such as roads, bridges, tunnels, railways, land development, water resources, and more. It also integrates with other Autodesk products, such as AutoCAD, Revit, InfraWorks, Navisworks, and BIM 360. By using Xforce Keygen to activate Civil 3D 2019, you can access all these features and tools without any limitations or restrictions.

            -

            Save money and time

            As mentioned earlier, you may have to pay hundreds or thousands of dollars per year or per month for a subscription or a perpetual license to access Civil 3D 2019. This can be a huge burden for students, freelancers, hobbyists, or small businesses who want to use Civil 3D 2019 for their projects. By using Xforce Keygen to download and activate Civil 3D 2019 for free, you can save a lot of money and time that you can use for other purposes.

            -

            Enhance your design and engineering skills

            -

            Civil 3D 2019 is a software that can help you improve your design and engineering skills. It can help you learn new techniques, methods, and standards for civil engineering and design. It can also help you develop your creativity, problem-solving, and collaboration skills. By using Xforce Keygen to download and activate Civil 3D 2019, you can have more opportunities and resources to practice and enhance your skills.

            -

            Risks and precautions of using Xforce Keygen Civil 3D 2019

            -

            However, using Xforce Keygen to download and activate Civil 3D 2019 also comes with some risks and precautions that you need to be aware of. Here are some of them:

            -

            Legal and ethical issues

            -

            Xforce Keygen is a software that violates the terms and conditions of Autodesk. It is an illegal and unethical way to use Autodesk products without paying or registering. By using Xforce Keygen, you are infringing the intellectual property rights of Autodesk and its partners. You may face legal consequences or penalties if Autodesk or any other authority catches you using Xforce Keygen. You may also damage your reputation and credibility as a professional or a student if you use Xforce Keygen for your projects.

            -

            Virus and malware threats

            -

            Xforce Keygen is also a software that may contain virus or malware that can harm your computer or steal your data. There are many websites and blogs that offer Xforce Keygen for various Autodesk products, but not all of them are trustworthy. Some of them may contain fake or outdated files, or worse, virus or malware that can infect your computer or compromise your security. You may lose your files, data, or personal information if you download or run Xforce Keygen from an unreliable source.

            -

            Compatibility and performance issues

            -

            Xforce Keygen is also a software that may cause compatibility and performance issues for your computer or Autodesk product. Xforce Keygen may not work properly with some versions or updates of Civil 3D 2019 or other Autodesk products. It may also interfere with some features or functions of Civil 3D 2019 or other Autodesk products. It may also affect the speed, stability, or quality of your computer or Autodesk product. You may experience crashes, errors, glitches, or bugs if you use Xforce Keygen to download and activate Civil 3D 2019.

            -

            Conclusion

            -

            In conclusion, Xforce Keygen is a tool that can generate activation codes for any Autodesk product, including Civil 3D 2019. It can help you download and activate Civil 3D 2019 for free and access all its features and tools. However, it also has some risks and precautions that you need to consider before using it. It is illegal and unethical to use Xforce Keygen without paying or registering for Autodesk products. It may also contain virus or malware that can harm your computer or steal your data. It may also cause compatibility and performance issues for your computer or Autodesk product.

            -

            FAQs

            -

            Here are some frequently asked questions about Xforce Keygen Civil 3D 2019:

            -
              -
            1. Is Xforce Keygen safe to use?
            2. -

              Xforce Keygen is not safe to use. It is an illegal and unethical way to use Autodesk products without paying or registering. It may also contain virus or malware that can harm your computer or steal your data. It may also cause compatibility and performance issues for your computer or Autodesk product.

              -
            3. Where can I download Xforce Keygen Civil 3D 2019?
            4. -

              You can download Xforce Keygen Civil 3D 2019 from various websites and blogs that offer it. However, you need to be careful and do some research before downloading anything from the internet. Not all sources are trustworthy and reliable. Some of them may contain fake or outdated files, or worse, virus or malware that can infect your computer or compromise your security.

              -
            5. How can I activate Civil 3D 2019 without using Xforce Keygen?
            6. -

              You can activate Civil 3D 2019 without using Xforce Keygen by purchasing a subscription or a perpetual license from Autodesk or its authorized resellers. You can choose from different plans and options that suit your needs and budget. You can also sign up for a free trial of Civil 3D 2019 for a limited period of time.

              -
            7. What are the alternatives to Xforce Keygen Civil 3D 2019?
            8. -

              There are some alternatives to Xforce Keygen Civil 3D 2019 that you can use to download and activate Civil 3D 2019 for free or at a lower cost. Some of them are:

              -
                -
              • Crack files: These are files that can modify or replace the original files of Civil 3D 2019 to bypass the activation process.
              • -
              • Key generators: These are software that can generate serial numbers and product keys for Civil 3D 2019.
              • -
              • Patch files: These are files that can patch the Autodesk product and allow it to access the activation code.
              • -
              • Activators: These are software that can activate Civil 3D 2019 without requiring a serial number, a product key, or an activation code.
              • -
              -

              However, these alternatives also have some risks and precautions similar to Xforce Keygen. They are also illegal and unethical to use without paying or registering for Autodesk products. They may also contain virus or malware that can harm your computer or steal your data. They may also cause compatibility and performance issues for your computer or Autodesk product.

              -
            9. How can I contact Xforce Keygen support?
            10. -

              Xforce Keygen does not have an official website or customer support. It is created by a group of hackers who call themselves X-Force. They do not provide any contact information or assistance for their users. However, some websites and blogs that offer Xforce Keygen may have their own customer support or feedback system. You can try to contact them if you have any questions or issues with Xforce Keygen.

              -
            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Galaxy On Fire 2 Hd N8 Nokia.md b/spaces/raedeXanto/academic-chatgpt-beta/Galaxy On Fire 2 Hd N8 Nokia.md deleted file mode 100644 index 735672239e6617e9a653c465dac719dd32bd8f8d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Galaxy On Fire 2 Hd N8 Nokia.md +++ /dev/null @@ -1,18 +0,0 @@ - -

            Galaxy On Fire 2 HD: A Space Adventure Game for Nokia N8

            -

            If you are a fan of sci-fi and space exploration games, you might want to check out Galaxy On Fire 2 HD, a 3D action game that lets you explore a vast galaxy full of planets, star systems, factions, and missions. Galaxy On Fire 2 HD is a remastered version of the original game that was released in 2009, and it features improved graphics, sound, and gameplay for the Nokia N8 smartphone.

            -

            In Galaxy On Fire 2 HD, you play as Keith T. Maxwell, a veteran space pilot who gets thrown into a wormhole and wakes up 35 years later in an unknown part of the galaxy. There, he encounters a mysterious alien threat that is endangering the balance of power in the galaxy. You will have to help Keith find his way back home, while also fighting pirates, mercenaries, and aliens, trading goods, mining asteroids, upgrading your ship, and completing various quests.

            -

            Galaxy On Fire 2 Hd N8 Nokia


            DOWNLOAD ››››› https://tinourl.com/2uL35P



            -

            The game boasts a rich and immersive story mode that spans over 10 hours of gameplay, as well as a free-play mode that lets you explore the galaxy at your own pace. You can choose from over 30 customizable spaceships, each with different weapons, shields, and equipment. You can also visit over 100 different locations across four solar systems, each with its own atmosphere, culture, and politics. You can interact with various characters and factions, and influence the outcome of the story with your decisions.

            -

            Galaxy On Fire 2 HD is a stunning game that showcases the capabilities of the Nokia N8 smartphone. The game features high-resolution textures, detailed models, realistic lighting and shadows, and smooth animations. The game also supports the N8's accelerometer and touch screen controls, as well as its HDMI output and Dolby Digital Plus sound. The game is compatible with Symbian^3 devices such as the Nokia N8, E7, C7, and C6-01.

            -

            If you are looking for a thrilling and captivating space adventure game for your Nokia N8 smartphone, you should definitely give Galaxy On Fire 2 HD a try. You can download it from the Ovi Store for €9.99 or $14.99.

            - -

            Galaxy On Fire 2 HD is not only a game, but also a community. You can connect with other players online and share your achievements, screenshots, and tips. You can also access the official website and forum of the game, where you can find more information, news, updates, and support. You can also join the fan page of the game on Facebook and Twitter, and follow the developer Fishlabs on their social media channels.

            -

            Galaxy On Fire 2 HD is a game that will keep you entertained for hours with its engaging story, stunning graphics, and varied gameplay. Whether you are a casual gamer or a hardcore fan of space games, you will find something to enjoy in this game. Galaxy On Fire 2 HD is a must-have for any Nokia N8 smartphone owner who loves sci-fi and space adventure games.

            - -

            If you want to experience more of the Galaxy On Fire universe, you can also check out the other games in the series. Galaxy On Fire 3: Manticore is a sequel to Galaxy On Fire 2 HD that takes place in the Neox Sector, a lawless region of space where pirates and warlords rule. You can join the mercenary group Manticore and take on dangerous missions and contracts. Galaxy On Fire 3: Manticore is available for iOS and Android devices.

            -

            Galaxy On Fire: Alliances is a multiplayer strategy game that lets you build your own space station, form alliances with other players, and conquer planets and star systems. You can choose from three different factions: the Terrans, the Vossk, or the Nivelians, and compete with other players for resources and territory. Galaxy On Fire: Alliances is available for iOS and Android devices.

            -

            -

            Galaxy On Fire 2 HD is a game that will take you on an epic journey across the galaxy. It is a game that combines action, adventure, exploration, and trading in a rich and immersive sci-fi world. It is a game that you will not regret downloading for your Nokia N8 smartphone.

            81aa517590
            -
            -
            \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/perf_hooks.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/perf_hooks.d.ts deleted file mode 100644 index 5c0b228e7d2a75d3d2726f4c4e02681dba341cac..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/perf_hooks.d.ts +++ /dev/null @@ -1,625 +0,0 @@ -/** - * This module provides an implementation of a subset of the W3C [Web Performance APIs](https://w3c.github.io/perf-timing-primer/) as well as additional APIs for - * Node.js-specific performance measurements. - * - * Node.js supports the following [Web Performance APIs](https://w3c.github.io/perf-timing-primer/): - * - * * [High Resolution Time](https://www.w3.org/TR/hr-time-2) - * * [Performance Timeline](https://w3c.github.io/performance-timeline/) - * * [User Timing](https://www.w3.org/TR/user-timing/) - * - * ```js - * const { PerformanceObserver, performance } = require('perf_hooks'); - * - * const obs = new PerformanceObserver((items) => { - * console.log(items.getEntries()[0].duration); - * performance.clearMarks(); - * }); - * obs.observe({ type: 'measure' }); - * performance.measure('Start to Now'); - * - * performance.mark('A'); - * doSomeLongRunningProcess(() => { - * performance.measure('A to Now', 'A'); - * - * performance.mark('B'); - * performance.measure('A to B', 'A', 'B'); - * }); - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/perf_hooks.js) - */ -declare module 'perf_hooks' { - import { AsyncResource } from 'node:async_hooks'; - type EntryType = 'node' | 'mark' | 'measure' | 'gc' | 'function' | 'http2' | 'http'; - interface NodeGCPerformanceDetail { - /** - * When `performanceEntry.entryType` is equal to 'gc', `the performance.kind` property identifies - * the type of garbage collection operation that occurred. - * See perf_hooks.constants for valid values. - */ - readonly kind?: number | undefined; - /** - * When `performanceEntry.entryType` is equal to 'gc', the `performance.flags` - * property contains additional information about garbage collection operation. - * See perf_hooks.constants for valid values. - */ - readonly flags?: number | undefined; - } - /** - * @since v8.5.0 - */ - class PerformanceEntry { - protected constructor(); - /** - * The total number of milliseconds elapsed for this entry. This value will not - * be meaningful for all Performance Entry types. - * @since v8.5.0 - */ - readonly duration: number; - /** - * The name of the performance entry. - * @since v8.5.0 - */ - readonly name: string; - /** - * The high resolution millisecond timestamp marking the starting time of the - * Performance Entry. - * @since v8.5.0 - */ - readonly startTime: number; - /** - * The type of the performance entry. It may be one of: - * - * * `'node'` (Node.js only) - * * `'mark'` (available on the Web) - * * `'measure'` (available on the Web) - * * `'gc'` (Node.js only) - * * `'function'` (Node.js only) - * * `'http2'` (Node.js only) - * * `'http'` (Node.js only) - * @since v8.5.0 - */ - readonly entryType: EntryType; - /** - * Additional detail specific to the `entryType`. - * @since v16.0.0 - */ - readonly detail?: NodeGCPerformanceDetail | unknown | undefined; // TODO: Narrow this based on entry type. 
- toJSON(): any; - } - class PerformanceMark extends PerformanceEntry { - readonly duration: 0; - readonly entryType: 'mark'; - } - class PerformanceMeasure extends PerformanceEntry { - readonly entryType: 'measure'; - } - /** - * _This property is an extension by Node.js. It is not available in Web browsers._ - * - * Provides timing details for Node.js itself. The constructor of this class - * is not exposed to users. - * @since v8.5.0 - */ - class PerformanceNodeTiming extends PerformanceEntry { - /** - * The high resolution millisecond timestamp at which the Node.js process - * completed bootstrapping. If bootstrapping has not yet finished, the property - * has the value of -1. - * @since v8.5.0 - */ - readonly bootstrapComplete: number; - /** - * The high resolution millisecond timestamp at which the Node.js environment was - * initialized. - * @since v8.5.0 - */ - readonly environment: number; - /** - * The high resolution millisecond timestamp of the amount of time the event loop - * has been idle within the event loop's event provider (e.g. `epoll_wait`). This - * does not take CPU usage into consideration. If the event loop has not yet - * started (e.g., in the first tick of the main script), the property has the - * value of 0. - * @since v14.10.0, v12.19.0 - */ - readonly idleTime: number; - /** - * The high resolution millisecond timestamp at which the Node.js event loop - * exited. If the event loop has not yet exited, the property has the value of -1\. - * It can only have a value of not -1 in a handler of the `'exit'` event. - * @since v8.5.0 - */ - readonly loopExit: number; - /** - * The high resolution millisecond timestamp at which the Node.js event loop - * started. If the event loop has not yet started (e.g., in the first tick of the - * main script), the property has the value of -1. - * @since v8.5.0 - */ - readonly loopStart: number; - /** - * The high resolution millisecond timestamp at which the V8 platform was - * initialized. - * @since v8.5.0 - */ - readonly v8Start: number; - } - interface EventLoopUtilization { - idle: number; - active: number; - utilization: number; - } - /** - * @param util1 The result of a previous call to eventLoopUtilization() - * @param util2 The result of a previous call to eventLoopUtilization() prior to util1 - */ - type EventLoopUtilityFunction = (util1?: EventLoopUtilization, util2?: EventLoopUtilization) => EventLoopUtilization; - interface MarkOptions { - /** - * Additional optional detail to include with the mark. - */ - detail?: unknown | undefined; - /** - * An optional timestamp to be used as the mark time. - * @default `performance.now()`. - */ - startTime?: number | undefined; - } - interface MeasureOptions { - /** - * Additional optional detail to include with the mark. - */ - detail?: unknown | undefined; - /** - * Duration between start and end times. - */ - duration?: number | undefined; - /** - * Timestamp to be used as the end time, or a string identifying a previously recorded mark. - */ - end?: number | string | undefined; - /** - * Timestamp to be used as the start time, or a string identifying a previously recorded mark. - */ - start?: number | string | undefined; - } - interface TimerifyOptions { - /** - * A histogram object created using - * `perf_hooks.createHistogram()` that will record runtime durations in - * nanoseconds. - */ - histogram?: RecordableHistogram | undefined; - } - interface Performance { - /** - * If name is not provided, removes all PerformanceMark objects from the Performance Timeline. 
- * If name is provided, removes only the named mark. - * @param name - */ - clearMarks(name?: string): void; - /** - * If name is not provided, removes all PerformanceMeasure objects from the Performance Timeline. - * If name is provided, removes only the named measure. - * @param name - * @since v16.7.0 - */ - clearMeasures(name?: string): void; - /** - * Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime`. - * If you are only interested in performance entries of certain types or that have certain names, see - * `performance.getEntriesByType()` and `performance.getEntriesByName()`. - * @since v16.7.0 - */ - getEntries(): PerformanceEntry[]; - /** - * Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` - * whose `performanceEntry.name` is equal to `name`, and optionally, whose `performanceEntry.entryType` is equal to `type`. - * @param name - * @param type - * @since v16.7.0 - */ - getEntriesByName(name: string, type?: EntryType): PerformanceEntry[]; - /** - * Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` - * whose `performanceEntry.entryType` is equal to `type`. - * @param type - * @since v16.7.0 - */ - getEntriesByType(type: EntryType): PerformanceEntry[]; - /** - * Creates a new PerformanceMark entry in the Performance Timeline. - * A PerformanceMark is a subclass of PerformanceEntry whose performanceEntry.entryType is always 'mark', - * and whose performanceEntry.duration is always 0. - * Performance marks are used to mark specific significant moments in the Performance Timeline. - * @param name - * @return The PerformanceMark entry that was created - */ - mark(name?: string, options?: MarkOptions): PerformanceMark; - /** - * Creates a new PerformanceMeasure entry in the Performance Timeline. - * A PerformanceMeasure is a subclass of PerformanceEntry whose performanceEntry.entryType is always 'measure', - * and whose performanceEntry.duration measures the number of milliseconds elapsed since startMark and endMark. - * - * The startMark argument may identify any existing PerformanceMark in the the Performance Timeline, or may identify - * any of the timestamp properties provided by the PerformanceNodeTiming class. If the named startMark does not exist, - * then startMark is set to timeOrigin by default. - * - * The endMark argument must identify any existing PerformanceMark in the the Performance Timeline or any of the timestamp - * properties provided by the PerformanceNodeTiming class. If the named endMark does not exist, an error will be thrown. - * @param name - * @param startMark - * @param endMark - * @return The PerformanceMeasure entry that was created - */ - measure(name: string, startMark?: string, endMark?: string): PerformanceMeasure; - measure(name: string, options: MeasureOptions): PerformanceMeasure; - /** - * An instance of the PerformanceNodeTiming class that provides performance metrics for specific Node.js operational milestones. - */ - readonly nodeTiming: PerformanceNodeTiming; - /** - * @return the current high resolution millisecond timestamp - */ - now(): number; - /** - * The timeOrigin specifies the high resolution millisecond timestamp from which all performance metric durations are measured. - */ - readonly timeOrigin: number; - /** - * Wraps a function within a new function that measures the running time of the wrapped function. 
- * A PerformanceObserver must be subscribed to the 'function' event type in order for the timing details to be accessed. - * @param fn - */ - timerify any>(fn: T, options?: TimerifyOptions): T; - /** - * eventLoopUtilization is similar to CPU utilization except that it is calculated using high precision wall-clock time. - * It represents the percentage of time the event loop has spent outside the event loop's event provider (e.g. epoll_wait). - * No other CPU idle time is taken into consideration. - */ - eventLoopUtilization: EventLoopUtilityFunction; - } - interface PerformanceObserverEntryList { - /** - * Returns a list of `PerformanceEntry` objects in chronological order - * with respect to `performanceEntry.startTime`. - * - * ```js - * const { - * performance, - * PerformanceObserver - * } = require('perf_hooks'); - * - * const obs = new PerformanceObserver((perfObserverList, observer) => { - * console.log(perfObserverList.getEntries()); - * - * * [ - * * PerformanceEntry { - * * name: 'test', - * * entryType: 'mark', - * * startTime: 81.465639, - * * duration: 0 - * * }, - * * PerformanceEntry { - * * name: 'meow', - * * entryType: 'mark', - * * startTime: 81.860064, - * * duration: 0 - * * } - * * ] - * - * - * performance.clearMarks(); - * performance.clearMeasures(); - * observer.disconnect(); - * }); - * obs.observe({ type: 'mark' }); - * - * performance.mark('test'); - * performance.mark('meow'); - * ``` - * @since v8.5.0 - */ - getEntries(): PerformanceEntry[]; - /** - * Returns a list of `PerformanceEntry` objects in chronological order - * with respect to `performanceEntry.startTime` whose `performanceEntry.name` is - * equal to `name`, and optionally, whose `performanceEntry.entryType` is equal to`type`. - * - * ```js - * const { - * performance, - * PerformanceObserver - * } = require('perf_hooks'); - * - * const obs = new PerformanceObserver((perfObserverList, observer) => { - * console.log(perfObserverList.getEntriesByName('meow')); - * - * * [ - * * PerformanceEntry { - * * name: 'meow', - * * entryType: 'mark', - * * startTime: 98.545991, - * * duration: 0 - * * } - * * ] - * - * console.log(perfObserverList.getEntriesByName('nope')); // [] - * - * console.log(perfObserverList.getEntriesByName('test', 'mark')); - * - * * [ - * * PerformanceEntry { - * * name: 'test', - * * entryType: 'mark', - * * startTime: 63.518931, - * * duration: 0 - * * } - * * ] - * - * console.log(perfObserverList.getEntriesByName('test', 'measure')); // [] - * - * performance.clearMarks(); - * performance.clearMeasures(); - * observer.disconnect(); - * }); - * obs.observe({ entryTypes: ['mark', 'measure'] }); - * - * performance.mark('test'); - * performance.mark('meow'); - * ``` - * @since v8.5.0 - */ - getEntriesByName(name: string, type?: EntryType): PerformanceEntry[]; - /** - * Returns a list of `PerformanceEntry` objects in chronological order - * with respect to `performanceEntry.startTime` whose `performanceEntry.entryType`is equal to `type`. 
- * - * ```js - * const { - * performance, - * PerformanceObserver - * } = require('perf_hooks'); - * - * const obs = new PerformanceObserver((perfObserverList, observer) => { - * console.log(perfObserverList.getEntriesByType('mark')); - * - * * [ - * * PerformanceEntry { - * * name: 'test', - * * entryType: 'mark', - * * startTime: 55.897834, - * * duration: 0 - * * }, - * * PerformanceEntry { - * * name: 'meow', - * * entryType: 'mark', - * * startTime: 56.350146, - * * duration: 0 - * * } - * * ] - * - * performance.clearMarks(); - * performance.clearMeasures(); - * observer.disconnect(); - * }); - * obs.observe({ type: 'mark' }); - * - * performance.mark('test'); - * performance.mark('meow'); - * ``` - * @since v8.5.0 - */ - getEntriesByType(type: EntryType): PerformanceEntry[]; - } - type PerformanceObserverCallback = (list: PerformanceObserverEntryList, observer: PerformanceObserver) => void; - class PerformanceObserver extends AsyncResource { - constructor(callback: PerformanceObserverCallback); - /** - * Disconnects the `PerformanceObserver` instance from all notifications. - * @since v8.5.0 - */ - disconnect(): void; - /** - * Subscribes the `PerformanceObserver` instance to notifications of new `PerformanceEntry` instances identified either by `options.entryTypes`or `options.type`: - * - * ```js - * const { - * performance, - * PerformanceObserver - * } = require('perf_hooks'); - * - * const obs = new PerformanceObserver((list, observer) => { - * // Called once asynchronously. `list` contains three items. - * }); - * obs.observe({ type: 'mark' }); - * - * for (let n = 0; n < 3; n++) - * performance.mark(`test${n}`); - * ``` - * @since v8.5.0 - */ - observe( - options: - | { - entryTypes: ReadonlyArray; - buffered?: boolean | undefined; - } - | { - type: EntryType; - buffered?: boolean | undefined; - } - ): void; - } - namespace constants { - const NODE_PERFORMANCE_GC_MAJOR: number; - const NODE_PERFORMANCE_GC_MINOR: number; - const NODE_PERFORMANCE_GC_INCREMENTAL: number; - const NODE_PERFORMANCE_GC_WEAKCB: number; - const NODE_PERFORMANCE_GC_FLAGS_NO: number; - const NODE_PERFORMANCE_GC_FLAGS_CONSTRUCT_RETAINED: number; - const NODE_PERFORMANCE_GC_FLAGS_FORCED: number; - const NODE_PERFORMANCE_GC_FLAGS_SYNCHRONOUS_PHANTOM_PROCESSING: number; - const NODE_PERFORMANCE_GC_FLAGS_ALL_AVAILABLE_GARBAGE: number; - const NODE_PERFORMANCE_GC_FLAGS_ALL_EXTERNAL_MEMORY: number; - const NODE_PERFORMANCE_GC_FLAGS_SCHEDULE_IDLE: number; - } - const performance: Performance; - interface EventLoopMonitorOptions { - /** - * The sampling rate in milliseconds. - * Must be greater than zero. - * @default 10 - */ - resolution?: number | undefined; - } - interface Histogram { - /** - * Returns a `Map` object detailing the accumulated percentile distribution. - * @since v11.10.0 - */ - readonly percentiles: Map; - /** - * The number of times the event loop delay exceeded the maximum 1 hour event - * loop delay threshold. - * @since v11.10.0 - */ - readonly exceeds: number; - /** - * The minimum recorded event loop delay. - * @since v11.10.0 - */ - readonly min: number; - /** - * The maximum recorded event loop delay. - * @since v11.10.0 - */ - readonly max: number; - /** - * The mean of the recorded event loop delays. - * @since v11.10.0 - */ - readonly mean: number; - /** - * The standard deviation of the recorded event loop delays. - * @since v11.10.0 - */ - readonly stddev: number; - /** - * Resets the collected histogram data. 
- * @since v11.10.0 - */ - reset(): void; - /** - * Returns the value at the given percentile. - * @since v11.10.0 - * @param percentile A percentile value in the range (0, 100]. - */ - percentile(percentile: number): number; - } - interface IntervalHistogram extends Histogram { - /** - * Enables the update interval timer. Returns `true` if the timer was - * started, `false` if it was already started. - * @since v11.10.0 - */ - enable(): boolean; - /** - * Disables the update interval timer. Returns `true` if the timer was - * stopped, `false` if it was already stopped. - * @since v11.10.0 - */ - disable(): boolean; - } - interface RecordableHistogram extends Histogram { - /** - * @since v15.9.0, v14.18.0 - * @param val The amount to record in the histogram. - */ - record(val: number | bigint): void; - /** - * Calculates the amount of time (in nanoseconds) that has passed since the - * previous call to `recordDelta()` and records that amount in the histogram. - * - * ## Examples - * @since v15.9.0, v14.18.0 - */ - recordDelta(): void; - /** - * Adds the values from other to this histogram. - * @since v17.4.0, v16.14.0 - * @param other Recordable Histogram to combine with - */ - add(other: RecordableHistogram): void; - } - /** - * _This property is an extension by Node.js. It is not available in Web browsers._ - * - * Creates an `IntervalHistogram` object that samples and reports the event loop - * delay over time. The delays will be reported in nanoseconds. - * - * Using a timer to detect approximate event loop delay works because the - * execution of timers is tied specifically to the lifecycle of the libuv - * event loop. That is, a delay in the loop will cause a delay in the execution - * of the timer, and those delays are specifically what this API is intended to - * detect. - * - * ```js - * const { monitorEventLoopDelay } = require('perf_hooks'); - * const h = monitorEventLoopDelay({ resolution: 20 }); - * h.enable(); - * // Do something. - * h.disable(); - * console.log(h.min); - * console.log(h.max); - * console.log(h.mean); - * console.log(h.stddev); - * console.log(h.percentiles); - * console.log(h.percentile(50)); - * console.log(h.percentile(99)); - * ``` - * @since v11.10.0 - */ - function monitorEventLoopDelay(options?: EventLoopMonitorOptions): IntervalHistogram; - interface CreateHistogramOptions { - /** - * The minimum recordable value. Must be an integer value greater than 0. - * @default 1 - */ - min?: number | bigint | undefined; - /** - * The maximum recordable value. Must be an integer value greater than min. - * @default Number.MAX_SAFE_INTEGER - */ - max?: number | bigint | undefined; - /** - * The number of accuracy digits. Must be a number between 1 and 5. - * @default 3 - */ - figures?: number | undefined; - } - /** - * Returns a `RecordableHistogram`. - * @since v15.9.0, v14.18.0 - */ - function createHistogram(options?: CreateHistogramOptions): RecordableHistogram; - - import { performance as _performance } from 'perf_hooks'; - global { - /** - * `performance` is a global reference for `require('perf_hooks').performance` - * https://nodejs.org/api/globals.html#performance - * @since v16.0.0 - */ - var performance: typeof globalThis extends { - onmessage: any; - performance: infer T; - } - ? 
T - : typeof _performance; - } -} -declare module 'node:perf_hooks' { - export * from 'perf_hooks'; -} diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/trace_events.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/trace_events.d.ts deleted file mode 100644 index d47aa9311ec85754ce71d1ee64c8b3bb9f509b20..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/trace_events.d.ts +++ /dev/null @@ -1,171 +0,0 @@ -/** - * The `trace_events` module provides a mechanism to centralize tracing information - * generated by V8, Node.js core, and userspace code. - * - * Tracing can be enabled with the `--trace-event-categories` command-line flag - * or by using the `trace_events` module. The `--trace-event-categories` flag - * accepts a list of comma-separated category names. - * - * The available categories are: - * - * * `node`: An empty placeholder. - * * `node.async_hooks`: Enables capture of detailed `async_hooks` trace data. - * The `async_hooks` events have a unique `asyncId` and a special `triggerId` `triggerAsyncId` property. - * * `node.bootstrap`: Enables capture of Node.js bootstrap milestones. - * * `node.console`: Enables capture of `console.time()` and `console.count()`output. - * * `node.dns.native`: Enables capture of trace data for DNS queries. - * * `node.environment`: Enables capture of Node.js Environment milestones. - * * `node.fs.sync`: Enables capture of trace data for file system sync methods. - * * `node.perf`: Enables capture of `Performance API` measurements. - * * `node.perf.usertiming`: Enables capture of only Performance API User Timing - * measures and marks. - * * `node.perf.timerify`: Enables capture of only Performance API timerify - * measurements. - * * `node.promises.rejections`: Enables capture of trace data tracking the number - * of unhandled Promise rejections and handled-after-rejections. - * * `node.vm.script`: Enables capture of trace data for the `vm` module's`runInNewContext()`, `runInContext()`, and `runInThisContext()` methods. - * * `v8`: The `V8` events are GC, compiling, and execution related. - * - * By default the `node`, `node.async_hooks`, and `v8` categories are enabled. - * - * ```bash - * node --trace-event-categories v8,node,node.async_hooks server.js - * ``` - * - * Prior versions of Node.js required the use of the `--trace-events-enabled`flag to enable trace events. This requirement has been removed. However, the`--trace-events-enabled` flag _may_ still be - * used and will enable the`node`, `node.async_hooks`, and `v8` trace event categories by default. - * - * ```bash - * node --trace-events-enabled - * - * # is equivalent to - * - * node --trace-event-categories v8,node,node.async_hooks - * ``` - * - * Alternatively, trace events may be enabled using the `trace_events` module: - * - * ```js - * const trace_events = require('trace_events'); - * const tracing = trace_events.createTracing({ categories: ['node.perf'] }); - * tracing.enable(); // Enable trace event capture for the 'node.perf' category - * - * // do work - * - * tracing.disable(); // Disable trace event capture for the 'node.perf' category - * ``` - * - * Running Node.js with tracing enabled will produce log files that can be opened - * in the [`chrome://tracing`](https://www.chromium.org/developers/how-tos/trace-event-profiling-tool) tab of Chrome. 
- * - * The logging file is by default called `node_trace.${rotation}.log`, where`${rotation}` is an incrementing log-rotation id. The filepath pattern can - * be specified with `--trace-event-file-pattern` that accepts a template - * string that supports `${rotation}` and `${pid}`: - * - * ```bash - * node --trace-event-categories v8 --trace-event-file-pattern '${pid}-${rotation}.log' server.js - * ``` - * - * To guarantee that the log file is properly generated after signal events like`SIGINT`, `SIGTERM`, or `SIGBREAK`, make sure to have the appropriate handlers - * in your code, such as: - * - * ```js - * process.on('SIGINT', function onSigint() { - * console.info('Received SIGINT.'); - * process.exit(130); // Or applicable exit code depending on OS and signal - * }); - * ``` - * - * The tracing system uses the same time source - * as the one used by `process.hrtime()`. - * However the trace-event timestamps are expressed in microseconds, - * unlike `process.hrtime()` which returns nanoseconds. - * - * The features from this module are not available in `Worker` threads. - * @experimental - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/trace_events.js) - */ -declare module 'trace_events' { - /** - * The `Tracing` object is used to enable or disable tracing for sets of - * categories. Instances are created using the - * `trace_events.createTracing()` method. - * - * When created, the `Tracing` object is disabled. Calling the - * `tracing.enable()` method adds the categories to the set of enabled trace - * event categories. Calling `tracing.disable()` will remove the categories - * from the set of enabled trace event categories. - */ - interface Tracing { - /** - * A comma-separated list of the trace event categories covered by this - * `Tracing` object. - */ - readonly categories: string; - /** - * Disables this `Tracing` object. - * - * Only trace event categories _not_ covered by other enabled `Tracing` - * objects and _not_ specified by the `--trace-event-categories` flag - * will be disabled. - */ - disable(): void; - /** - * Enables this `Tracing` object for the set of categories covered by - * the `Tracing` object. - */ - enable(): void; - /** - * `true` only if the `Tracing` object has been enabled. - */ - readonly enabled: boolean; - } - interface CreateTracingOptions { - /** - * An array of trace category names. Values included in the array are - * coerced to a string when possible. An error will be thrown if the - * value cannot be coerced. - */ - categories: string[]; - } - /** - * Creates and returns a `Tracing` object for the given set of `categories`. - * - * ```js - * const trace_events = require('trace_events'); - * const categories = ['node.perf', 'node.async_hooks']; - * const tracing = trace_events.createTracing({ categories }); - * tracing.enable(); - * // do stuff - * tracing.disable(); - * ``` - * @since v10.0.0 - * @return . - */ - function createTracing(options: CreateTracingOptions): Tracing; - /** - * Returns a comma-separated list of all currently-enabled trace event - * categories. The current set of enabled trace event categories is determined - * by the _union_ of all currently-enabled `Tracing` objects and any categories - * enabled using the `--trace-event-categories` flag. - * - * Given the file `test.js` below, the command`node --trace-event-categories node.perf test.js` will print`'node.async_hooks,node.perf'` to the console. 
- * - * ```js - * const trace_events = require('trace_events'); - * const t1 = trace_events.createTracing({ categories: ['node.async_hooks'] }); - * const t2 = trace_events.createTracing({ categories: ['node.perf'] }); - * const t3 = trace_events.createTracing({ categories: ['v8'] }); - * - * t1.enable(); - * t2.enable(); - * - * console.log(trace_events.getEnabledCategories()); - * ``` - * @since v10.0.0 - */ - function getEnabledCategories(): string | undefined; -} -declare module 'node:trace_events' { - export * from 'trace_events'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk Products 2009 Keygen Xforce.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk Products 2009 Keygen Xforce.md deleted file mode 100644 index 254473726249a222010a168a1bc77906f760d596..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk Products 2009 Keygen Xforce.md +++ /dev/null @@ -1,22 +0,0 @@ -

            autodesk products 2009 keygen xforce


            Download Zip ❤❤❤ https://urlgoal.com/2uCKZ6



            - -iphone. is 1gb free on the iphone 5 and the 4s. let me know. - -Honestly, I don't think this is going to work for you. Why not just do a CMD + ALT + DEL and see if a reset helps? - -This happened on my wife's iPhone4 last night. I was able to take over the device. The only change made to the device was a full respring and I was able to get into it once again. There is no way you are going to be able to do anything from the iPhone. - -It was the battery at fault, it's so bad it had heat issues and shut itself off, as well as making the phone appear offline. It just didn't do anything, every keystroke on the phone was recorded on the screen. If you have a laptop with a battery, you may be able to put the laptop to sleep and that would take care of the issue. - -As for the other issue, iTunes will be listed as being on the phone, not iTunes, since it will be using the Apple ID. However, you cannot use itunes to update, because itunes is signed in to your iCloud account. You have to do it via the browser or the computer, since itunes will be signed in to your account (even though it's not being used). - -All the software updates will be available through iTunes. You can download the updates through iTunes and install them to the device. If you go into settings, under "update" the option will say "download and install." Just click on that and go to it, you should be good. - -Also if you go into settings > general > storage > advanced, you should see how much space is available on the phone. If you are close to the max, then you can free some of the space. You can use a couple of applications at once and you can uninstall them afterwards. You can have multiple computers and still have their cache files on the phone, not to mention saved games. If you have any apps on there that you dont want, you can uninstall them and the cache/data files will remain on the phone. - -You might also look into the ios6 beta. You could jailbreak if you wanted to and put it on the iPhone. iS6 is set to be released in the fall, but I don't think apple will release a beta. - -hey i forgot my apple id password and i just connected 4fefd39f24
            -
            -
            -

            diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download BIM 360 Field 2016 Crack Keygen.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download BIM 360 Field 2016 Crack Keygen.md deleted file mode 100644 index c74542949a0ef6e3517963048495f6736cf3caff..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download BIM 360 Field 2016 Crack Keygen.md +++ /dev/null @@ -1,11 +0,0 @@ -

            free download BIM 360 Field 2016 crack keygen


            Download Filehttps://urlgoal.com/2uCJqy



            -
            -Download a free 30-day trial of 3ds Max, the 3D modeling and rendering software for rendering design, games, and animation to create with complete artistic control. Create unique 3D objects and render your designs in a realistic 3D environment. -Create professional animations, games, and interactive shows. -3ds Max is part of Autodesk® Inventor® software -Start Autodesk 3ds Max software and open your project. -Use the Open Project option on the top toolbar to open your project. -After that, click the Parts tab and click the Add to Build button. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (pirates Of The Caribbean 4 Tamil Dub) LINK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (pirates Of The Caribbean 4 Tamil Dub) LINK.md deleted file mode 100644 index a53400caf93251f436f49d431227b20569070941..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (pirates Of The Caribbean 4 Tamil Dub) LINK.md +++ /dev/null @@ -1,55 +0,0 @@ -
            -

            How to Watch Pirates of the Caribbean 4 Tamil Dubbed Movie Online in HD Quality

            -

            Pirates of the Caribbean 4: On Stranger Tides is the fourth installment of the popular fantasy adventure film series starring Johnny Depp as Captain Jack Sparrow. The film was released in 2011 and was directed by Rob Marshall. The film follows Jack Sparrow as he searches for the Fountain of Youth, while being pursued by his former lover Angelica (Penélope Cruz), who is the daughter of the notorious pirate Blackbeard (Ian McShane).

            -

            HD Online Player (pirates of the caribbean 4 tamil dub)


Download File: https://urlgoal.com/2uCKGb



            -

If you are a fan of the Pirates of the Caribbean series and want to watch the fourth movie in the Tamil dubbed version, you might be wondering how to find a reliable and high-quality online player that can stream the movie in HD quality. In this article, we will show you some of the best options to watch the Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality.

            - -

            YouTube

            -

            One of the easiest and most accessible ways to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality is to use YouTube. YouTube is a popular video-sharing platform that hosts millions of videos, including movies, trailers, clips, and scenes. You can find many videos related to Pirates of the Caribbean 4 Tamil dubbed movie on YouTube, such as:

            -
              -
• Pirates of the Caribbean 4: On Stranger Tides (2011) (தமிழ்) - Tamil Dubbed - Movie Scene - HD: This video shows a scene from the movie where Jack Sparrow meets Angelica for the first time after many years. The video has over 4.7K views and is uploaded by Ssv Super.
• Pirates of the Caribbean: The Curse of the Black Pearl (2003) - Tamilyogi HD: This video shows the full movie of Pirates of the Caribbean: The Curse of the Black Pearl, which is the first movie of the series, in Tamil dubbed version. The video has over 1.3M views and is uploaded by Tamilyogi HD.
            -

            To watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality on YouTube, you need to follow these steps:

            -
              -
1. Go to YouTube.com and type "Pirates of the Caribbean 4 Tamil dubbed" in the search box.
2. Filter the results by choosing "Video" and "HD" from the options on the left side.
3. Choose a video that matches your preference and click on it.
4. Enjoy watching Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality on YouTube.
            - -

            SoundCloud

            -

            Another option to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality is to use SoundCloud. SoundCloud is a popular audio-sharing platform that hosts millions of tracks, podcasts, and playlists. You can find many tracks related to Pirates of the Caribbean 4 Tamil dubbed movie on SoundCloud, such as:

            -

            -
              -
• HD Online Player (pirates Of The Caribbean 4 Tamil Dub) by Charles Francois: This track is a recording of an online player that streams Pirates of the Caribbean 4 Tamil dubbed movie in HD quality. The track has over 1K plays and is uploaded by Charles Francois.
• HD Online Player (pirates Of The Caribbean 4 Tamil Dub) by Darius: This track is another recording of an online player that streams Pirates of the Caribbean 4 Tamil dubbed movie in HD quality. The track has over 500 plays and is uploaded by Darius.
            -

            To watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality on SoundCloud, you need to follow these steps:

            -
              -
1. Go to SoundCloud.com and type "Pirates of the Caribbean 4 Tamil dubbed" in the search box.
2. Filter the results by choosing "Tracks" and "HD" from the options on the left side.
3. Choose a track that matches your preference and click on it.
4. Enjoy listening to Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality on SoundCloud.
            - -

            Conclusion

            -

In this article, we have shown you some of the best options to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality. You can use YouTube or SoundCloud to find videos or tracks related to Pirates of the Caribbean 4 Tamil dubbed movie and stream them in HD quality. We hope you enjoy watching Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality with these options.

            -

            Other Options to Watch Pirates of the Caribbean 4 Tamil Dubbed Movie Online in HD Quality

            -

            Besides YouTube and SoundCloud, there are some other options to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality. However, these options may not be as reliable or safe as YouTube and SoundCloud, and may require you to pay a subscription fee or download additional software. Some of these options are:

            -
              -
• Tamilyogi HD: This is a website that hosts Tamil dubbed movies and TV shows in HD quality. You can find Pirates of the Caribbean 4 Tamil dubbed movie on this website, along with other movies from the series. However, this website may contain pop-up ads and malware that can harm your device or compromise your privacy.
• Karahvi.fi: This is a website that provides PDF files of various topics, including movies, books, games, and software. You can find a PDF file of Pirates of the Caribbean 4 Tamil dubbed movie on this website, along with a link to an online player that streams the movie in HD quality. However, this website may require you to register an account or complete a survey before accessing the PDF file or the online player.
            -

Therefore, we recommend using YouTube or SoundCloud to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality, as they are more trustworthy and convenient than these other options.

            -

In this article, we have shown you some of the best options to watch Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality. You can use YouTube or SoundCloud to find videos or tracks related to Pirates of the Caribbean 4 Tamil dubbed movie and stream them in HD quality. We have also mentioned some other options that may not be as reliable or safe as YouTube and SoundCloud, and may require you to pay a subscription fee or download additional software. We hope you enjoy watching Pirates of the Caribbean 4 Tamil dubbed movie online in HD quality with these options.

            -
            -
            \ No newline at end of file diff --git a/spaces/rewoo/ReWOO-Demo/nodes/Planner.py b/spaces/rewoo/ReWOO-Demo/nodes/Planner.py deleted file mode 100644 index 063b04ddcfbce9c199868cea8e7dc245e71d2233..0000000000000000000000000000000000000000 --- a/spaces/rewoo/ReWOO-Demo/nodes/Planner.py +++ /dev/null @@ -1,39 +0,0 @@ -from nodes.LLMNode import LLMNode -from nodes.Worker import WORKER_REGISTRY -from prompts.planner import * -from utils.util import LLAMA_WEIGHTS - - -class Planner(LLMNode): - def __init__(self, workers, prefix=DEFAULT_PREFIX, suffix=DEFAULT_SUFFIX, fewshot=DEFAULT_FEWSHOT, - model_name="text-davinci-003", stop=None): - super().__init__("Planner", model_name, stop, input_type=str, output_type=str) - self.workers = workers - self.prefix = prefix - self.worker_prompt = self._generate_worker_prompt() - self.suffix = suffix - self.fewshot = fewshot - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - prompt = self.prefix + self.worker_prompt + self.fewshot + self.suffix + input + '\n' - if self.model_name in LLAMA_WEIGHTS: - prompt = [self.prefix + self.worker_prompt, input] - response = self.call_llm(prompt, self.stop) - completion = response["output"] - if log: - return response - return completion - - def _get_worker(self, name): - if name in WORKER_REGISTRY: - return WORKER_REGISTRY[name] - else: - raise ValueError("Worker not found") - - def _generate_worker_prompt(self): - prompt = "Tools can be one of the following:\n" - for name in self.workers: - worker = self._get_worker(name) - prompt += f"{worker.name}[input]: {worker.description}\n" - return prompt + "\n" diff --git a/spaces/rifkat/Uz-Text-Summarization/README.md b/spaces/rifkat/Uz-Text-Summarization/README.md deleted file mode 100644 index b8cbf793388cb3af77bad285f5706946cd8a391a..0000000000000000000000000000000000000000 --- a/spaces/rifkat/Uz-Text-Summarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Uz Text Summarization -emoji: 🐨 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/dino.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/dino.py deleted file mode 100644 index 6eae52cf0af83ffe94fc22fbab0a5698812aae27..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/dino.py +++ /dev/null @@ -1,778 +0,0 @@ -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR model and criterion classes. -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-# ------------------------------------------------------------------------ -# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR) -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# ------------------------------------------------------------------------ -import copy -import math -from typing import List -import torch -import torch.nn.functional as F -from torch import nn -from torchvision.ops.boxes import nms - -from .util import box_ops -from .util.misc import (NestedTensor, nested_tensor_from_tensor_list, - accuracy, get_world_size, interpolate, - is_dist_avail_and_initialized, inverse_sigmoid) - -from .backbone import build_backbone -from .matcher import build_matcher -from .segmentation import (dice_loss) -from .deformable_transformer import build_deformable_transformer -from .utils import sigmoid_focal_loss, MLP - -from .dn_components import prepare_for_cdn, dn_post_process - - -class DINO(nn.Module): - """ This is the Cross-Attention Detector module that performs object detection """ - - def __init__(self, backbone, transformer, num_classes, num_queries, - aux_loss=False, iter_update=False, - query_dim=2, - random_refpoints_xy=False, - fix_refpoints_hw=-1, - num_feature_levels=1, - nheads=8, - # two stage - two_stage_type='no', # ['no', 'standard'] - two_stage_add_query_num=0, - dec_pred_class_embed_share=True, - dec_pred_bbox_embed_share=True, - two_stage_class_embed_share=True, - two_stage_bbox_embed_share=True, - decoder_sa_type='sa', - num_patterns=0, - dn_number=100, - dn_box_noise_scale=0.4, - dn_label_noise_ratio=0.5, - dn_labelbook_size=100, - ): - """ Initializes the model. - Parameters: - backbone: torch module of the backbone to be used. See backbone.py - transformer: torch module of the transformer architecture. See transformer.py - num_classes: number of object classes - num_queries: number of object queries, ie detection slot. This is the maximal number of objects - Conditional DETR can detect in a single image. For COCO, we recommend 100 queries. - aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used. 
- - fix_refpoints_hw: -1(default): learn w and h for each box seperately - >0 : given fixed number - -2 : learn a shared w and h - """ - super().__init__() - self.num_queries = num_queries - self.transformer = transformer - self.num_classes = num_classes - self.hidden_dim = hidden_dim = transformer.d_model - self.num_feature_levels = num_feature_levels - self.nheads = nheads - self.label_enc = nn.Embedding(dn_labelbook_size + 1, hidden_dim) - - # setting query dim - self.query_dim = query_dim - assert query_dim == 4 - self.random_refpoints_xy = random_refpoints_xy - self.fix_refpoints_hw = fix_refpoints_hw - - # for dn training - self.num_patterns = num_patterns - self.dn_number = dn_number - self.dn_box_noise_scale = dn_box_noise_scale - self.dn_label_noise_ratio = dn_label_noise_ratio - self.dn_labelbook_size = dn_labelbook_size - - # prepare input projection layers - if num_feature_levels > 1: - num_backbone_outs = len(backbone.num_channels) - input_proj_list = [] - for _ in range(num_backbone_outs): - in_channels = backbone.num_channels[_] - input_proj_list.append(nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - )) - for _ in range(num_feature_levels - num_backbone_outs): - input_proj_list.append(nn.Sequential( - nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(32, hidden_dim), - )) - in_channels = hidden_dim - self.input_proj = nn.ModuleList(input_proj_list) - else: - assert two_stage_type == 'no', "two_stage_type should be no if num_feature_levels=1 !!!" - self.input_proj = nn.ModuleList([ - nn.Sequential( - nn.Conv2d(backbone.num_channels[-1], hidden_dim, kernel_size=1), - nn.GroupNorm(32, hidden_dim), - )]) - - self.backbone = backbone - self.aux_loss = aux_loss - self.box_pred_damping = box_pred_damping = None - - self.iter_update = iter_update - assert iter_update, "Why not iter_update?" 
- - # prepare pred layers - self.dec_pred_class_embed_share = dec_pred_class_embed_share - self.dec_pred_bbox_embed_share = dec_pred_bbox_embed_share - # prepare class & box embed - _class_embed = nn.Linear(hidden_dim, num_classes) - _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3) - # init the two embed layers - prior_prob = 0.01 - bias_value = -math.log((1 - prior_prob) / prior_prob) - _class_embed.bias.data = torch.ones(self.num_classes) * bias_value - nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0) - nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0) - - if dec_pred_bbox_embed_share: - box_embed_layerlist = [_bbox_embed for i in range(transformer.num_decoder_layers)] - else: - box_embed_layerlist = [copy.deepcopy(_bbox_embed) for i in range(transformer.num_decoder_layers)] - if dec_pred_class_embed_share: - class_embed_layerlist = [_class_embed for i in range(transformer.num_decoder_layers)] - else: - class_embed_layerlist = [copy.deepcopy(_class_embed) for i in range(transformer.num_decoder_layers)] - self.bbox_embed = nn.ModuleList(box_embed_layerlist) - self.class_embed = nn.ModuleList(class_embed_layerlist) - self.transformer.decoder.bbox_embed = self.bbox_embed - self.transformer.decoder.class_embed = self.class_embed - - # two stage - self.two_stage_type = two_stage_type - self.two_stage_add_query_num = two_stage_add_query_num - assert two_stage_type in ['no', 'standard'], "unknown param {} of two_stage_type".format(two_stage_type) - if two_stage_type != 'no': - if two_stage_bbox_embed_share: - assert dec_pred_class_embed_share and dec_pred_bbox_embed_share - self.transformer.enc_out_bbox_embed = _bbox_embed - else: - self.transformer.enc_out_bbox_embed = copy.deepcopy(_bbox_embed) - - if two_stage_class_embed_share: - assert dec_pred_class_embed_share and dec_pred_bbox_embed_share - self.transformer.enc_out_class_embed = _class_embed - else: - self.transformer.enc_out_class_embed = copy.deepcopy(_class_embed) - - self.refpoint_embed = None - if self.two_stage_add_query_num > 0: - self.init_ref_points(two_stage_add_query_num) - - self.decoder_sa_type = decoder_sa_type - assert decoder_sa_type in ['sa', 'ca_label', 'ca_content'] - # self.replace_sa_with_double_ca = replace_sa_with_double_ca - if decoder_sa_type == 'ca_label': - self.label_embedding = nn.Embedding(num_classes, hidden_dim) - for layer in self.transformer.decoder.layers: - layer.label_embedding = self.label_embedding - else: - for layer in self.transformer.decoder.layers: - layer.label_embedding = None - self.label_embedding = None - - self._reset_parameters() - - def _reset_parameters(self): - # init input_proj - for proj in self.input_proj: - nn.init.xavier_uniform_(proj[0].weight, gain=1) - nn.init.constant_(proj[0].bias, 0) - - def init_ref_points(self, use_num_queries): - self.refpoint_embed = nn.Embedding(use_num_queries, self.query_dim) - - if self.random_refpoints_xy: - # import ipdb; ipdb.set_trace() - self.refpoint_embed.weight.data[:, :2].uniform_(0, 1) - self.refpoint_embed.weight.data[:, :2] = inverse_sigmoid(self.refpoint_embed.weight.data[:, :2]) - self.refpoint_embed.weight.data[:, :2].requires_grad = False - - if self.fix_refpoints_hw > 0: - print("fix_refpoints_hw: {}".format(self.fix_refpoints_hw)) - assert self.random_refpoints_xy - self.refpoint_embed.weight.data[:, 2:] = self.fix_refpoints_hw - self.refpoint_embed.weight.data[:, 2:] = inverse_sigmoid(self.refpoint_embed.weight.data[:, 2:]) - self.refpoint_embed.weight.data[:, 2:].requires_grad = False - elif 
int(self.fix_refpoints_hw) == -1: - pass - elif int(self.fix_refpoints_hw) == -2: - print('learn a shared h and w') - assert self.random_refpoints_xy - self.refpoint_embed = nn.Embedding(use_num_queries, 2) - self.refpoint_embed.weight.data[:, :2].uniform_(0, 1) - self.refpoint_embed.weight.data[:, :2] = inverse_sigmoid(self.refpoint_embed.weight.data[:, :2]) - self.refpoint_embed.weight.data[:, :2].requires_grad = False - self.hw_embed = nn.Embedding(1, 1) - else: - raise NotImplementedError('Unknown fix_refpoints_hw {}'.format(self.fix_refpoints_hw)) - - def forward(self, samples: NestedTensor, targets: List = None): - """ The forward expects a NestedTensor, which consists of: - - samples.tensor: batched images, of shape [batch_size x 3 x H x W] - - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels - - It returns a dict with the following elements: - - "pred_logits": the classification logits (including no-object) for all queries. - Shape= [batch_size x num_queries x num_classes] - - "pred_boxes": The normalized boxes coordinates for all queries, represented as - (center_x, center_y, width, height). These values are normalized in [0, 1], - relative to the size of each individual image (disregarding possible padding). - See PostProcess for information on how to retrieve the unnormalized bounding box. - - "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of - dictionnaries containing the two above keys for each decoder layer. - """ - if isinstance(samples, (list, torch.Tensor)): - samples = nested_tensor_from_tensor_list(samples) - features, poss = self.backbone(samples) - - srcs = [] - masks = [] - for l, feat in enumerate(features): - src, mask = feat.decompose() - srcs.append(self.input_proj[l](src)) - masks.append(mask) - assert mask is not None - if self.num_feature_levels > len(srcs): - _len_srcs = len(srcs) - for l in range(_len_srcs, self.num_feature_levels): - if l == _len_srcs: - src = self.input_proj[l](features[-1].tensors) - else: - src = self.input_proj[l](srcs[-1]) - m = samples.mask - mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0] - pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype) - srcs.append(src) - masks.append(mask) - poss.append(pos_l) - - if self.dn_number > 0 or targets is not None: - input_query_label, input_query_bbox, attn_mask, dn_meta = \ - prepare_for_cdn(dn_args=(targets, self.dn_number, self.dn_label_noise_ratio, self.dn_box_noise_scale), - training=self.training, num_queries=self.num_queries, num_classes=self.num_classes, - hidden_dim=self.hidden_dim, label_enc=self.label_enc) - else: - assert targets is None - input_query_bbox = input_query_label = attn_mask = dn_meta = None - - hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(srcs, masks, input_query_bbox, poss, - input_query_label, attn_mask) - # In case num object=0 - hs[0] += self.label_enc.weight[0, 0] * 0.0 - - # deformable-detr-like anchor update - # reference_before_sigmoid = inverse_sigmoid(reference[:-1]) # n_dec, bs, nq, 4 - outputs_coord_list = [] - for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate(zip(reference[:-1], self.bbox_embed, hs)): - layer_delta_unsig = layer_bbox_embed(layer_hs) - layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig) - layer_outputs_unsig = layer_outputs_unsig.sigmoid() - outputs_coord_list.append(layer_outputs_unsig) - outputs_coord_list = torch.stack(outputs_coord_list) - - # outputs_class 
= self.class_embed(hs) - outputs_class = torch.stack([layer_cls_embed(layer_hs) for - layer_cls_embed, layer_hs in zip(self.class_embed, hs)]) - if self.dn_number > 0 and dn_meta is not None: - outputs_class, outputs_coord_list = \ - dn_post_process(outputs_class, outputs_coord_list, - dn_meta, self.aux_loss, self._set_aux_loss) - out = {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord_list[-1]} - if self.aux_loss: - out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord_list) - - # for encoder output - if hs_enc is not None: - # prepare intermediate outputs - interm_coord = ref_enc[-1] - interm_class = self.transformer.enc_out_class_embed(hs_enc[-1]) - out['interm_outputs'] = {'pred_logits': interm_class, 'pred_boxes': interm_coord} - out['interm_outputs_for_matching_pre'] = {'pred_logits': interm_class, 'pred_boxes': init_box_proposal} - - # prepare enc outputs - # import ipdb; ipdb.set_trace() - if hs_enc.shape[0] > 1: - enc_outputs_coord = [] - enc_outputs_class = [] - for layer_id, (layer_box_embed, layer_class_embed, layer_hs_enc, layer_ref_enc) in enumerate( - zip(self.enc_bbox_embed, self.enc_class_embed, hs_enc[:-1], ref_enc[:-1])): - layer_enc_delta_unsig = layer_box_embed(layer_hs_enc) - layer_enc_outputs_coord_unsig = layer_enc_delta_unsig + inverse_sigmoid(layer_ref_enc) - layer_enc_outputs_coord = layer_enc_outputs_coord_unsig.sigmoid() - - layer_enc_outputs_class = layer_class_embed(layer_hs_enc) - enc_outputs_coord.append(layer_enc_outputs_coord) - enc_outputs_class.append(layer_enc_outputs_class) - - # enc_delta_unsig = self.enc_bbox_embed(hs_enc[:-1]) - # enc_outputs_unsig = enc_delta_unsig + ref_enc[:-1] - # enc_outputs_coord = enc_outputs_unsig.sigmoid() - # enc_outputs_class = self.enc_class_embed(hs_enc[:-1]) - out['enc_outputs'] = [ - {'pred_logits': a, 'pred_boxes': b} for a, b in zip(enc_outputs_class, enc_outputs_coord) - ] - - out['dn_meta'] = dn_meta - - return out - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_coord): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - return [{'pred_logits': a, 'pred_boxes': b} - for a, b in zip(outputs_class[:-1], outputs_coord[:-1])] - - -class SetCriterion(nn.Module): - """ This class computes the loss for Conditional DETR. - The process happens in two steps: - 1) we compute hungarian assignment between ground truth boxes and the outputs of the model - 2) we supervise each pair of matched ground-truth / prediction (supervise class and box) - """ - - def __init__(self, num_classes, matcher, weight_dict, focal_alpha, losses): - """ Create the criterion. - Parameters: - num_classes: number of object categories, omitting the special no-object category - matcher: module able to compute a matching between targets and proposals - weight_dict: dict containing as key the names of the losses and as values their relative weight. - losses: list of all the losses to be applied. See get_loss for list of available losses. 
- focal_alpha: alpha in Focal Loss - """ - super().__init__() - self.num_classes = num_classes - self.matcher = matcher - self.weight_dict = weight_dict - self.losses = losses - self.focal_alpha = focal_alpha - - def loss_labels(self, outputs, targets, indices, num_boxes, log=True): - """Classification loss (Binary focal loss) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - assert 'pred_logits' in outputs - src_logits = outputs['pred_logits'] - - idx = self._get_src_permutation_idx(indices) - target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]) - target_classes = torch.full(src_logits.shape[:2], self.num_classes, - dtype=torch.int64, device=src_logits.device) - target_classes[idx] = target_classes_o - - target_classes_onehot = torch.zeros([src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1], - dtype=src_logits.dtype, layout=src_logits.layout, device=src_logits.device) - target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1) - - target_classes_onehot = target_classes_onehot[:, :, :-1] - loss_ce = sigmoid_focal_loss(src_logits, target_classes_onehot, num_boxes, alpha=self.focal_alpha, gamma=2) * \ - src_logits.shape[1] - losses = {'loss_ce': loss_ce} - - if log: - # TODO this should probably be a separate loss, not hacked in this one here - losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0] - return losses - - @torch.no_grad() - def loss_cardinality(self, outputs, targets, indices, num_boxes): - """ Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes - This is not really a loss, it is intended for logging purposes only. It doesn't propagate gradients - """ - pred_logits = outputs['pred_logits'] - device = pred_logits.device - tgt_lengths = torch.as_tensor([len(v["labels"]) for v in targets], device=device) - # Count the number of predictions that are NOT "no-object" (which is the last class) - card_pred = (pred_logits.argmax(-1) != pred_logits.shape[-1] - 1).sum(1) - card_err = F.l1_loss(card_pred.float(), tgt_lengths.float()) - losses = {'cardinality_error': card_err} - return losses - - def loss_boxes(self, outputs, targets, indices, num_boxes): - """Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss - targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4] - The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size. - """ - assert 'pred_boxes' in outputs - idx = self._get_src_permutation_idx(indices) - src_boxes = outputs['pred_boxes'][idx] - target_boxes = torch.cat([t['boxes'][i] for t, (_, i) in zip(targets, indices)], dim=0) - - loss_bbox = F.l1_loss(src_boxes, target_boxes, reduction='none') - - losses = {} - losses['loss_bbox'] = loss_bbox.sum() / num_boxes - - loss_giou = 1 - torch.diag(box_ops.generalized_box_iou( - box_ops.box_cxcywh_to_xyxy(src_boxes), - box_ops.box_cxcywh_to_xyxy(target_boxes))) - losses['loss_giou'] = loss_giou.sum() / num_boxes - - # calculate the x,y and h,w loss - with torch.no_grad(): - losses['loss_xy'] = loss_bbox[..., :2].sum() / num_boxes - losses['loss_hw'] = loss_bbox[..., 2:].sum() / num_boxes - - return losses - - def loss_masks(self, outputs, targets, indices, num_boxes): - """Compute the losses related to the masks: the focal loss and the dice loss. 
- targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w] - """ - assert "pred_masks" in outputs - - src_idx = self._get_src_permutation_idx(indices) - tgt_idx = self._get_tgt_permutation_idx(indices) - src_masks = outputs["pred_masks"] - src_masks = src_masks[src_idx] - masks = [t["masks"] for t in targets] - # TODO use valid to mask invalid areas due to padding in loss - target_masks, valid = nested_tensor_from_tensor_list(masks).decompose() - target_masks = target_masks.to(src_masks) - target_masks = target_masks[tgt_idx] - - # upsample predictions to the target size - src_masks = interpolate(src_masks[:, None], size=target_masks.shape[-2:], - mode="bilinear", align_corners=False) - src_masks = src_masks[:, 0].flatten(1) - - target_masks = target_masks.flatten(1) - target_masks = target_masks.view(src_masks.shape) - losses = { - "loss_mask": sigmoid_focal_loss(src_masks, target_masks, num_boxes), - "loss_dice": dice_loss(src_masks, target_masks, num_boxes), - } - return losses - - def _get_src_permutation_idx(self, indices): - # permute predictions following indices - batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)]) - src_idx = torch.cat([src for (src, _) in indices]) - return batch_idx, src_idx - - def _get_tgt_permutation_idx(self, indices): - # permute targets following indices - batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)]) - tgt_idx = torch.cat([tgt for (_, tgt) in indices]) - return batch_idx, tgt_idx - - def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs): - loss_map = { - 'labels': self.loss_labels, - 'cardinality': self.loss_cardinality, - 'boxes': self.loss_boxes, - 'masks': self.loss_masks, - # 'dn_labels': self.loss_dn_labels, - # 'dn_boxes': self.loss_dn_boxes - } - assert loss in loss_map, f'do you really want to compute {loss} loss?' - return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs) - - def forward(self, outputs, targets, return_indices=False): - """ This performs the loss computation. - Parameters: - outputs: dict of tensors, see the output specification of the model for the format - targets: list of dicts, such that len(targets) == batch_size. - The expected keys in each dict depends on the losses applied, see each loss' doc - - return_indices: used for vis. if True, the layer0-5 indices will be returned as well. 
- - """ - outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs'} - device = next(iter(outputs.values())).device - indices = self.matcher(outputs_without_aux, targets) - - if return_indices: - indices0_copy = indices - indices_list = [] - - # Compute the average number of target boxes accross all nodes, for normalization purposes - num_boxes = sum(len(t["labels"]) for t in targets) - num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=device) - if is_dist_avail_and_initialized(): - torch.distributed.all_reduce(num_boxes) - num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item() - - # Compute all the requested losses - losses = {} - - # prepare for dn loss - dn_meta = outputs['dn_meta'] - - if self.training and dn_meta and 'output_known_lbs_bboxes' in dn_meta: - output_known_lbs_bboxes, single_pad, scalar = self.prep_for_dn(dn_meta) - - dn_pos_idx = [] - dn_neg_idx = [] - for i in range(len(targets)): - if len(targets[i]['labels']) > 0: - t = torch.range(0, len(targets[i]['labels']) - 1).long().cuda() - t = t.unsqueeze(0).repeat(scalar, 1) - tgt_idx = t.flatten() - output_idx = (torch.tensor(range(scalar)) * single_pad).long().cuda().unsqueeze(1) + t - output_idx = output_idx.flatten() - else: - output_idx = tgt_idx = torch.tensor([]).long().cuda() - - dn_pos_idx.append((output_idx, tgt_idx)) - dn_neg_idx.append((output_idx + single_pad // 2, tgt_idx)) - - output_known_lbs_bboxes = dn_meta['output_known_lbs_bboxes'] - l_dict = {} - for loss in self.losses: - kwargs = {} - if 'labels' in loss: - kwargs = {'log': False} - l_dict.update( - self.get_loss(loss, output_known_lbs_bboxes, targets, dn_pos_idx, num_boxes * scalar, **kwargs)) - - l_dict = {k + f'_dn': v for k, v in l_dict.items()} - losses.update(l_dict) - else: - l_dict = dict() - l_dict['loss_bbox_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_giou_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_ce_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_xy_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_hw_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['cardinality_error_dn'] = torch.as_tensor(0.).to('cuda') - losses.update(l_dict) - - for loss in self.losses: - losses.update(self.get_loss(loss, outputs, targets, indices, num_boxes)) - - # In case of auxiliary losses, we repeat this process with the output of each intermediate layer. - if 'aux_outputs' in outputs: - for idx, aux_outputs in enumerate(outputs['aux_outputs']): - indices = self.matcher(aux_outputs, targets) - if return_indices: - indices_list.append(indices) - for loss in self.losses: - if loss == 'masks': - # Intermediate masks losses are too costly to compute, we ignore them. 
- continue - kwargs = {} - if loss == 'labels': - # Logging is enabled only for the last layer - kwargs = {'log': False} - l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_boxes, **kwargs) - l_dict = {k + f'_{idx}': v for k, v in l_dict.items()} - losses.update(l_dict) - - if self.training and dn_meta and 'output_known_lbs_bboxes' in dn_meta: - aux_outputs_known = output_known_lbs_bboxes['aux_outputs'][idx] - l_dict = {} - for loss in self.losses: - kwargs = {} - if 'labels' in loss: - kwargs = {'log': False} - - l_dict.update(self.get_loss(loss, aux_outputs_known, targets, dn_pos_idx, num_boxes * scalar, - **kwargs)) - - l_dict = {k + f'_dn_{idx}': v for k, v in l_dict.items()} - losses.update(l_dict) - else: - l_dict = dict() - l_dict['loss_bbox_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_giou_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_ce_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_xy_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['loss_hw_dn'] = torch.as_tensor(0.).to('cuda') - l_dict['cardinality_error_dn'] = torch.as_tensor(0.).to('cuda') - l_dict = {k + f'_{idx}': v for k, v in l_dict.items()} - losses.update(l_dict) - - # interm_outputs loss - if 'interm_outputs' in outputs: - interm_outputs = outputs['interm_outputs'] - indices = self.matcher(interm_outputs, targets) - if return_indices: - indices_list.append(indices) - for loss in self.losses: - if loss == 'masks': - # Intermediate masks losses are too costly to compute, we ignore them. - continue - kwargs = {} - if loss == 'labels': - # Logging is enabled only for the last layer - kwargs = {'log': False} - l_dict = self.get_loss(loss, interm_outputs, targets, indices, num_boxes, **kwargs) - l_dict = {k + f'_interm': v for k, v in l_dict.items()} - losses.update(l_dict) - - # enc output loss - if 'enc_outputs' in outputs: - for i, enc_outputs in enumerate(outputs['enc_outputs']): - indices = self.matcher(enc_outputs, targets) - if return_indices: - indices_list.append(indices) - for loss in self.losses: - if loss == 'masks': - # Intermediate masks losses are too costly to compute, we ignore them. 
- continue - kwargs = {} - if loss == 'labels': - # Logging is enabled only for the last layer - kwargs = {'log': False} - l_dict = self.get_loss(loss, enc_outputs, targets, indices, num_boxes, **kwargs) - l_dict = {k + f'_enc_{i}': v for k, v in l_dict.items()} - losses.update(l_dict) - - if return_indices: - indices_list.append(indices0_copy) - return losses, indices_list - - return losses - - def prep_for_dn(self, dn_meta): - output_known_lbs_bboxes = dn_meta['output_known_lbs_bboxes'] - num_dn_groups, pad_size = dn_meta['num_dn_group'], dn_meta['pad_size'] - assert pad_size % num_dn_groups == 0 - single_pad = pad_size // num_dn_groups - - return output_known_lbs_bboxes, single_pad, num_dn_groups - - -class PostProcess(nn.Module): - """ This module converts the model's output into the format expected by the coco api""" - - def __init__(self, num_select=100, nms_iou_threshold=-1) -> None: - super().__init__() - self.num_select = num_select - self.nms_iou_threshold = nms_iou_threshold - - @torch.no_grad() - def forward(self, outputs, target_sizes, not_to_xyxy=False, test=False): - """ Perform the computation - Parameters: - outputs: raw outputs of the model - target_sizes: tensor of dimension [batch_size x 2] containing the size of each images of the batch - For evaluation, this must be the original image size (before any data augmentation) - For visualization, this should be the image size after data augment, but before padding - """ - num_select = self.num_select - out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes'] - - assert len(out_logits) == len(target_sizes) - assert target_sizes.shape[1] == 2 - - prob = out_logits.sigmoid() - topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), num_select, dim=1) - scores = topk_values - topk_boxes = topk_indexes // out_logits.shape[2] - labels = topk_indexes % out_logits.shape[2] - if not_to_xyxy: - boxes = out_bbox - else: - boxes = box_ops.box_cxcywh_to_xyxy(out_bbox) - - if test: - assert not not_to_xyxy - boxes[:, :, 2:] = boxes[:, :, 2:] - boxes[:, :, :2] - boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4)) - - # and from relative [0, 1] to absolute [0, height] coordinates - img_h, img_w = target_sizes.unbind(1) - scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1) - boxes = boxes * scale_fct[:, None, :] - - if self.nms_iou_threshold > 0: - item_indices = [nms(b, s, iou_threshold=self.nms_iou_threshold) for b, s in zip(boxes, scores)] - # import ipdb; ipdb.set_trace() - results = [{'scores': s[i], 'labels': l[i], 'boxes': b[i]} for s, l, b, i in - zip(scores, labels, boxes, item_indices)] - else: - results = [{'scores': s, 'labels': l, 'boxes': b} for s, l, b in zip(scores, labels, boxes)] - - return results - - -def build_dino(args): - # the `num_classes` naming here is somewhat misleading. - # it indeed corresponds to `max_obj_id + 1`, where max_obj_id - # is the maximum id for a class in your dataset. For example, - # COCO has a max_obj_id of 90, so we pass `num_classes` to be 91. - # As another example, for a dataset that has a single class with id 1, - # you should pass `num_classes` to be 2 (max_obj_id + 1). 
- # For more details on this, check the following discussion - # https://github.com/facebookresearch/detr/issues/108#issuecomment-650269223 - # num_classes = 20 if args.dataset_file != 'coco' else 91 - # if args.dataset_file == "coco_panoptic": - # # for panoptic, we just add a num_classes that is large enough to hold - # # max_obj_id + 1, but the exact value doesn't really matter - # num_classes = 250 - # if args.dataset_file == 'o365': - # num_classes = 366 - # if args.dataset_file == 'vanke': - # num_classes = 51 - num_classes = args.num_classes - - backbone = build_backbone(args) - - transformer = build_deformable_transformer(args) - - try: - match_unstable_error = args.match_unstable_error - dn_labelbook_size = args.dn_labelbook_size - except: - match_unstable_error = True - dn_labelbook_size = num_classes - - try: - dec_pred_class_embed_share = args.dec_pred_class_embed_share - except: - dec_pred_class_embed_share = True - try: - dec_pred_bbox_embed_share = args.dec_pred_bbox_embed_share - except: - dec_pred_bbox_embed_share = True - - model = DINO( - backbone, - transformer, - num_classes=num_classes, - num_queries=args.num_queries, - aux_loss=True, - iter_update=True, - query_dim=4, - random_refpoints_xy=args.random_refpoints_xy, - fix_refpoints_hw=args.fix_refpoints_hw, - num_feature_levels=args.num_feature_levels, - nheads=args.nheads, - dec_pred_class_embed_share=dec_pred_class_embed_share, - dec_pred_bbox_embed_share=dec_pred_bbox_embed_share, - # two stage - two_stage_type=args.two_stage_type, - # box_share - two_stage_bbox_embed_share=args.two_stage_bbox_embed_share, - two_stage_class_embed_share=args.two_stage_class_embed_share, - decoder_sa_type=args.decoder_sa_type, - num_patterns=args.num_patterns, - dn_number=args.dn_number if args.use_dn else 0, - dn_box_noise_scale=args.dn_box_noise_scale, - dn_label_noise_ratio=args.dn_label_noise_ratio, - dn_labelbook_size=dn_labelbook_size, - ) - matcher = build_matcher(args) - - # prepare weight dict - box_postprocessor = PostProcess(num_select=args.num_select, nms_iou_threshold=args.nms_iou_threshold) - - return model, matcher, box_postprocessor diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fifa Street 4 Pc Game Free Download Torrent Hit 57.md b/spaces/rorallitri/biomedical-language-models/logs/Fifa Street 4 Pc Game Free Download Torrent Hit 57.md deleted file mode 100644 index 63370ff1bc2826ad08687682a66d33c889a17825..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Fifa Street 4 Pc Game Free Download Torrent Hit 57.md +++ /dev/null @@ -1,95 +0,0 @@ - -

            Fifa Street 4 Pc Game Free Download Torrent Hit 57: How to Enjoy the Ultimate Street Soccer Experience

            -

If you are a fan of soccer games, you may have heard of Fifa Street 4, a sports video game that focuses on street soccer culture and skills. Fifa Street 4, also known as Fifa Street 2012, was developed by EA Canada and published by EA Sports in 2012. It is the fourth installment in the Fifa Street series and the first one to use the FIFA game engine.

            -

            Fifa Street 4 features over 50 teams and over 35 locations from around the world, where you can play various modes such as World Tour, Freestyle, Last Man Standing, Panna Rules, and more. You can also customize your own team and player with different outfits, accessories, and skills. You can also challenge other players online or offline in multiplayer matches.

            -

            Fifa Street 4 Pc Game Free Download Torrent Hit 57


            Download Zip ->>->>->> https://tinurll.com/2uzo0c



            -

            Fifa Street 4 is a fun and exciting game that showcases the authentic street soccer culture and style. You can perform amazing tricks, dribbles, passes, and shots with realistic physics and animations. You can also experience the atmosphere and vibe of different street soccer venues with dynamic crowds and music.

            -

            However, Fifa Street 4 is not a cheap game, and you may need to pay a lot of money to get it for your PC. But what if we tell you that there is a way to get Fifa Street 4 for free by using a torrent download? In this article, we will show you how to download and install Fifa Street 4 for PC using a torrent file that has been hit by over 57 users. We will also show you some screenshots and reviews of the game to help you decide if it is worth it.

            -

            How to Download Fifa Street 4 for PC using Torrent

            -

To download Fifa Street 4 for PC using torrent, you will need two things: a torrent client and a torrent file. A torrent client is software that allows you to download files from other users who are sharing them on a peer-to-peer network. A torrent file is a small file that contains information about the files you want to download, such as their names, sizes, locations, and sources.

            -

            There are many torrent clients available online, such as uTorrent, BitTorrent, qBittorrent, and more. You can choose any one of them according to your preference and compatibility. You can download them from their official websites or from other trusted sources.

            -

            Once you have installed a torrent client on your PC, you will need to find a torrent file for Fifa Street 4. There are many websites that offer torrent files for various games, movies, music, software, and more. However, not all of them are safe and reliable. Some of them may contain malware, viruses, spyware, or other malicious programs that may harm your system or steal your data.

            -

Therefore, you need to be careful and selective when choosing a torrent file for Fifa Street 4. You need to check the ratings, comments, reviews, and feedback of other users who have downloaded the same file before. You also need to check the file size, seeders, leechers, and health of the file. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but not sharing it back. Health is the ratio of seeders to leechers. The higher the health, the faster and more stable the download speed.

            -

            One of the websites that we recommend for finding torrent files for Fifa Street 4 is Technosteria.com. This website offers a torrent file for Fifa Street 4 that has been hit by over 57 users. The file size is 4.47 GB and has a high health of over 90%. The website also provides some screenshots and information about the game.

            -

            To download Fifa Street 4 for PC using torrent from Technosteria.com, you need to follow these steps:

            -

            -
              -
1. Go to https://technosteria.com/fifa-street-4-download-torrent-for-pc/ on your browser.
2. Click on the "Download Torrent" button at the bottom of the page.
3. A new tab will open with a captcha verification. Solve the captcha and click on "Continue".
4. A new page will open with some ads. Wait for a few seconds and click on "Get Link".
5. A new page will open with a download link for the torrent file. Click on "Download" or "Save Link As".
6. Choose a location on your PC where you want to save the torrent file and click on "Save".
7. Open your torrent client and add the torrent file by clicking on "File" > "Add Torrent" or by dragging and dropping it into the client window.
8. Select a location on your PC where you want to save the game files and click on "OK".
9. The download will start automatically. Wait until it is completed.
            -

            How to Install Fifa Street 4 for PC using Torrent

            -

After downloading Fifa Street 4 for PC using torrent, you will need to install it on your PC. To install Fifa Street 4 for PC using torrent, you will need two things: an ISO extractor and an emulator. An ISO extractor is software that allows you to extract or unzip files from an ISO image file. An ISO image file is a single file that contains all the data of a CD or DVD disc. An emulator is software that allows you to run games or applications that are designed for another platform or system on your PC.

            -

            There are many ISO extractors available online, such as WinRAR, 7-Zip, PowerISO, and more. You can choose any one of them according to your preference and compatibility. You can download them from their official websites or from other trusted sources.

            -

            There are also many emulators available online, such as PCSX2, RPCS3, Dolphin, and more. You can choose any one of them according to the platform or system of the game you want to run on your PC. For Fifa Street 4, you will need an emulator that can run PS3 games, such as RPCS3. You can download it from its official website or from other trusted sources.

            -

            Once you have installed an ISO extractor and an emulator on your PC, you will need to follow these steps to install Fifa Street 4 for PC using torrent:

            -
              -
1. Open your ISO extractor and locate the ISO image file of Fifa Street 4 that you have downloaded using torrent. Right-click on it and select "Extract Here" or "Extract to" and choose a location on your PC where you want to extract the files.
2. Open your emulator and locate the folder where you have extracted the files of Fifa Street 4. Select the game file that has the extension .elf or .bin and click on "Open".
3. The game will start loading on the emulator. Wait until it is loaded completely.
4. Enjoy playing Fifa Street 4 on your PC using torrent.
            -

            How to Enjoy Fifa Street 4 for PC using Torrent

            -

Now that you have downloaded and installed Fifa Street 4 for PC using torrent, you may wonder how to enjoy the game and what its features and benefits are. Here are some tips and tricks to help you enjoy Fifa Street 4 for PC using torrent:

            -
              -
• Choose your favorite team and player from over 50 teams and over 35 locations from around the world. You can also customize your own team and player with different outfits, accessories, and skills.
• Play various modes such as World Tour, Freestyle, Last Man Standing, Panna Rules, and more. You can also challenge other players online or offline in multiplayer matches.
• Perform amazing tricks, dribbles, passes, and shots with realistic physics and animations. You can also use the environment and objects to your advantage.
• Experience the atmosphere and vibe of different street soccer venues with dynamic crowds and music. You can also unlock new venues and music tracks as you progress in the game.
• Earn points and rewards for your performance and skills. You can use them to upgrade your team and player or to unlock new items and features.
            -

            Fifa Street 4 is a fun and exciting game that showcases the authentic street soccer culture and style. It is a game that will appeal to both casual and hardcore soccer fans. It is also a game that you can get for free by using a torrent download.

            -

            Conclusion

            -

            In this article, we have shown you how to get Fifa Street 4 for PC for free by using a torrent download. We have also explained how to download and install Fifa Street 4 for PC using torrent. We have also introduced some of the features and benefits of Fifa Street 4 as a street soccer game. We hope this article was informative and helpful for you.

            -

            However, we do not encourage or endorse piracy or illegal use of software products. If you like Fifa Street 4 and find it useful for your street soccer needs, we strongly recommend that you support its developers by purchasing a legitimate copy from their official website or from other authorized sources.

            -

            Frequently Asked Questions about Fifa Street 4 for PC using Torrent

            -

            In this section, we will answer some of the most frequently asked questions about Fifa Street 4 for PC using torrent. If you have any other questions or doubts, feel free to leave a comment below and we will try to answer them as soon as possible.

            -

            Is Fifa Street 4 for PC using torrent safe and reliable?

            -

            Using torrent to download and install Fifa Street 4 for PC may not be safe and reliable. As we have mentioned before, torrent files may contain malware, viruses, spyware, or other harmful programs that may damage your system or steal your data. You may also encounter errors, bugs, crashes, or compatibility issues that may affect your game performance or integrity. You may not be able to access the official support or updates from EA Sports.

            -

Therefore, we advise you to use torrent at your own risk and discretion. We are not responsible for any damages or losses that may occur from using torrent. We also recommend that you use good antivirus software and a VPN service to protect your system and your privacy while using torrent.

            -

            Is Fifa Street 4 for PC using torrent legal and ethical?

            -

            Using torrent to download and install Fifa Street 4 for PC may not be legal and ethical. As we have mentioned before, using torrent is a violation of the software license agreement and the intellectual property rights of EA Sports. It is also unfair to the developers who have invested time and money to create and maintain Fifa Street 4. You may face legal consequences if you are caught using torrent.

            -

            Therefore, we advise you to respect the law and the rights of EA Sports. We are not encouraging or endorsing piracy or illegal use of software products. We are only providing information and guidance for educational purposes only. We also recommend that you support EA Sports by purchasing a legitimate copy of Fifa Street 4 from their official website or from other authorized sources.

            -

            What are the system requirements for Fifa Street 4 for PC using torrent?

            -

            To run Fifa Street 4 for PC using torrent, you will need a PC that meets the following minimum system requirements:

            -
              -
• Operating System: Windows XP/Vista/7/8/10
• Processor: Intel Core 2 Duo E6600 or AMD Athlon X2 5000+
• Memory: 2 GB RAM
• Graphics: NVIDIA GeForce 8800 GT or ATI Radeon HD 3870
• DirectX: Version 9.0c
• Storage: 6 GB available space
• Sound Card: DirectX compatible sound card
            -

            However, to enjoy the best game performance and quality, you will need a PC that meets the following recommended system requirements:

            -
              -
• Operating System: Windows XP/Vista/7/8/10
• Processor: Intel Core i3-530 or AMD Phenom II X4 925
• Memory: 4 GB RAM
• Graphics: NVIDIA GeForce GTX 460 or ATI Radeon HD 5850
• DirectX: Version 11
• Storage: 6 GB available space
• Sound Card: DirectX compatible sound card
            -

            How to uninstall Fifa Street 4 for PC using torrent?

            -

            If you want to uninstall Fifa Street 4 for PC using torrent, you will need to follow these steps:

            -
              -
1. Delete the game files that you have downloaded and extracted using torrent from your PC.
2. Delete the torrent file that you have downloaded from Technosteria.com or any other website from your PC.
3. Delete the ISO extractor and the emulator that you have installed on your PC.
4. Delete any leftover files or folders related to Fifa Street 4 from your PC.
            -

            Conclusion

            -

            In this article, we have shown you how to get Fifa Street 4 for PC for free by using a torrent download. We have also explained how to download and install Fifa Street 4 for PC using torrent. We have also introduced some of the features and benefits of Fifa Street 4 as a street soccer game. We have also answered some of the most frequently asked questions about Fifa Street 4 for PC using torrent.

            -

            Fifa Street 4 is a fun and exciting game that showcases the authentic street soccer culture and style. It is a game that will appeal to both casual and hardcore soccer fans. It is also a game that you can get for free by using a torrent download.

            -

            However, we do not encourage or endorse piracy or illegal use of software products. If you like Fifa Street 4 and find it useful for your street soccer needs, we strongly recommend that you support its developers by purchasing a legitimate copy from their official website or from other authorized sources.

            -

            We hope this article was informative and helpful for you. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading and happy gaming!

            -
            -
            \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Introductory Digital Image Processing Jensen PDF Free Download What You Need to Know.md b/spaces/rorallitri/biomedical-language-models/logs/Introductory Digital Image Processing Jensen PDF Free Download What You Need to Know.md deleted file mode 100644 index 67a907a6b81912c9d0ff7399ac872167ef25270e..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Introductory Digital Image Processing Jensen PDF Free Download What You Need to Know.md +++ /dev/null @@ -1,6 +0,0 @@ -

            introductory digital image processing jensen pdf free download


            DOWNLOAD ✦✦✦ https://tinurll.com/2uzm8c



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/russel0719/deepfake_detector/README.md b/spaces/russel0719/deepfake_detector/README.md deleted file mode 100644 index 0affea8c6afc300eab6f57cc95a7b7c52ae1ee07..0000000000000000000000000000000000000000 --- a/spaces/russel0719/deepfake_detector/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Deepfake Detector -emoji: 💻 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/s3nh/WizardLM-1.0-Uncensored-Llama2-13b-GGML/README.md b/spaces/s3nh/WizardLM-1.0-Uncensored-Llama2-13b-GGML/README.md deleted file mode 100644 index 618569665f44bcb6662c5eae55be70de38ea8176..0000000000000000000000000000000000000000 --- a/spaces/s3nh/WizardLM-1.0-Uncensored-Llama2-13b-GGML/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WizardLM 1.0 Uncensored Llama2 13b GGML -emoji: ⚡ -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/BoletoFastCompletoBaixarCrackeado23.md b/spaces/scedlatioru/img-to-music/example/BoletoFastCompletoBaixarCrackeado23.md deleted file mode 100644 index 7b3b65ebd784678d768efa9805287062c69409f2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/BoletoFastCompletoBaixarCrackeado23.md +++ /dev/null @@ -1,6 +0,0 @@ -

            BoletoFastCompletoBaixarCrackeado23


            DOWNLOAD --->>> https://gohhs.com/2uEzsm



            -
            -2021.01.18 20:49 · Batman Begins Brrip 1080p Download jafrandd. 2021.01.15 00:19 · BoletoFastCompletoBaixarCrackeado23 [2020]. 2021.01.14 23:57 ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/scedlatioru/img-to-music/example/DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.md b/spaces/scedlatioru/img-to-music/example/DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.md deleted file mode 100644 index 23255308c2efc90f8705e81edbcc73415219c7ec..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.md +++ /dev/null @@ -1,7 +0,0 @@ -
            -

            DragonAgeInquisitionCrackOnlyv6ForUpdate93DM. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.

            -

            DragonAgeInquisitionCrackOnlyv6ForUpdate93DM


            Download File >>> https://gohhs.com/2uEyRc



            -

            DragonAgeInquisitionCrackOnlyv6ForUpdate93DM. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.

            -

            https://en.wikipedia.org/wiki/DragonAge_Inquisition_Crack_Only_v6_For_Update_93_DM https://commons.wikimedia.org/wiki/File:DragonAge_Inquisition_Crack_Only_v6_For_Update_93_DM.png DragonAgeInquisitionCrackOnlyv6ForUpdate93DM,Rs Means Book Pdf Free 15, CMSEng V1.0.0.8.T.20100813.42 jonnykens. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM

            [url=https://trello.com/c/jwSWzMyS/5-dragonageinquisitioncrackonlyv6forupdate93dm-johhny]DragonAgeInquisitionCrackOnlyv6ForUpdate93DM[/url] [url=https://commons.wikimedia.org/wiki/File:DragonAge_Inquisition_Crack_Only_v6_For_Update_93_DM.png]DragonAgeInquisitionCrackOnlyv6ForUpdate93DM[/url] [url=https://www.vedorreview.com/wp-content/uploads/2022/06/DragonAgeInquisitionCrackOnlyv6ForUpdate93DM.pdf]DragonAgeInquisitionCrackOnlyv6ForUpdate93DM[/url] [url=https://commons.wikimedia.org/wiki/File:DragonAge_Inquisition_Crack_Only_v6_For_Update_93_DM.png]DragonAgeInquisitionCrackOnlyv6ForUpdate93DM[/url] [url=https://trello.com/c/YRCOsLZW/39-dragonageinquisitioncrackonlyv6forupdate93dm]DragonAgeInquisitionCrackOnlyv6ForUpdate93DM[/url]. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM

            https://commons.wikimedia.org/wiki/File:DragonAge_Inquisition_Crack_Only_v6_For_Update_93_DM.png https://trello.com/c/YRCOsLZW/39-dragonageinquisitioncrackonlyv6forupdate93dm. DragonAgeInquisitionCrackOnlyv6ForUpdate93DM

            This 3125841983. Related links: Spiderman 2.70 Full Crack Rar.zip Online phone unlock code generator DragonAgeInquisitionCrackOnlyv6ForUpdate93DM. https://trello.com/c/79rtiuXp/83-dragonageinquisitioncrackonlyv6forupdate93dm-chrrayn

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Ubot Studio Developer Edition Cracked 14.md b/spaces/scedlatioru/img-to-music/example/Ubot Studio Developer Edition Cracked 14.md deleted file mode 100644 index 65797761f7606f01d0fa3a7124c3b987a3c8f9aa..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Ubot Studio Developer Edition Cracked 14.md +++ /dev/null @@ -1,6 +0,0 @@ -

            ubot studio developer edition cracked 14


            DOWNLOADhttps://gohhs.com/2uEAjm



            - -This is a menu that is only available in the advanced automation settings. By submitting your email, you are opting in to receiving Enterprise Security Alerts from Symantec. The C# control is not currently available from the API or Marketplace. Offering a variety of different options to let you control automation from any device, even if they aren't a Windows PC. R&D - Advanced Automation. AutoScale Plus does more than run applications on demand, it also provides a framework for integrating features into your application. Built-in Backup and Restore Feature. Let us know what you think. Confidentiality, integrity, availability, non-repudiation, accountability, accountability, and integrity have been part of Internet communications and have even been built into domain name system DNS (Domain Name System) registries since 1983. C# source code library for Microsoft Windows. Extends the. Where C# (pronounced "C sharp") is an object-oriented programming language that was originally developed for Microsoft by Bill Gates and Paul Allen. C# is often thought of as a Microsoft proprietary language but it is a part of the Common Language Runtime (CLR) which is a core component of Microsoft. Explore our automated firewalls, network. and applications that deliver the security, performance, and management automation you need.. This is an international standard (ISO/IEC 11770) that is used to define the architecture, security and protocols for an application that is to be trusted to run critical tasks. The following is a list of all the machines that the UBS bot is monitoring: 1 - UBS01. Based on a powerful and flexible architecture, Visual Studio enables an integrated and streamlined development and debugging environment. NET; URL: < Generate scripts to automate your Windows application development. Network virtualization by VxVend, a VM-based C# application that manages, automates and virtualizes your application, cloud environment and data storage. Screen In Windows Forms (C# Programming) Download this book to learn how to make your application design scalable and flexible so that it will fit into any design and fit any situation. 2019 - Microsoft | Open Source. A control is basically a place to embed a control in your application. This video highlights all of these features of the. NET in the Windows Forms Application. A C# Winforms project is a Windows Forms project in Visual Studio 2017. NET (Micro Framework). If you're upgrading from a previous version of the UniCast Agent, see our documentation to get 4fefd39f24
            -
            -
            -

            diff --git a/spaces/sczhou/ProPainter/utils/img_util.py b/spaces/sczhou/ProPainter/utils/img_util.py deleted file mode 100644 index d409a132ff216e6943a276fb5d8cd5f410824883..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/utils/img_util.py +++ /dev/null @@ -1,170 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import torch -from torchvision.utils import make_grid - - -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - else: - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError('Only support 4D, 3D or 2D tensor. ' f'But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1: - result = result[0] - return result - - -def tensor2img_fast(tensor, rgb2bgr=True, min_max=(0, 1)): - """This implementation is slightly faster than tensor2img. - It now only supports torch tensor with shape (1, c, h, w). 
- - Args: - tensor (Tensor): Now only support torch tensor with (1, c, h, w). - rgb2bgr (bool): Whether to change rgb to bgr. Default: True. - min_max (tuple[int]): min and max values for clamp. - """ - output = tensor.squeeze(0).detach().clamp_(*min_max).permute(1, 2, 0) - output = (output - min_max[0]) / (min_max[1] - min_max[0]) * 255 - output = output.type(torch.uint8).cpu().numpy() - if rgb2bgr: - output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR) - return output - - -def imfrombytes(content, flag='color', float32=False): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale` and `unchanged`. - float32 (bool): Whether to change to float32., If True, will also norm - to [0, 1]. Default: False. - - Returns: - ndarray: Loaded image array. - """ - img_np = np.frombuffer(content, np.uint8) - imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED} - img = cv2.imdecode(img_np, imread_flags[flag]) - if float32: - img = img.astype(np.float32) / 255. - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv's :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - return cv2.imwrite(file_path, img, params) - - -def crop_border(imgs, crop_border): - """Crop borders of images. - - Args: - imgs (list[ndarray] | ndarray): Images with shape (h, w, c). - crop_border (int): Crop border for each end of height and weight. - - Returns: - list[ndarray]: Cropped images. - """ - if crop_border == 0: - return imgs - else: - if isinstance(imgs, list): - return [v[crop_border:-crop_border, crop_border:-crop_border, ...] for v in imgs] - else: - return imgs[crop_border:-crop_border, crop_border:-crop_border, ...] 
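For reference, here is a minimal usage sketch for the helpers above (img2tensor and tensor2img). It is an illustration rather than part of the original file: the import path assumes the module is reachable as utils.img_util from the repository root, and opencv-python, numpy, and torch are assumed to be installed.

import numpy as np

from utils.img_util import img2tensor, tensor2img  # assumed import path

# Fake BGR frame standing in for imfrombytes(..., float32=True) output in [0, 1].
bgr = np.random.rand(64, 64, 3).astype(np.float32)

# HWC float32 BGR -> CHW float32 RGB torch.Tensor (no extra value scaling is applied).
tensor = img2tensor(bgr, bgr2rgb=True, float32=True)
assert tuple(tensor.shape) == (3, 64, 64)

# CHW RGB tensor in [0, 1] -> HWC uint8 BGR ndarray, ready for imwrite().
restored = tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1))
assert restored.shape == (64, 64, 3) and restored.dtype == np.uint8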
diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/load_subset.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/load_subset.py deleted file mode 100644 index c16ed0391ae745a736290bb7b956c98539e087ca..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/load_subset.py +++ /dev/null @@ -1,13 +0,0 @@ -import json - - -def load_subset(path): - with open(path, mode='r') as f: - subset = set(f.read().splitlines()) - return subset - - -def load_empty_masks(path): - with open(path, mode='r') as f: - empty_masks = json.load(f) - return empty_masks diff --git a/spaces/seduerr/semantic_search/README.md b/spaces/seduerr/semantic_search/README.md deleted file mode 100644 index f66f87c94ba42c98bfabd4943cbf71cf8875b994..0000000000000000000000000000000000000000 --- a/spaces/seduerr/semantic_search/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: semantic search -emoji: 👀 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet/nets/beam_search.py b/spaces/segments-tobias/conex/espnet/nets/beam_search.py deleted file mode 100644 index fa41753c948621dae51794f7c111188f39bddd49..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/beam_search.py +++ /dev/null @@ -1,512 +0,0 @@ -"""Beam search module.""" - -from itertools import chain -import logging -from typing import Any -from typing import Dict -from typing import List -from typing import NamedTuple -from typing import Tuple -from typing import Union - -import torch - -from espnet.nets.e2e_asr_common import end_detect -from espnet.nets.scorer_interface import PartialScorerInterface -from espnet.nets.scorer_interface import ScorerInterface - - -class Hypothesis(NamedTuple): - """Hypothesis data type.""" - - yseq: torch.Tensor - score: Union[float, torch.Tensor] = 0 - scores: Dict[str, Union[float, torch.Tensor]] = dict() - states: Dict[str, Any] = dict() - - def asdict(self) -> dict: - """Convert data to JSON-friendly dict.""" - return self._replace( - yseq=self.yseq.tolist(), - score=float(self.score), - scores={k: float(v) for k, v in self.scores.items()}, - )._asdict() - - -class BeamSearch(torch.nn.Module): - """Beam search implementation.""" - - def __init__( - self, - scorers: Dict[str, ScorerInterface], - weights: Dict[str, float], - beam_size: int, - vocab_size: int, - sos: int, - eos: int, - token_list: List[str] = None, - pre_beam_ratio: float = 1.5, - pre_beam_score_key: str = None, - ): - """Initialize beam search. 
- - Args: - scorers (dict[str, ScorerInterface]): Dict of decoder modules - e.g., Decoder, CTCPrefixScorer, LM - The scorer will be ignored if it is `None` - weights (dict[str, float]): Dict of weights for each scorers - The scorer will be ignored if its weight is 0 - beam_size (int): The number of hypotheses kept during search - vocab_size (int): The number of vocabulary - sos (int): Start of sequence id - eos (int): End of sequence id - token_list (list[str]): List of tokens for debug log - pre_beam_score_key (str): key of scores to perform pre-beam search - pre_beam_ratio (float): beam size in the pre-beam search - will be `int(pre_beam_ratio * beam_size)` - - """ - super().__init__() - # set scorers - self.weights = weights - self.scorers = dict() - self.full_scorers = dict() - self.part_scorers = dict() - # this module dict is required for recursive cast - # `self.to(device, dtype)` in `recog.py` - self.nn_dict = torch.nn.ModuleDict() - for k, v in scorers.items(): - w = weights.get(k, 0) - if w == 0 or v is None: - continue - assert isinstance( - v, ScorerInterface - ), f"{k} ({type(v)}) does not implement ScorerInterface" - self.scorers[k] = v - if isinstance(v, PartialScorerInterface): - self.part_scorers[k] = v - else: - self.full_scorers[k] = v - if isinstance(v, torch.nn.Module): - self.nn_dict[k] = v - - # set configurations - self.sos = sos - self.eos = eos - self.token_list = token_list - self.pre_beam_size = int(pre_beam_ratio * beam_size) - self.beam_size = beam_size - self.n_vocab = vocab_size - if ( - pre_beam_score_key is not None - and pre_beam_score_key != "full" - and pre_beam_score_key not in self.full_scorers - ): - raise KeyError(f"{pre_beam_score_key} is not found in {self.full_scorers}") - self.pre_beam_score_key = pre_beam_score_key - self.do_pre_beam = ( - self.pre_beam_score_key is not None - and self.pre_beam_size < self.n_vocab - and len(self.part_scorers) > 0 - ) - - def init_hyp(self, x: torch.Tensor) -> List[Hypothesis]: - """Get an initial hypothesis data. - - Args: - x (torch.Tensor): The encoder output feature - - Returns: - Hypothesis: The initial hypothesis. - - """ - init_states = dict() - init_scores = dict() - for k, d in self.scorers.items(): - init_states[k] = d.init_state(x) - init_scores[k] = 0.0 - return [ - Hypothesis( - score=0.0, - scores=init_scores, - states=init_states, - yseq=torch.tensor([self.sos], device=x.device), - ) - ] - - @staticmethod - def append_token(xs: torch.Tensor, x: int) -> torch.Tensor: - """Append new token to prefix tokens. - - Args: - xs (torch.Tensor): The prefix token - x (int): The new token to append - - Returns: - torch.Tensor: New tensor contains: xs + [x] with xs.dtype and xs.device - - """ - x = torch.tensor([x], dtype=xs.dtype, device=xs.device) - return torch.cat((xs, x)) - - def score_full( - self, hyp: Hypothesis, x: torch.Tensor - ) -> Tuple[Dict[str, torch.Tensor], Dict[str, Any]]: - """Score new hypothesis by `self.full_scorers`. 
- - Args: - hyp (Hypothesis): Hypothesis with prefix tokens to score - x (torch.Tensor): Corresponding input feature - - Returns: - Tuple[Dict[str, torch.Tensor], Dict[str, Any]]: Tuple of - score dict of `hyp` that has string keys of `self.full_scorers` - and tensor score values of shape: `(self.n_vocab,)`, - and state dict that has string keys - and state values of `self.full_scorers` - - """ - scores = dict() - states = dict() - for k, d in self.full_scorers.items(): - scores[k], states[k] = d.score(hyp.yseq, hyp.states[k], x) - return scores, states - - def score_partial( - self, hyp: Hypothesis, ids: torch.Tensor, x: torch.Tensor - ) -> Tuple[Dict[str, torch.Tensor], Dict[str, Any]]: - """Score new hypothesis by `self.part_scorers`. - - Args: - hyp (Hypothesis): Hypothesis with prefix tokens to score - ids (torch.Tensor): 1D tensor of new partial tokens to score - x (torch.Tensor): Corresponding input feature - - Returns: - Tuple[Dict[str, torch.Tensor], Dict[str, Any]]: Tuple of - score dict of `hyp` that has string keys of `self.part_scorers` - and tensor score values of shape: `(len(ids),)`, - and state dict that has string keys - and state values of `self.part_scorers` - - """ - scores = dict() - states = dict() - for k, d in self.part_scorers.items(): - scores[k], states[k] = d.score_partial(hyp.yseq, ids, hyp.states[k], x) - return scores, states - - def beam( - self, weighted_scores: torch.Tensor, ids: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Compute topk full token ids and partial token ids. - - Args: - weighted_scores (torch.Tensor): The weighted sum scores for each tokens. - Its shape is `(self.n_vocab,)`. - ids (torch.Tensor): The partial token ids to compute topk - - Returns: - Tuple[torch.Tensor, torch.Tensor]: - The topk full token ids and partial token ids. - Their shapes are `(self.beam_size,)` - - """ - # no pre beam performed - if weighted_scores.size(0) == ids.size(0): - top_ids = weighted_scores.topk(self.beam_size)[1] - return top_ids, top_ids - - # mask pruned in pre-beam not to select in topk - tmp = weighted_scores[ids] - weighted_scores[:] = -float("inf") - weighted_scores[ids] = tmp - top_ids = weighted_scores.topk(self.beam_size)[1] - local_ids = weighted_scores[ids].topk(self.beam_size)[1] - return top_ids, local_ids - - @staticmethod - def merge_scores( - prev_scores: Dict[str, float], - next_full_scores: Dict[str, torch.Tensor], - full_idx: int, - next_part_scores: Dict[str, torch.Tensor], - part_idx: int, - ) -> Dict[str, torch.Tensor]: - """Merge scores for new hypothesis. - - Args: - prev_scores (Dict[str, float]): - The previous hypothesis scores by `self.scorers` - next_full_scores (Dict[str, torch.Tensor]): scores by `self.full_scorers` - full_idx (int): The next token id for `next_full_scores` - next_part_scores (Dict[str, torch.Tensor]): - scores of partial tokens by `self.part_scorers` - part_idx (int): The new token id for `next_part_scores` - - Returns: - Dict[str, torch.Tensor]: The new score dict. - Its keys are names of `self.full_scorers` and `self.part_scorers`. - Its values are scalar tensors by the scorers. - - """ - new_scores = dict() - for k, v in next_full_scores.items(): - new_scores[k] = prev_scores[k] + v[full_idx] - for k, v in next_part_scores.items(): - new_scores[k] = prev_scores[k] + v[part_idx] - return new_scores - - def merge_states(self, states: Any, part_states: Any, part_idx: int) -> Any: - """Merge states for new hypothesis. 
- - Args: - states: states of `self.full_scorers` - part_states: states of `self.part_scorers` - part_idx (int): The new token id for `part_scores` - - Returns: - Dict[str, torch.Tensor]: The new score dict. - Its keys are names of `self.full_scorers` and `self.part_scorers`. - Its values are states of the scorers. - - """ - new_states = dict() - for k, v in states.items(): - new_states[k] = v - for k, d in self.part_scorers.items(): - new_states[k] = d.select_state(part_states[k], part_idx) - return new_states - - def search( - self, running_hyps: List[Hypothesis], x: torch.Tensor - ) -> List[Hypothesis]: - """Search new tokens for running hypotheses and encoded speech x. - - Args: - running_hyps (List[Hypothesis]): Running hypotheses on beam - x (torch.Tensor): Encoded speech feature (T, D) - - Returns: - List[Hypotheses]: Best sorted hypotheses - - """ - best_hyps = [] - part_ids = torch.arange(self.n_vocab, device=x.device) # no pre-beam - for hyp in running_hyps: - # scoring - weighted_scores = torch.zeros(self.n_vocab, dtype=x.dtype, device=x.device) - scores, states = self.score_full(hyp, x) - for k in self.full_scorers: - weighted_scores += self.weights[k] * scores[k] - # partial scoring - if self.do_pre_beam: - pre_beam_scores = ( - weighted_scores - if self.pre_beam_score_key == "full" - else scores[self.pre_beam_score_key] - ) - part_ids = torch.topk(pre_beam_scores, self.pre_beam_size)[1] - part_scores, part_states = self.score_partial(hyp, part_ids, x) - for k in self.part_scorers: - weighted_scores[part_ids] += self.weights[k] * part_scores[k] - # add previous hyp score - weighted_scores += hyp.score - - # update hyps - for j, part_j in zip(*self.beam(weighted_scores, part_ids)): - # will be (2 x beam at most) - best_hyps.append( - Hypothesis( - score=weighted_scores[j], - yseq=self.append_token(hyp.yseq, j), - scores=self.merge_scores( - hyp.scores, scores, j, part_scores, part_j - ), - states=self.merge_states(states, part_states, part_j), - ) - ) - - # sort and prune 2 x beam -> beam - best_hyps = sorted(best_hyps, key=lambda x: x.score, reverse=True)[ - : min(len(best_hyps), self.beam_size) - ] - return best_hyps - - def forward( - self, x: torch.Tensor, maxlenratio: float = 0.0, minlenratio: float = 0.0 - ) -> List[Hypothesis]: - """Perform beam search. - - Args: - x (torch.Tensor): Encoded speech feature (T, D) - maxlenratio (float): Input length ratio to obtain max output length. - If maxlenratio=0.0 (default), it uses a end-detect function - to automatically find maximum hypothesis lengths - minlenratio (float): Input length ratio to obtain min output length. - - Returns: - list[Hypothesis]: N-best decoding results - - """ - # set length bounds - if maxlenratio == 0: - maxlen = x.shape[0] - else: - maxlen = max(1, int(maxlenratio * x.size(0))) - minlen = int(minlenratio * x.size(0)) - logging.info("decoder input length: " + str(x.shape[0])) - logging.info("max output length: " + str(maxlen)) - logging.info("min output length: " + str(minlen)) - - # main loop of prefix search - running_hyps = self.init_hyp(x) - ended_hyps = [] - for i in range(maxlen): - logging.debug("position " + str(i)) - best = self.search(running_hyps, x) - # post process of one iteration - running_hyps = self.post_process(i, maxlen, maxlenratio, best, ended_hyps) - # end detection - if maxlenratio == 0.0 and end_detect([h.asdict() for h in ended_hyps], i): - logging.info(f"end detected at {i}") - break - if len(running_hyps) == 0: - logging.info("no hypothesis. 
Finish decoding.") - break - else: - logging.debug(f"remained hypotheses: {len(running_hyps)}") - - nbest_hyps = sorted(ended_hyps, key=lambda x: x.score, reverse=True) - # check the number of hypotheses reaching to eos - if len(nbest_hyps) == 0: - logging.warning( - "there is no N-best results, perform recognition " - "again with smaller minlenratio." - ) - return ( - [] - if minlenratio < 0.1 - else self.forward(x, maxlenratio, max(0.0, minlenratio - 0.1)) - ) - - # report the best result - best = nbest_hyps[0] - for k, v in best.scores.items(): - logging.info( - f"{v:6.2f} * {self.weights[k]:3} = {v * self.weights[k]:6.2f} for {k}" - ) - logging.info(f"total log probability: {best.score:.2f}") - logging.info(f"normalized log probability: {best.score / len(best.yseq):.2f}") - logging.info(f"total number of ended hypotheses: {len(nbest_hyps)}") - if self.token_list is not None: - logging.info( - "best hypo: " - + "".join([self.token_list[x] for x in best.yseq[1:-1]]) - + "\n" - ) - return nbest_hyps - - def post_process( - self, - i: int, - maxlen: int, - maxlenratio: float, - running_hyps: List[Hypothesis], - ended_hyps: List[Hypothesis], - ) -> List[Hypothesis]: - """Perform post-processing of beam search iterations. - - Args: - i (int): The length of hypothesis tokens. - maxlen (int): The maximum length of tokens in beam search. - maxlenratio (int): The maximum length ratio in beam search. - running_hyps (List[Hypothesis]): The running hypotheses in beam search. - ended_hyps (List[Hypothesis]): The ended hypotheses in beam search. - - Returns: - List[Hypothesis]: The new running hypotheses. - - """ - logging.debug(f"the number of running hypotheses: {len(running_hyps)}") - if self.token_list is not None: - logging.debug( - "best hypo: " - + "".join([self.token_list[x] for x in running_hyps[0].yseq[1:]]) - ) - # add eos in the final loop to avoid that there are no ended hyps - if i == maxlen - 1: - logging.info("adding in the last position in the loop") - running_hyps = [ - h._replace(yseq=self.append_token(h.yseq, self.eos)) - for h in running_hyps - ] - - # add ended hypotheses to a final list, and removed them from current hypotheses - # (this will be a problem, number of hyps < beam) - remained_hyps = [] - for hyp in running_hyps: - if hyp.yseq[-1] == self.eos: - # e.g., Word LM needs to add final score - for k, d in chain(self.full_scorers.items(), self.part_scorers.items()): - s = d.final_score(hyp.states[k]) - hyp.scores[k] += s - hyp = hyp._replace(score=hyp.score + self.weights[k] * s) - ended_hyps.append(hyp) - else: - remained_hyps.append(hyp) - return remained_hyps - - -def beam_search( - x: torch.Tensor, - sos: int, - eos: int, - beam_size: int, - vocab_size: int, - scorers: Dict[str, ScorerInterface], - weights: Dict[str, float], - token_list: List[str] = None, - maxlenratio: float = 0.0, - minlenratio: float = 0.0, - pre_beam_ratio: float = 1.5, - pre_beam_score_key: str = "full", -) -> list: - """Perform beam search with scorers. 
- - Args: - x (torch.Tensor): Encoded speech feature (T, D) - sos (int): Start of sequence id - eos (int): End of sequence id - beam_size (int): The number of hypotheses kept during search - vocab_size (int): The number of vocabulary - scorers (dict[str, ScorerInterface]): Dict of decoder modules - e.g., Decoder, CTCPrefixScorer, LM - The scorer will be ignored if it is `None` - weights (dict[str, float]): Dict of weights for each scorers - The scorer will be ignored if its weight is 0 - token_list (list[str]): List of tokens for debug log - maxlenratio (float): Input length ratio to obtain max output length. - If maxlenratio=0.0 (default), it uses a end-detect function - to automatically find maximum hypothesis lengths - minlenratio (float): Input length ratio to obtain min output length. - pre_beam_score_key (str): key of scores to perform pre-beam search - pre_beam_ratio (float): beam size in the pre-beam search - will be `int(pre_beam_ratio * beam_size)` - - Returns: - list: N-best decoding results - - """ - ret = BeamSearch( - scorers, - weights, - beam_size=beam_size, - vocab_size=vocab_size, - pre_beam_ratio=pre_beam_ratio, - pre_beam_score_key=pre_beam_score_key, - sos=sos, - eos=eos, - token_list=token_list, - ).forward(x=x, maxlenratio=maxlenratio, minlenratio=minlenratio) - return [h.asdict() for h in ret] diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transformer/initializer.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transformer/initializer.py deleted file mode 100644 index 1bce5459c3de47630cdafbace242c0d467c5e733..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transformer/initializer.py +++ /dev/null @@ -1,44 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Parameter initialization.""" - -import torch - -from espnet.nets.pytorch_backend.transformer.layer_norm import LayerNorm - - -def initialize(model, init_type="pytorch"): - """Initialize Transformer module. 
- - :param torch.nn.Module model: transformer instance - :param str init_type: initialization type - """ - if init_type == "pytorch": - return - - # weight init - for p in model.parameters(): - if p.dim() > 1: - if init_type == "xavier_uniform": - torch.nn.init.xavier_uniform_(p.data) - elif init_type == "xavier_normal": - torch.nn.init.xavier_normal_(p.data) - elif init_type == "kaiming_uniform": - torch.nn.init.kaiming_uniform_(p.data, nonlinearity="relu") - elif init_type == "kaiming_normal": - torch.nn.init.kaiming_normal_(p.data, nonlinearity="relu") - else: - raise ValueError("Unknown initialization: " + init_type) - # bias init - for p in model.parameters(): - if p.dim() == 1: - p.data.zero_() - - # reset some modules with default init - for m in model.modules(): - if isinstance(m, (torch.nn.Embedding, LayerNorm)): - m.reset_parameters() diff --git a/spaces/segments-tobias/conex/espnet2/train/trainer.py b/spaces/segments-tobias/conex/espnet2/train/trainer.py deleted file mode 100644 index 5d86bbb7d45b07807b8d2a8bfb782d7715928191..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/train/trainer.py +++ /dev/null @@ -1,771 +0,0 @@ -import argparse -from contextlib import contextmanager -import dataclasses -from dataclasses import is_dataclass -from distutils.version import LooseVersion -import logging -from pathlib import Path -import time -from typing import Dict -from typing import Iterable -from typing import List -from typing import Optional -from typing import Sequence -from typing import Tuple -from typing import Union - -import humanfriendly -import numpy as np -import torch -import torch.nn -import torch.optim -from typeguard import check_argument_types - -from espnet2.iterators.abs_iter_factory import AbsIterFactory -from espnet2.main_funcs.average_nbest_models import average_nbest_models -from espnet2.main_funcs.calculate_all_attentions import calculate_all_attentions -from espnet2.schedulers.abs_scheduler import AbsBatchStepScheduler -from espnet2.schedulers.abs_scheduler import AbsEpochStepScheduler -from espnet2.schedulers.abs_scheduler import AbsScheduler -from espnet2.schedulers.abs_scheduler import AbsValEpochStepScheduler -from espnet2.torch_utils.add_gradient_noise import add_gradient_noise -from espnet2.torch_utils.device_funcs import to_device -from espnet2.torch_utils.recursive_op import recursive_average -from espnet2.torch_utils.set_all_random_seed import set_all_random_seed -from espnet2.train.abs_espnet_model import AbsESPnetModel -from espnet2.train.distributed_utils import DistributedOption -from espnet2.train.reporter import Reporter -from espnet2.train.reporter import SubReporter -from espnet2.utils.build_dataclass import build_dataclass - -if LooseVersion(torch.__version__) >= LooseVersion("1.1.0"): - from torch.utils.tensorboard import SummaryWriter -else: - from tensorboardX import SummaryWriter -if torch.distributed.is_available(): - if LooseVersion(torch.__version__) > LooseVersion("1.0.1"): - from torch.distributed import ReduceOp - else: - from torch.distributed import reduce_op as ReduceOp -else: - ReduceOp = None - -if LooseVersion(torch.__version__) >= LooseVersion("1.6.0"): - from torch.cuda.amp import autocast - from torch.cuda.amp import GradScaler -else: - # Nothing to do if torch<1.6.0 - @contextmanager - def autocast(enabled=True): - yield - - GradScaler = None - -try: - import fairscale -except ImportError: - fairscale = None - - -@dataclasses.dataclass -class TrainerOptions: - ngpu: int - resume: bool - 
use_amp: bool - train_dtype: str - grad_noise: bool - accum_grad: int - grad_clip: float - grad_clip_type: float - log_interval: Optional[int] - no_forward_run: bool - use_tensorboard: bool - use_wandb: bool - output_dir: Union[Path, str] - max_epoch: int - seed: int - sharded_ddp: bool - patience: Optional[int] - keep_nbest_models: Union[int, List[int]] - early_stopping_criterion: Sequence[str] - best_model_criterion: Sequence[Sequence[str]] - val_scheduler_criterion: Sequence[str] - unused_parameters: bool - - -class Trainer: - """Trainer having a optimizer. - - If you'd like to use multiple optimizers, then inherit this class - and override the methods if necessary - at least "train_one_epoch()" - - >>> class TwoOptimizerTrainer(Trainer): - ... @classmethod - ... def add_arguments(cls, parser): - ... ... - ... - ... @classmethod - ... def train_one_epoch(cls, model, optimizers, ...): - ... loss1 = model.model1(...) - ... loss1.backward() - ... optimizers[0].step() - ... - ... loss2 = model.model2(...) - ... loss2.backward() - ... optimizers[1].step() - - """ - - def __init__(self): - raise RuntimeError("This class can't be instantiated.") - - @classmethod - def build_options(cls, args: argparse.Namespace) -> TrainerOptions: - """Build options consumed by train(), eval(), and plot_attention()""" - assert check_argument_types() - return build_dataclass(TrainerOptions, args) - - @classmethod - def add_arguments(cls, parser: argparse.ArgumentParser): - """Reserved for future development of another Trainer""" - pass - - @staticmethod - def resume( - checkpoint: Union[str, Path], - model: torch.nn.Module, - reporter: Reporter, - optimizers: Sequence[torch.optim.Optimizer], - schedulers: Sequence[Optional[AbsScheduler]], - scaler: Optional[GradScaler], - ngpu: int = 0, - ): - states = torch.load( - checkpoint, - map_location=f"cuda:{torch.cuda.current_device()}" if ngpu > 0 else "cpu", - ) - model.load_state_dict(states["model"]) - reporter.load_state_dict(states["reporter"]) - for optimizer, state in zip(optimizers, states["optimizers"]): - optimizer.load_state_dict(state) - for scheduler, state in zip(schedulers, states["schedulers"]): - if scheduler is not None: - scheduler.load_state_dict(state) - if scaler is not None: - if states["scaler"] is None: - logging.warning("scaler state is not found") - else: - scaler.load_state_dict(states["scaler"]) - - logging.info(f"The training was resumed using {checkpoint}") - - @classmethod - def run( - cls, - model: AbsESPnetModel, - optimizers: Sequence[torch.optim.Optimizer], - schedulers: Sequence[Optional[AbsScheduler]], - train_iter_factory: AbsIterFactory, - valid_iter_factory: AbsIterFactory, - plot_attention_iter_factory: Optional[AbsIterFactory], - trainer_options, - distributed_option: DistributedOption, - ) -> None: - """Perform training. This method performs the main process of training.""" - assert check_argument_types() - # NOTE(kamo): Don't check the type more strictly as far trainer_options - assert is_dataclass(trainer_options), type(trainer_options) - assert len(optimizers) == len(schedulers), (len(optimizers), len(schedulers)) - - if isinstance(trainer_options.keep_nbest_models, int): - keep_nbest_models = trainer_options.keep_nbest_models - else: - if len(trainer_options.keep_nbest_models) == 0: - logging.warning("No keep_nbest_models is given. 
Change to [1]") - trainer_options.keep_nbest_models = [1] - keep_nbest_models = max(trainer_options.keep_nbest_models) - - output_dir = Path(trainer_options.output_dir) - reporter = Reporter() - if trainer_options.use_amp: - if LooseVersion(torch.__version__) < LooseVersion("1.6.0"): - raise RuntimeError( - "Require torch>=1.6.0 for Automatic Mixed Precision" - ) - if trainer_options.sharded_ddp: - if fairscale is None: - raise RuntimeError( - "Requiring fairscale. Do 'pip install fairscale'" - ) - scaler = fairscale.optim.grad_scaler.ShardedGradScaler() - else: - scaler = GradScaler() - else: - scaler = None - - if trainer_options.resume and (output_dir / "checkpoint.pth").exists(): - cls.resume( - checkpoint=output_dir / "checkpoint.pth", - model=model, - optimizers=optimizers, - schedulers=schedulers, - reporter=reporter, - scaler=scaler, - ngpu=trainer_options.ngpu, - ) - - start_epoch = reporter.get_epoch() + 1 - if start_epoch == trainer_options.max_epoch + 1: - logging.warning( - f"The training has already reached at max_epoch: {start_epoch}" - ) - - if distributed_option.distributed: - if trainer_options.sharded_ddp: - dp_model = fairscale.nn.data_parallel.ShardedDataParallel( - module=model, - sharded_optimizer=optimizers, - ) - else: - dp_model = torch.nn.parallel.DistributedDataParallel( - model, - device_ids=( - # Perform multi-Process with multi-GPUs - [torch.cuda.current_device()] - if distributed_option.ngpu == 1 - # Perform single-Process with multi-GPUs - else None - ), - output_device=( - torch.cuda.current_device() - if distributed_option.ngpu == 1 - else None - ), - find_unused_parameters=trainer_options.unused_parameters, - ) - elif distributed_option.ngpu > 1: - dp_model = torch.nn.parallel.DataParallel( - model, - device_ids=list(range(distributed_option.ngpu)), - ) - else: - # NOTE(kamo): DataParallel also should work with ngpu=1, - # but for debuggability it's better to keep this block. - dp_model = model - - if trainer_options.use_tensorboard and ( - not distributed_option.distributed or distributed_option.dist_rank == 0 - ): - summary_writer = SummaryWriter(str(output_dir / "tensorboard")) - else: - summary_writer = None - - start_time = time.perf_counter() - for iepoch in range(start_epoch, trainer_options.max_epoch + 1): - if iepoch != start_epoch: - logging.info( - "{}/{}epoch started. Estimated time to finish: {}".format( - iepoch, - trainer_options.max_epoch, - humanfriendly.format_timespan( - (time.perf_counter() - start_time) - / (iepoch - start_epoch) - * (trainer_options.max_epoch - iepoch + 1) - ), - ) - ) - else: - logging.info(f"{iepoch}/{trainer_options.max_epoch}epoch started") - set_all_random_seed(trainer_options.seed + iepoch) - - reporter.set_epoch(iepoch) - # 1. 
Train and validation for one-epoch - with reporter.observe("train") as sub_reporter: - all_steps_are_invalid = cls.train_one_epoch( - model=dp_model, - optimizers=optimizers, - schedulers=schedulers, - iterator=train_iter_factory.build_iter(iepoch), - reporter=sub_reporter, - scaler=scaler, - summary_writer=summary_writer, - options=trainer_options, - distributed_option=distributed_option, - ) - - with reporter.observe("valid") as sub_reporter: - cls.validate_one_epoch( - model=dp_model, - iterator=valid_iter_factory.build_iter(iepoch), - reporter=sub_reporter, - options=trainer_options, - distributed_option=distributed_option, - ) - - if not distributed_option.distributed or distributed_option.dist_rank == 0: - # att_plot doesn't support distributed - if plot_attention_iter_factory is not None: - with reporter.observe("att_plot") as sub_reporter: - cls.plot_attention( - model=model, - output_dir=output_dir / "att_ws", - summary_writer=summary_writer, - iterator=plot_attention_iter_factory.build_iter(iepoch), - reporter=sub_reporter, - options=trainer_options, - ) - - # 2. LR Scheduler step - for scheduler in schedulers: - if isinstance(scheduler, AbsValEpochStepScheduler): - scheduler.step( - reporter.get_value(*trainer_options.val_scheduler_criterion) - ) - elif isinstance(scheduler, AbsEpochStepScheduler): - scheduler.step() - if trainer_options.sharded_ddp: - for optimizer in optimizers: - if isinstance(optimizer, fairscale.optim.oss.OSS): - optimizer.consolidate_state_dict() - - if not distributed_option.distributed or distributed_option.dist_rank == 0: - # 3. Report the results - logging.info(reporter.log_message()) - reporter.matplotlib_plot(output_dir / "images") - if summary_writer is not None: - reporter.tensorboard_add_scalar(summary_writer) - if trainer_options.use_wandb: - reporter.wandb_log() - - # 4. Save/Update the checkpoint - torch.save( - { - "model": model.state_dict(), - "reporter": reporter.state_dict(), - "optimizers": [o.state_dict() for o in optimizers], - "schedulers": [ - s.state_dict() if s is not None else None - for s in schedulers - ], - "scaler": scaler.state_dict() if scaler is not None else None, - }, - output_dir / "checkpoint.pth", - ) - - # 5. Save the model and update the link to the best model - torch.save(model.state_dict(), output_dir / f"{iepoch}epoch.pth") - - # Creates a sym link latest.pth -> {iepoch}epoch.pth - p = output_dir / "latest.pth" - if p.is_symlink() or p.exists(): - p.unlink() - p.symlink_to(f"{iepoch}epoch.pth") - - _improved = [] - for _phase, k, _mode in trainer_options.best_model_criterion: - # e.g. _phase, k, _mode = "train", "loss", "min" - if reporter.has(_phase, k): - best_epoch = reporter.get_best_epoch(_phase, k, _mode) - # Creates sym links if it's the best result - if best_epoch == iepoch: - p = output_dir / f"{_phase}.{k}.best.pth" - if p.is_symlink() or p.exists(): - p.unlink() - p.symlink_to(f"{iepoch}epoch.pth") - _improved.append(f"{_phase}.{k}") - if len(_improved) == 0: - logging.info("There are no improvements in this epoch") - else: - logging.info( - "The best model has been updated: " + ", ".join(_improved) - ) - - # 6. 
Remove the model files excluding n-best epoch and latest epoch - _removed = [] - # Get the union set of the n-best among multiple criterion - nbests = set().union( - *[ - set(reporter.sort_epochs(ph, k, m)[:keep_nbest_models]) - for ph, k, m in trainer_options.best_model_criterion - if reporter.has(ph, k) - ] - ) - for e in range(1, iepoch): - p = output_dir / f"{e}epoch.pth" - if p.exists() and e not in nbests: - p.unlink() - _removed.append(str(p)) - if len(_removed) != 0: - logging.info("The model files were removed: " + ", ".join(_removed)) - - # 7. If any updating haven't happened, stops the training - if all_steps_are_invalid: - logging.warning( - f"The gradients at all steps are invalid in this epoch. " - f"Something seems wrong. This training was stopped at {iepoch}epoch" - ) - break - - # 8. Check early stopping - if trainer_options.patience is not None: - if reporter.check_early_stopping( - trainer_options.patience, *trainer_options.early_stopping_criterion - ): - break - - else: - logging.info( - f"The training was finished at {trainer_options.max_epoch} epochs " - ) - - if not distributed_option.distributed or distributed_option.dist_rank == 0: - # Generated n-best averaged model - average_nbest_models( - reporter=reporter, - output_dir=output_dir, - best_model_criterion=trainer_options.best_model_criterion, - nbest=keep_nbest_models, - ) - - @classmethod - def train_one_epoch( - cls, - model: torch.nn.Module, - iterator: Iterable[Tuple[List[str], Dict[str, torch.Tensor]]], - optimizers: Sequence[torch.optim.Optimizer], - schedulers: Sequence[Optional[AbsScheduler]], - scaler: Optional[GradScaler], - reporter: SubReporter, - summary_writer: Optional[SummaryWriter], - options: TrainerOptions, - distributed_option: DistributedOption, - ) -> bool: - assert check_argument_types() - - grad_noise = options.grad_noise - accum_grad = options.accum_grad - grad_clip = options.grad_clip - grad_clip_type = options.grad_clip_type - log_interval = options.log_interval - no_forward_run = options.no_forward_run - ngpu = options.ngpu - use_wandb = options.use_wandb - distributed = distributed_option.distributed - - if log_interval is None: - try: - log_interval = max(len(iterator) // 20, 10) - except TypeError: - log_interval = 100 - - model.train() - all_steps_are_invalid = True - # [For distributed] Because iteration counts are not always equals between - # processes, send stop-flag to the other processes if iterator is finished - iterator_stop = torch.tensor(0).to("cuda" if ngpu > 0 else "cpu") - - start_time = time.perf_counter() - for iiter, (_, batch) in enumerate( - reporter.measure_iter_time(iterator, "iter_time"), 1 - ): - assert isinstance(batch, dict), type(batch) - - if distributed: - torch.distributed.all_reduce(iterator_stop, ReduceOp.SUM) - if iterator_stop > 0: - break - - batch = to_device(batch, "cuda" if ngpu > 0 else "cpu") - if no_forward_run: - all_steps_are_invalid = False - continue - - with autocast(scaler is not None): - with reporter.measure_time("forward_time"): - retval = model(**batch) - - # Note(kamo): - # Supporting two patterns for the returned value from the model - # a. 
dict type - if isinstance(retval, dict): - loss = retval["loss"] - stats = retval["stats"] - weight = retval["weight"] - optim_idx = retval.get("optim_idx") - if optim_idx is not None and not isinstance(optim_idx, int): - if not isinstance(optim_idx, torch.Tensor): - raise RuntimeError( - "optim_idx must be int or 1dim torch.Tensor, " - f"but got {type(optim_idx)}" - ) - if optim_idx.dim() >= 2: - raise RuntimeError( - "optim_idx must be int or 1dim torch.Tensor, " - f"but got {optim_idx.dim()}dim tensor" - ) - if optim_idx.dim() == 1: - for v in optim_idx: - if v != optim_idx[0]: - raise RuntimeError( - "optim_idx must be 1dim tensor " - "having same values for all entries" - ) - optim_idx = optim_idx[0].item() - else: - optim_idx = optim_idx.item() - - # b. tuple or list type - else: - loss, stats, weight = retval - optim_idx = None - - stats = {k: v for k, v in stats.items() if v is not None} - if ngpu > 1 or distributed: - # Apply weighted averaging for loss and stats - loss = (loss * weight.type(loss.dtype)).sum() - - # if distributed, this method can also apply all_reduce() - stats, weight = recursive_average(stats, weight, distributed) - - # Now weight is summation over all workers - loss /= weight - if distributed: - # NOTE(kamo): Multiply world_size because DistributedDataParallel - # automatically normalizes the gradient by world_size. - loss *= torch.distributed.get_world_size() - - loss /= accum_grad - - reporter.register(stats, weight) - - with reporter.measure_time("backward_time"): - if scaler is not None: - # Scales loss. Calls backward() on scaled loss - # to create scaled gradients. - # Backward passes under autocast are not recommended. - # Backward ops run in the same dtype autocast chose - # for corresponding forward ops. - scaler.scale(loss).backward() - else: - loss.backward() - - if iiter % accum_grad == 0: - if scaler is not None: - # Unscales the gradients of optimizer's assigned params in-place - for iopt, optimizer in enumerate(optimizers): - if optim_idx is not None and iopt != optim_idx: - continue - scaler.unscale_(optimizer) - - # gradient noise injection - if grad_noise: - add_gradient_noise( - model, - reporter.get_total_count(), - duration=100, - eta=1.0, - scale_factor=0.55, - ) - - # compute the gradient norm to check if it is normal or not - grad_norm = torch.nn.utils.clip_grad_norm_( - model.parameters(), - max_norm=grad_clip, - norm_type=grad_clip_type, - ) - # PyTorch<=1.4, clip_grad_norm_ returns float value - if not isinstance(grad_norm, torch.Tensor): - grad_norm = torch.tensor(grad_norm) - - if not torch.isfinite(grad_norm): - logging.warning( - f"The grad norm is {grad_norm}. Skipping updating the model." - ) - - # Must invoke scaler.update() if unscale_() is used in the iteration - # to avoid the following error: - # RuntimeError: unscale_() has already been called - # on this optimizer since the last update(). - # Note that if the gradient has inf/nan values, - # scaler.step skips optimizer.step(). - if scaler is not None: - for iopt, optimizer in enumerate(optimizers): - if optim_idx is not None and iopt != optim_idx: - continue - scaler.step(optimizer) - scaler.update() - - else: - all_steps_are_invalid = False - with reporter.measure_time("optim_step_time"): - for iopt, (optimizer, scheduler) in enumerate( - zip(optimizers, schedulers) - ): - if optim_idx is not None and iopt != optim_idx: - continue - if scaler is not None: - # scaler.step() first unscales the gradients of - # the optimizer's assigned params. 
- scaler.step(optimizer) - # Updates the scale for next iteration. - scaler.update() - else: - optimizer.step() - if isinstance(scheduler, AbsBatchStepScheduler): - scheduler.step() - optimizer.zero_grad() - - # Register lr and train/load time[sec/step], - # where step refers to accum_grad * mini-batch - reporter.register( - dict( - { - f"optim{i}_lr{j}": pg["lr"] - for i, optimizer in enumerate(optimizers) - for j, pg in enumerate(optimizer.param_groups) - if "lr" in pg - }, - train_time=time.perf_counter() - start_time, - ), - ) - start_time = time.perf_counter() - - # NOTE(kamo): Call log_message() after next() - reporter.next() - if iiter % log_interval == 0: - logging.info(reporter.log_message(-log_interval)) - if summary_writer is not None: - reporter.tensorboard_add_scalar(summary_writer, -log_interval) - if use_wandb: - reporter.wandb_log() - - else: - if distributed: - iterator_stop.fill_(1) - torch.distributed.all_reduce(iterator_stop, ReduceOp.SUM) - - return all_steps_are_invalid - - @classmethod - @torch.no_grad() - def validate_one_epoch( - cls, - model: torch.nn.Module, - iterator: Iterable[Dict[str, torch.Tensor]], - reporter: SubReporter, - options: TrainerOptions, - distributed_option: DistributedOption, - ) -> None: - assert check_argument_types() - ngpu = options.ngpu - no_forward_run = options.no_forward_run - distributed = distributed_option.distributed - - model.eval() - - # [For distributed] Because iteration counts are not always equals between - # processes, send stop-flag to the other processes if iterator is finished - iterator_stop = torch.tensor(0).to("cuda" if ngpu > 0 else "cpu") - for (_, batch) in iterator: - assert isinstance(batch, dict), type(batch) - if distributed: - torch.distributed.all_reduce(iterator_stop, ReduceOp.SUM) - if iterator_stop > 0: - break - - batch = to_device(batch, "cuda" if ngpu > 0 else "cpu") - if no_forward_run: - continue - - retval = model(**batch) - if isinstance(retval, dict): - stats = retval["stats"] - weight = retval["weight"] - else: - _, stats, weight = retval - if ngpu > 1 or distributed: - # Apply weighted averaging for stats. - # if distributed, this method can also apply all_reduce() - stats, weight = recursive_average(stats, weight, distributed) - - reporter.register(stats, weight) - reporter.next() - - else: - if distributed: - iterator_stop.fill_(1) - torch.distributed.all_reduce(iterator_stop, ReduceOp.SUM) - - @classmethod - @torch.no_grad() - def plot_attention( - cls, - model: torch.nn.Module, - output_dir: Optional[Path], - summary_writer: Optional[SummaryWriter], - iterator: Iterable[Tuple[List[str], Dict[str, torch.Tensor]]], - reporter: SubReporter, - options: TrainerOptions, - ) -> None: - assert check_argument_types() - import matplotlib - - ngpu = options.ngpu - no_forward_run = options.no_forward_run - - matplotlib.use("Agg") - import matplotlib.pyplot as plt - from matplotlib.ticker import MaxNLocator - - model.eval() - for ids, batch in iterator: - assert isinstance(batch, dict), type(batch) - assert len(next(iter(batch.values()))) == len(ids), ( - len(next(iter(batch.values()))), - len(ids), - ) - batch = to_device(batch, "cuda" if ngpu > 0 else "cpu") - if no_forward_run: - continue - - # 1. Forwarding model and gathering all attentions - # calculate_all_attentions() uses single gpu only. - att_dict = calculate_all_attentions(model, batch) - - # 2. 
Plot attentions: This part is slow due to matplotlib - for k, att_list in att_dict.items(): - assert len(att_list) == len(ids), (len(att_list), len(ids)) - for id_, att_w in zip(ids, att_list): - - if isinstance(att_w, torch.Tensor): - att_w = att_w.detach().cpu().numpy() - - if att_w.ndim == 2: - att_w = att_w[None] - elif att_w.ndim > 3 or att_w.ndim == 1: - raise RuntimeError(f"Must be 2 or 3 dimension: {att_w.ndim}") - - w, h = plt.figaspect(1.0 / len(att_w)) - fig = plt.Figure(figsize=(w * 1.3, h * 1.3)) - axes = fig.subplots(1, len(att_w)) - if len(att_w) == 1: - axes = [axes] - - for ax, aw in zip(axes, att_w): - ax.imshow(aw.astype(np.float32), aspect="auto") - ax.set_title(f"{k}_{id_}") - ax.set_xlabel("Input") - ax.set_ylabel("Output") - ax.xaxis.set_major_locator(MaxNLocator(integer=True)) - ax.yaxis.set_major_locator(MaxNLocator(integer=True)) - - if output_dir is not None: - p = output_dir / id_ / f"{k}.{reporter.get_epoch()}ep.png" - p.parent.mkdir(parents=True, exist_ok=True) - fig.savefig(p) - - if summary_writer is not None: - summary_writer.add_figure( - f"{k}_{id_}", fig, reporter.get_epoch() - ) - reporter.next() diff --git a/spaces/segments-tobias/conex/espnet2/tts/fastspeech.py b/spaces/segments-tobias/conex/espnet2/tts/fastspeech.py deleted file mode 100644 index 0a2bd9c005f3933a4804c1f8ca570f20bce3bdcc..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/tts/fastspeech.py +++ /dev/null @@ -1,640 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Nagoya University (Tomoki Hayashi) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Fastspeech related modules for ESPnet2.""" - -import logging - -from typing import Dict -from typing import Sequence -from typing import Tuple - -import torch -import torch.nn.functional as F - -from typeguard import check_argument_types - -from espnet.nets.pytorch_backend.conformer.encoder import ( - Encoder as ConformerEncoder, # noqa: H301 -) -from espnet.nets.pytorch_backend.e2e_tts_fastspeech import ( - FeedForwardTransformerLoss as FastSpeechLoss, # NOQA -) -from espnet.nets.pytorch_backend.fastspeech.duration_predictor import DurationPredictor -from espnet.nets.pytorch_backend.fastspeech.length_regulator import LengthRegulator -from espnet.nets.pytorch_backend.nets_utils import make_non_pad_mask -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.tacotron2.decoder import Postnet -from espnet.nets.pytorch_backend.transformer.embedding import PositionalEncoding -from espnet.nets.pytorch_backend.transformer.embedding import ScaledPositionalEncoding -from espnet.nets.pytorch_backend.transformer.encoder import ( - Encoder as TransformerEncoder, # noqa: H301 -) - -from espnet2.torch_utils.device_funcs import force_gatherable -from espnet2.torch_utils.initialize import initialize -from espnet2.tts.abs_tts import AbsTTS -from espnet2.tts.gst.style_encoder import StyleEncoder - - -class FastSpeech(AbsTTS): - """FastSpeech module for end-to-end text-to-speech. - - This is a module of FastSpeech, feed-forward Transformer with duration predictor - described in `FastSpeech: Fast, Robust and Controllable Text to Speech`_, which - does not require any auto-regressive processing during inference, resulting in - fast decoding compared with auto-regressive Transformer. - - .. _`FastSpeech: Fast, Robust and Controllable Text to Speech`: - https://arxiv.org/pdf/1905.09263.pdf - - Args: - idim (int): Dimension of the inputs. 
- odim (int): Dimension of the outputs. - elayers (int, optional): Number of encoder layers. - eunits (int, optional): Number of encoder hidden units. - dlayers (int, optional): Number of decoder layers. - dunits (int, optional): Number of decoder hidden units. - use_scaled_pos_enc (bool, optional): - Whether to use trainable scaled positional encoding. - encoder_normalize_before (bool, optional): - Whether to perform layer normalization before encoder block. - decoder_normalize_before (bool, optional): - Whether to perform layer normalization before decoder block. - encoder_concat_after (bool, optional): Whether to concatenate attention - layer's input and output in encoder. - decoder_concat_after (bool, optional): Whether to concatenate attention - layer's input and output in decoder. - duration_predictor_layers (int, optional): Number of duration predictor layers. - duration_predictor_chans (int, optional): Number of duration predictor channels. - duration_predictor_kernel_size (int, optional): - Kernel size of duration predictor. - spk_embed_dim (int, optional): Number of speaker embedding dimensions. - spk_embed_integration_type: How to integrate speaker embedding. - use_gst (str, optional): Whether to use global style token. - gst_tokens (int, optional): The number of GST embeddings. - gst_heads (int, optional): The number of heads in GST multihead attention. - gst_conv_layers (int, optional): The number of conv layers in GST. - gst_conv_chans_list: (Sequence[int], optional): - List of the number of channels of conv layers in GST. - gst_conv_kernel_size (int, optional): Kernal size of conv layers in GST. - gst_conv_stride (int, optional): Stride size of conv layers in GST. - gst_gru_layers (int, optional): The number of GRU layers in GST. - gst_gru_units (int, optional): The number of GRU units in GST. - reduction_factor (int, optional): Reduction factor. - transformer_enc_dropout_rate (float, optional): - Dropout rate in encoder except attention & positional encoding. - transformer_enc_positional_dropout_rate (float, optional): - Dropout rate after encoder positional encoding. - transformer_enc_attn_dropout_rate (float, optional): - Dropout rate in encoder self-attention module. - transformer_dec_dropout_rate (float, optional): - Dropout rate in decoder except attention & positional encoding. - transformer_dec_positional_dropout_rate (float, optional): - Dropout rate after decoder positional encoding. - transformer_dec_attn_dropout_rate (float, optional): - Dropout rate in deocoder self-attention module. - init_type (str, optional): - How to initialize transformer parameters. - init_enc_alpha (float, optional): - Initial value of alpha in scaled pos encoding of the encoder. - init_dec_alpha (float, optional): - Initial value of alpha in scaled pos encoding of the decoder. - use_masking (bool, optional): - Whether to apply masking for padded part in loss calculation. - use_weighted_masking (bool, optional): - Whether to apply weighted masking in loss calculation. 
- - """ - - def __init__( - self, - # network structure related - idim: int, - odim: int, - adim: int = 384, - aheads: int = 4, - elayers: int = 6, - eunits: int = 1536, - dlayers: int = 6, - dunits: int = 1536, - postnet_layers: int = 5, - postnet_chans: int = 512, - postnet_filts: int = 5, - positionwise_layer_type: str = "conv1d", - positionwise_conv_kernel_size: int = 1, - use_scaled_pos_enc: bool = True, - use_batch_norm: bool = True, - encoder_normalize_before: bool = True, - decoder_normalize_before: bool = True, - encoder_concat_after: bool = False, - decoder_concat_after: bool = False, - duration_predictor_layers: int = 2, - duration_predictor_chans: int = 384, - duration_predictor_kernel_size: int = 3, - reduction_factor: int = 1, - encoder_type: str = "transformer", - decoder_type: str = "transformer", - # only for conformer - conformer_rel_pos_type: str = "legacy", - conformer_pos_enc_layer_type: str = "rel_pos", - conformer_self_attn_layer_type: str = "rel_selfattn", - conformer_activation_type: str = "swish", - use_macaron_style_in_conformer: bool = True, - use_cnn_in_conformer: bool = True, - conformer_enc_kernel_size: int = 7, - conformer_dec_kernel_size: int = 31, - zero_triu: bool = False, - # pretrained spk emb - spk_embed_dim: int = None, - spk_embed_integration_type: str = "add", - # GST - use_gst: bool = False, - gst_tokens: int = 10, - gst_heads: int = 4, - gst_conv_layers: int = 6, - gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), - gst_conv_kernel_size: int = 3, - gst_conv_stride: int = 2, - gst_gru_layers: int = 1, - gst_gru_units: int = 128, - # training related - transformer_enc_dropout_rate: float = 0.1, - transformer_enc_positional_dropout_rate: float = 0.1, - transformer_enc_attn_dropout_rate: float = 0.1, - transformer_dec_dropout_rate: float = 0.1, - transformer_dec_positional_dropout_rate: float = 0.1, - transformer_dec_attn_dropout_rate: float = 0.1, - duration_predictor_dropout_rate: float = 0.1, - postnet_dropout_rate: float = 0.5, - init_type: str = "xavier_uniform", - init_enc_alpha: float = 1.0, - init_dec_alpha: float = 1.0, - use_masking: bool = False, - use_weighted_masking: bool = False, - ): - """Initialize FastSpeech module.""" - assert check_argument_types() - super().__init__() - - # store hyperparameters - self.idim = idim - self.odim = odim - self.eos = idim - 1 - self.reduction_factor = reduction_factor - self.encoder_type = encoder_type - self.decoder_type = decoder_type - self.use_scaled_pos_enc = use_scaled_pos_enc - self.use_gst = use_gst - self.spk_embed_dim = spk_embed_dim - if self.spk_embed_dim is not None: - self.spk_embed_integration_type = spk_embed_integration_type - - # use idx 0 as padding idx - self.padding_idx = 0 - - # get positional encoding class - pos_enc_class = ( - ScaledPositionalEncoding if self.use_scaled_pos_enc else PositionalEncoding - ) - - # check relative positional encoding compatibility - if "conformer" in [encoder_type, decoder_type]: - if conformer_rel_pos_type == "legacy": - if conformer_pos_enc_layer_type == "rel_pos": - conformer_pos_enc_layer_type = "legacy_rel_pos" - logging.warning( - "Fallback to conformer_pos_enc_layer_type = 'legacy_rel_pos' " - "due to the compatibility. If you want to use the new one, " - "please use conformer_pos_enc_layer_type = 'latest'." 
- ) - if conformer_self_attn_layer_type == "rel_selfattn": - conformer_self_attn_layer_type = "legacy_rel_selfattn" - logging.warning( - "Fallback to " - "conformer_self_attn_layer_type = 'legacy_rel_selfattn' " - "due to the compatibility. If you want to use the new one, " - "please use conformer_pos_enc_layer_type = 'latest'." - ) - elif conformer_rel_pos_type == "latest": - assert conformer_pos_enc_layer_type != "legacy_rel_pos" - assert conformer_self_attn_layer_type != "legacy_rel_selfattn" - else: - raise ValueError(f"Unknown rel_pos_type: {conformer_rel_pos_type}") - - # define encoder - encoder_input_layer = torch.nn.Embedding( - num_embeddings=idim, embedding_dim=adim, padding_idx=self.padding_idx - ) - if encoder_type == "transformer": - self.encoder = TransformerEncoder( - idim=idim, - attention_dim=adim, - attention_heads=aheads, - linear_units=eunits, - num_blocks=elayers, - input_layer=encoder_input_layer, - dropout_rate=transformer_enc_dropout_rate, - positional_dropout_rate=transformer_enc_positional_dropout_rate, - attention_dropout_rate=transformer_enc_attn_dropout_rate, - pos_enc_class=pos_enc_class, - normalize_before=encoder_normalize_before, - concat_after=encoder_concat_after, - positionwise_layer_type=positionwise_layer_type, - positionwise_conv_kernel_size=positionwise_conv_kernel_size, - ) - elif encoder_type == "conformer": - self.encoder = ConformerEncoder( - idim=idim, - attention_dim=adim, - attention_heads=aheads, - linear_units=eunits, - num_blocks=elayers, - input_layer=encoder_input_layer, - dropout_rate=transformer_enc_dropout_rate, - positional_dropout_rate=transformer_enc_positional_dropout_rate, - attention_dropout_rate=transformer_enc_attn_dropout_rate, - normalize_before=encoder_normalize_before, - concat_after=encoder_concat_after, - positionwise_layer_type=positionwise_layer_type, - positionwise_conv_kernel_size=positionwise_conv_kernel_size, - macaron_style=use_macaron_style_in_conformer, - pos_enc_layer_type=conformer_pos_enc_layer_type, - selfattention_layer_type=conformer_self_attn_layer_type, - activation_type=conformer_activation_type, - use_cnn_module=use_cnn_in_conformer, - cnn_module_kernel=conformer_enc_kernel_size, - ) - else: - raise ValueError(f"{encoder_type} is not supported.") - - # define GST - if self.use_gst: - self.gst = StyleEncoder( - idim=odim, # the input is mel-spectrogram - gst_tokens=gst_tokens, - gst_token_dim=adim, - gst_heads=gst_heads, - conv_layers=gst_conv_layers, - conv_chans_list=gst_conv_chans_list, - conv_kernel_size=gst_conv_kernel_size, - conv_stride=gst_conv_stride, - gru_layers=gst_gru_layers, - gru_units=gst_gru_units, - ) - - # define additional projection for speaker embedding - if self.spk_embed_dim is not None: - if self.spk_embed_integration_type == "add": - self.projection = torch.nn.Linear(self.spk_embed_dim, adim) - else: - self.projection = torch.nn.Linear(adim + self.spk_embed_dim, adim) - - # define duration predictor - self.duration_predictor = DurationPredictor( - idim=adim, - n_layers=duration_predictor_layers, - n_chans=duration_predictor_chans, - kernel_size=duration_predictor_kernel_size, - dropout_rate=duration_predictor_dropout_rate, - ) - - # define length regulator - self.length_regulator = LengthRegulator() - - # define decoder - # NOTE: we use encoder as decoder - # because fastspeech's decoder is the same as encoder - if decoder_type == "transformer": - self.decoder = TransformerEncoder( - idim=0, - attention_dim=adim, - attention_heads=aheads, - linear_units=dunits, - 
num_blocks=dlayers, - input_layer=None, - dropout_rate=transformer_dec_dropout_rate, - positional_dropout_rate=transformer_dec_positional_dropout_rate, - attention_dropout_rate=transformer_dec_attn_dropout_rate, - pos_enc_class=pos_enc_class, - normalize_before=decoder_normalize_before, - concat_after=decoder_concat_after, - positionwise_layer_type=positionwise_layer_type, - positionwise_conv_kernel_size=positionwise_conv_kernel_size, - ) - elif decoder_type == "conformer": - self.decoder = ConformerEncoder( - idim=0, - attention_dim=adim, - attention_heads=aheads, - linear_units=dunits, - num_blocks=dlayers, - input_layer=None, - dropout_rate=transformer_dec_dropout_rate, - positional_dropout_rate=transformer_dec_positional_dropout_rate, - attention_dropout_rate=transformer_dec_attn_dropout_rate, - normalize_before=decoder_normalize_before, - concat_after=decoder_concat_after, - positionwise_layer_type=positionwise_layer_type, - positionwise_conv_kernel_size=positionwise_conv_kernel_size, - macaron_style=use_macaron_style_in_conformer, - pos_enc_layer_type=conformer_pos_enc_layer_type, - selfattention_layer_type=conformer_self_attn_layer_type, - activation_type=conformer_activation_type, - use_cnn_module=use_cnn_in_conformer, - cnn_module_kernel=conformer_dec_kernel_size, - ) - else: - raise ValueError(f"{decoder_type} is not supported.") - - # define final projection - self.feat_out = torch.nn.Linear(adim, odim * reduction_factor) - - # define postnet - self.postnet = ( - None - if postnet_layers == 0 - else Postnet( - idim=idim, - odim=odim, - n_layers=postnet_layers, - n_chans=postnet_chans, - n_filts=postnet_filts, - use_batch_norm=use_batch_norm, - dropout_rate=postnet_dropout_rate, - ) - ) - - # initialize parameters - self._reset_parameters( - init_type=init_type, - init_enc_alpha=init_enc_alpha, - init_dec_alpha=init_dec_alpha, - ) - - # define criterions - self.criterion = FastSpeechLoss( - use_masking=use_masking, use_weighted_masking=use_weighted_masking - ) - - def _forward( - self, - xs: torch.Tensor, - ilens: torch.Tensor, - ys: torch.Tensor = None, - olens: torch.Tensor = None, - ds: torch.Tensor = None, - spembs: torch.Tensor = None, - is_inference: bool = False, - alpha: float = 1.0, - ) -> Sequence[torch.Tensor]: - # forward encoder - x_masks = self._source_mask(ilens) - hs, _ = self.encoder(xs, x_masks) # (B, Tmax, adim) - - # integrate with GST - if self.use_gst: - style_embs = self.gst(ys) - hs = hs + style_embs.unsqueeze(1) - - # integrate speaker embedding - if self.spk_embed_dim is not None: - hs = self._integrate_with_spk_embed(hs, spembs) - - # forward duration predictor and length regulator - d_masks = make_pad_mask(ilens).to(xs.device) - if is_inference: - d_outs = self.duration_predictor.inference(hs, d_masks) # (B, Tmax) - hs = self.length_regulator(hs, d_outs, alpha) # (B, Lmax, adim) - else: - d_outs = self.duration_predictor(hs, d_masks) # (B, Tmax) - hs = self.length_regulator(hs, ds) # (B, Lmax, adim) - - # forward decoder - if olens is not None and not is_inference: - if self.reduction_factor > 1: - olens_in = olens.new([olen // self.reduction_factor for olen in olens]) - else: - olens_in = olens - h_masks = self._source_mask(olens_in) - else: - h_masks = None - zs, _ = self.decoder(hs, h_masks) # (B, Lmax, adim) - before_outs = self.feat_out(zs).view( - zs.size(0), -1, self.odim - ) # (B, Lmax, odim) - - # postnet -> (B, Lmax//r * r, odim) - if self.postnet is None: - after_outs = before_outs - else: - after_outs = before_outs + self.postnet( - 
before_outs.transpose(1, 2) - ).transpose(1, 2) - - return before_outs, after_outs, d_outs - - def forward( - self, - text: torch.Tensor, - text_lengths: torch.Tensor, - speech: torch.Tensor, - speech_lengths: torch.Tensor, - durations: torch.Tensor, - durations_lengths: torch.Tensor, - spembs: torch.Tensor = None, - ) -> Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor]: - """Calculate forward propagation. - - Args: - text (LongTensor): Batch of padded character ids (B, Tmax). - text_lengths (LongTensor): Batch of lengths of each input (B,). - speech (Tensor): Batch of padded target features (B, Lmax, odim). - speech_lengths (LongTensor): Batch of the lengths of each target (B,). - durations (LongTensor): Batch of padded durations (B, Tmax + 1). - durations_lengths (LongTensor): Batch of duration lengths (B, Tmax + 1). - spembs (Tensor, optional): Batch of speaker embeddings (B, spk_embed_dim). - - Returns: - Tensor: Loss scalar value. - Dict: Statistics to be monitored. - Tensor: Weight value. - - """ - text = text[:, : text_lengths.max()] # for data-parallel - speech = speech[:, : speech_lengths.max()] # for data-parallel - durations = durations[:, : durations_lengths.max()] # for data-parallel - - batch_size = text.size(0) - - # Add eos at the last of sequence - xs = F.pad(text, [0, 1], "constant", self.padding_idx) - for i, l in enumerate(text_lengths): - xs[i, l] = self.eos - ilens = text_lengths + 1 - - ys, ds = speech, durations - olens = speech_lengths - - # forward propagation - before_outs, after_outs, d_outs = self._forward( - xs, ilens, ys, olens, ds, spembs=spembs, is_inference=False - ) - - # modifiy mod part of groundtruth - if self.reduction_factor > 1: - olens = olens.new([olen - olen % self.reduction_factor for olen in olens]) - max_olen = max(olens) - ys = ys[:, :max_olen] - - # calculate loss - if self.postnet is None: - after_outs = None - l1_loss, duration_loss = self.criterion( - after_outs, before_outs, d_outs, ys, ds, ilens, olens - ) - loss = l1_loss + duration_loss - - stats = dict( - l1_loss=l1_loss.item(), - duration_loss=duration_loss.item(), - loss=loss.item(), - ) - - # report extra information - if self.encoder_type == "transformer" and self.use_scaled_pos_enc: - stats.update( - encoder_alpha=self.encoder.embed[-1].alpha.data.item(), - ) - if self.decoder_type == "transformer" and self.use_scaled_pos_enc: - stats.update( - decoder_alpha=self.decoder.embed[-1].alpha.data.item(), - ) - - loss, stats, weight = force_gatherable((loss, stats, batch_size), loss.device) - return loss, stats, weight - - def inference( - self, - text: torch.Tensor, - speech: torch.Tensor = None, - spembs: torch.Tensor = None, - durations: torch.Tensor = None, - alpha: float = 1.0, - use_teacher_forcing: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """Generate the sequence of features given the sequences of characters. - - Args: - text (LongTensor): Input sequence of characters (T,). - speech (Tensor, optional): Feature sequence to extract style (N, idim). - spembs (Tensor, optional): Speaker embedding vector (spk_embed_dim,). - durations (LongTensor, optional): Groundtruth of duration (T + 1,). - alpha (float, optional): Alpha to control the speed. - use_teacher_forcing (bool, optional): Whether to use teacher forcing. - If true, groundtruth of duration, pitch and energy will be used. - - Returns: - Tensor: Output sequence of features (L, odim). - None: Dummy for compatibility. - None: Dummy for compatibility. 
- - """ - x, y = text, speech - spemb, d = spembs, durations - - # add eos at the last of sequence - x = F.pad(x, [0, 1], "constant", self.eos) - - # setup batch axis - ilens = torch.tensor([x.shape[0]], dtype=torch.long, device=x.device) - xs, ys = x.unsqueeze(0), None - if y is not None: - ys = y.unsqueeze(0) - if spemb is not None: - spembs = spemb.unsqueeze(0) - - if use_teacher_forcing: - # use groundtruth of duration, pitch, and energy - ds = d.unsqueeze(0) - _, outs, *_ = self._forward( - xs, - ilens, - ys, - ds=ds, - spembs=spembs, - ) # (1, L, odim) - else: - # inference - _, outs, _ = self._forward( - xs, - ilens, - ys, - spembs=spembs, - is_inference=True, - alpha=alpha, - ) # (1, L, odim) - - return outs[0], None, None - - def _integrate_with_spk_embed( - self, hs: torch.Tensor, spembs: torch.Tensor - ) -> torch.Tensor: - """Integrate speaker embedding with hidden states. - - Args: - hs (Tensor): Batch of hidden state sequences (B, Tmax, adim). - spembs (Tensor): Batch of speaker embeddings (B, spk_embed_dim). - - Returns: - Tensor: Batch of integrated hidden state sequences (B, Tmax, adim). - - """ - if self.spk_embed_integration_type == "add": - # apply projection and then add to hidden states - spembs = self.projection(F.normalize(spembs)) - hs = hs + spembs.unsqueeze(1) - elif self.spk_embed_integration_type == "concat": - # concat hidden states with spk embeds and then apply projection - spembs = F.normalize(spembs).unsqueeze(1).expand(-1, hs.size(1), -1) - hs = self.projection(torch.cat([hs, spembs], dim=-1)) - else: - raise NotImplementedError("support only add or concat.") - - return hs - - def _source_mask(self, ilens: torch.Tensor) -> torch.Tensor: - """Make masks for self-attention. - - Args: - ilens (LongTensor): Batch of lengths (B,). - - Returns: - Tensor: Mask tensor for self-attention. 
- dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - - Examples: - >>> ilens = [5, 3] - >>> self._source_mask(ilens) - tensor([[[1, 1, 1, 1, 1], - [1, 1, 1, 0, 0]]], dtype=torch.uint8) - - """ - x_masks = make_non_pad_mask(ilens).to(next(self.parameters()).device) - return x_masks.unsqueeze(-2) - - def _reset_parameters( - self, init_type: str, init_enc_alpha: float, init_dec_alpha: float - ): - # initialize parameters - if init_type != "pytorch": - initialize(self, init_type) - - # initialize alpha in scaled positional encoding - if self.encoder_type == "transformer" and self.use_scaled_pos_enc: - self.encoder.embed[-1].alpha.data = torch.tensor(init_enc_alpha) - if self.decoder_type == "transformer" and self.use_scaled_pos_enc: - self.decoder.embed[-1].alpha.data = torch.tensor(init_dec_alpha) diff --git a/spaces/sgxz/bingo/src/components/ui/codeblock.tsx b/spaces/sgxz/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
            -
            - {language} -
            - - -
            -
            - - {value} - -
            - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/sharmaanupam/eigenvectors/README.md b/spaces/sharmaanupam/eigenvectors/README.md deleted file mode 100644 index 48cf6411cb0e4c38db9469685b1f6b7e2405a809..0000000000000000000000000000000000000000 --- a/spaces/sharmaanupam/eigenvectors/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Eigenvectors -emoji: 📉 -colorFrom: yellow -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shawndimantha/hackaithon_generate_email/README.md b/spaces/shawndimantha/hackaithon_generate_email/README.md deleted file mode 100644 index 7bdc3666fa94a097b75c5d2a3d73240d28344833..0000000000000000000000000000000000000000 --- a/spaces/shawndimantha/hackaithon_generate_email/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: HackAIthon Email Generation -emoji: ✉️ -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: shawndimantha/test_streamlit1 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -This was created for a HackAIthon workshop during SF AI week on May 16 2023. OpenAI API key will be deprecated after the workshop so please duplicate and use your own API key in the Repository Secrets section. - -Note: Please duplicate and generate an OpenAI API key and add as a repository secret to your new space (under space settings) - -Prompt used for code generation seeding of app.py (ChatGPT Turbo), which may need some iteration especially in engine/model API call and st.secrets assignment (use st.secrets["OPENAI_API_KEY"] instead of a separate function): -"Share Streamlit code that lets users input an email text (include a submit button labeled "Generate response") and output a response to that email based on OpenAI davinci-003 model with a prompt that instructs the model that the user is sharing an email, the user has very little time in their schedule, but wants to be polite in their response, and is expecting a response output, and format the code so it can be uploaded to a HuggingFace space and be run with a user interface for input and display of that output. Include a reference to a Streamlit Secrets for the OpenAI API key, make sure to remove any repeated parameters in the API call and include a max token length of 1024." diff --git a/spaces/shgao/MDT/diffusion/respace.py b/spaces/shgao/MDT/diffusion/respace.py deleted file mode 100644 index 0a2cc0435d1ace54466585db9043b284973d454e..0000000000000000000000000000000000000000 --- a/spaces/shgao/MDT/diffusion/respace.py +++ /dev/null @@ -1,129 +0,0 @@ -# Modified from OpenAI's diffusion repos -# GLIDE: https://github.com/openai/glide-text2im/blob/main/glide_text2im/gaussian_diffusion.py -# ADM: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion -# IDDPM: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py - -import numpy as np -import torch as th - -from .gaussian_diffusion import GaussianDiffusion - - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. 
- For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. - """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim") :]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - - -class SpacedDiffusion(GaussianDiffusion): - """ - A diffusion process which can skip steps in a base diffusion process. - :param use_timesteps: a collection (sequence or set) of timesteps from the - original diffusion process to retain. - :param kwargs: the kwargs to create the base diffusion process. 
- """ - - def __init__(self, use_timesteps, **kwargs): - self.use_timesteps = set(use_timesteps) - self.timestep_map = [] - self.original_num_steps = len(kwargs["betas"]) - - base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa - last_alpha_cumprod = 1.0 - new_betas = [] - for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod): - if i in self.use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - self.timestep_map.append(i) - kwargs["betas"] = np.array(new_betas) - super().__init__(**kwargs) - - def p_mean_variance( - self, model, *args, **kwargs - ): # pylint: disable=signature-differs - return super().p_mean_variance(self._wrap_model(model), *args, **kwargs) - - def training_losses( - self, model, *args, **kwargs - ): # pylint: disable=signature-differs - return super().training_losses(self._wrap_model(model), *args, **kwargs) - - def condition_mean(self, cond_fn, *args, **kwargs): - return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs) - - def condition_score(self, cond_fn, *args, **kwargs): - return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs) - - def _wrap_model(self, model): - if isinstance(model, _WrappedModel): - return model - return _WrappedModel( - model, self.timestep_map, self.original_num_steps - ) - - def _scale_timesteps(self, t): - # Scaling is done by the wrapped model. - return t - - -class _WrappedModel: - def __init__(self, model, timestep_map, original_num_steps): - self.model = model - self.timestep_map = timestep_map - # self.rescale_timesteps = rescale_timesteps - self.original_num_steps = original_num_steps - - def __call__(self, x, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - # if self.rescale_timesteps: - # new_ts = new_ts.float() * (1000.0 / self.original_num_steps) - return self.model(x, new_ts, **kwargs) diff --git a/spaces/shikunl/prismer/prismer/download_checkpoints.py b/spaces/shikunl/prismer/prismer/download_checkpoints.py deleted file mode 100644 index 302d9c055f1ea879a4b87d4b8e194213d51c186d..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/download_checkpoints.py +++ /dev/null @@ -1,124 +0,0 @@ -from huggingface_hub import hf_hub_download, hf_hub_url, get_hf_file_metadata -from huggingface_hub.utils import disable_progress_bars -from pathlib import Path -from rich.progress import Progress -from fire import Fire -from typing import Union, List - -_EXPERTS = [ - "10_model.pth", - "Unified_learned_OCIM_RS200_6x+2x.pth", - "dpt_hybrid-midas-501f0c75.pt", - "icdar2015_hourglass88.pth", - "model_final_e0c58e.pkl", - "model_final_f07440.pkl", - "scannet.pt", -] - -_MODELS = [ - "vqa_prismer_base", - "vqa_prismer_large", - "vqa_prismerz_base", - "vqa_prismerz_large", - "caption_prismerz_base", - "caption_prismerz_large", - "caption_prismer_base", - "caption_prismer_large", - "pretrain_prismer_base", - "pretrain_prismer_large", - "pretrain_prismerz_base", - "pretrain_prismerz_large", -] - -_REPO_ID = "shikunl/prismer" - - -def download_checkpoints( - download_experts: bool = False, - download_models: Union[bool, List] = False, - hide_tqdm: bool = False, - force_redownload: bool = False, -): - if hide_tqdm: - disable_progress_bars() - # Convert to list and check for invalid names - download_experts = _EXPERTS if download_experts else [] - if download_models: - # only download single model - if isinstance(download_models, str): 
- download_models = [download_models] - - assert all([m in _MODELS for m in download_models]), f"Invalid model name. Must be one of {_MODELS}" - download_models = _MODELS if isinstance(download_models, bool) else download_models - else: - download_models = [] - - # Check if files already exist - if not force_redownload: - download_experts = [e for e in download_experts if not Path(f"./experts/expert_weights/{e}").exists()] - download_models = [m for m in download_models if not Path(f"{m}/pytorch_model.bin").exists()] - - assert download_experts or download_models, "Nothing to download." - - with Progress() as progress: - # Calculate total download size - progress.print("[blue]Calculating download size...") - total_size = 0 - for expert in download_experts: - url = hf_hub_url( - filename=expert, - repo_id=_REPO_ID, - subfolder="expert_weights" - ) - total_size += get_hf_file_metadata(url).size - - for model in download_models: - url = hf_hub_url( - filename=f"pytorch_model.bin", - repo_id=_REPO_ID, - subfolder=model - ) - total_size += get_hf_file_metadata(url).size - progress.print(f"[blue]Total download size: {total_size / 1e9:.2f} GB") - - # Download files - total_files = len(download_experts) + len(download_models) - total_task = progress.add_task(f"[green]Downloading files", total=total_files) - if download_experts: - expert_task = progress.add_task( - f"[green]Downloading experts...", total=len(download_experts) - ) - out_folder = Path("experts/expert_weights") - out_folder.mkdir(parents=True, exist_ok=True) - for expert in download_experts: - path = Path(hf_hub_download( - filename=expert, - repo_id=_REPO_ID, - subfolder="expert_weights" - )) - path.resolve().rename(out_folder/path.name) - path.unlink() - progress.advance(expert_task) - progress.advance(total_task) - - if download_models: - model_task = progress.add_task( - f"[green]Downloading models...", total=len(download_models) - ) - for model in download_models: - path = Path(hf_hub_download( - filename=f"pytorch_model.bin", - repo_id=_REPO_ID, - subfolder=model - )) - out_folder = Path("./logging")/model - out_folder.mkdir(parents=True, exist_ok=True) - path.resolve().rename(out_folder/"pytorch_model.bin") - path.unlink() - progress.advance(model_task) - progress.advance(total_task) - progress.print("[green]Done!") - - -if __name__ == "__main__": - Fire(download_checkpoints) diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/objects365.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/objects365.py deleted file mode 100644 index aac523c4807488dc586e3f1c89d7d1331427b7d4..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/objects365.py +++ /dev/null @@ -1,391 +0,0 @@ -from detectron2.data.datasets.register_coco import register_coco_instances -import os - -categories = [ -{'id': 164, 'name': 'cutting/chopping board'} , -{'id': 49, 'name': 'tie'} , -{'id': 306, 'name': 'crosswalk sign'} , -{'id': 145, 'name': 'gun'} , -{'id': 14, 'name': 'street lights'} , -{'id': 223, 'name': 'bar soap'} , -{'id': 74, 'name': 'wild bird'} , -{'id': 219, 'name': 'ice cream'} , -{'id': 37, 'name': 'stool'} , -{'id': 25, 'name': 'storage box'} , -{'id': 153, 'name': 'giraffe'} , -{'id': 52, 'name': 'pen/pencil'} , -{'id': 61, 'name': 'high heels'} , -{'id': 340, 'name': 'mangosteen'} , -{'id': 22, 'name': 'bracelet'} , -{'id': 155, 'name': 'piano'} , -{'id': 162, 'name': 'vent'} , -{'id': 75, 'name': 'laptop'} , 
-{'id': 236, 'name': 'toaster'} , -{'id': 231, 'name': 'fire truck'} , -{'id': 42, 'name': 'basket'} , -{'id': 150, 'name': 'zebra'} , -{'id': 124, 'name': 'head phone'} , -{'id': 90, 'name': 'sheep'} , -{'id': 322, 'name': 'steak'} , -{'id': 39, 'name': 'couch'} , -{'id': 209, 'name': 'toothbrush'} , -{'id': 59, 'name': 'bicycle'} , -{'id': 336, 'name': 'red cabbage'} , -{'id': 228, 'name': 'golf ball'} , -{'id': 120, 'name': 'tomato'} , -{'id': 132, 'name': 'computer box'} , -{'id': 8, 'name': 'cup'} , -{'id': 183, 'name': 'basketball'} , -{'id': 298, 'name': 'butterfly'} , -{'id': 250, 'name': 'garlic'} , -{'id': 12, 'name': 'desk'} , -{'id': 141, 'name': 'microwave'} , -{'id': 171, 'name': 'strawberry'} , -{'id': 200, 'name': 'kettle'} , -{'id': 63, 'name': 'van'} , -{'id': 300, 'name': 'cheese'} , -{'id': 215, 'name': 'marker'} , -{'id': 100, 'name': 'blackboard/whiteboard'} , -{'id': 186, 'name': 'printer'} , -{'id': 333, 'name': 'bread/bun'} , -{'id': 243, 'name': 'penguin'} , -{'id': 364, 'name': 'iron'} , -{'id': 180, 'name': 'ladder'} , -{'id': 34, 'name': 'flag'} , -{'id': 78, 'name': 'cell phone'} , -{'id': 97, 'name': 'fan'} , -{'id': 224, 'name': 'scale'} , -{'id': 151, 'name': 'duck'} , -{'id': 319, 'name': 'flute'} , -{'id': 156, 'name': 'stop sign'} , -{'id': 290, 'name': 'rickshaw'} , -{'id': 128, 'name': 'sailboat'} , -{'id': 165, 'name': 'tennis racket'} , -{'id': 241, 'name': 'cigar'} , -{'id': 101, 'name': 'balloon'} , -{'id': 308, 'name': 'hair drier'} , -{'id': 167, 'name': 'skating and skiing shoes'} , -{'id': 237, 'name': 'helicopter'} , -{'id': 65, 'name': 'sink'} , -{'id': 129, 'name': 'tangerine'} , -{'id': 330, 'name': 'crab'} , -{'id': 320, 'name': 'measuring cup'} , -{'id': 260, 'name': 'fishing rod'} , -{'id': 346, 'name': 'saw'} , -{'id': 216, 'name': 'ship'} , -{'id': 46, 'name': 'coffee table'} , -{'id': 194, 'name': 'facial mask'} , -{'id': 281, 'name': 'stapler'} , -{'id': 118, 'name': 'refrigerator'} , -{'id': 40, 'name': 'belt'} , -{'id': 349, 'name': 'starfish'} , -{'id': 87, 'name': 'hanger'} , -{'id': 116, 'name': 'baseball glove'} , -{'id': 261, 'name': 'cherry'} , -{'id': 334, 'name': 'baozi'} , -{'id': 267, 'name': 'screwdriver'} , -{'id': 158, 'name': 'converter'} , -{'id': 335, 'name': 'lion'} , -{'id': 170, 'name': 'baseball'} , -{'id': 111, 'name': 'skis'} , -{'id': 136, 'name': 'broccoli'} , -{'id': 342, 'name': 'eraser'} , -{'id': 337, 'name': 'polar bear'} , -{'id': 139, 'name': 'shovel'} , -{'id': 193, 'name': 'extension cord'} , -{'id': 284, 'name': 'goldfish'} , -{'id': 174, 'name': 'pepper'} , -{'id': 138, 'name': 'stroller'} , -{'id': 328, 'name': 'yak'} , -{'id': 83, 'name': 'clock'} , -{'id': 235, 'name': 'tricycle'} , -{'id': 248, 'name': 'parking meter'} , -{'id': 274, 'name': 'trophy'} , -{'id': 324, 'name': 'binoculars'} , -{'id': 51, 'name': 'traffic light'} , -{'id': 314, 'name': 'donkey'} , -{'id': 45, 'name': 'barrel/bucket'} , -{'id': 292, 'name': 'pomegranate'} , -{'id': 13, 'name': 'handbag'} , -{'id': 262, 'name': 'tablet'} , -{'id': 68, 'name': 'apple'} , -{'id': 226, 'name': 'cabbage'} , -{'id': 23, 'name': 'flower'} , -{'id': 58, 'name': 'faucet'} , -{'id': 206, 'name': 'tong'} , -{'id': 291, 'name': 'trombone'} , -{'id': 160, 'name': 'carrot'} , -{'id': 172, 'name': 'bow tie'} , -{'id': 122, 'name': 'tent'} , -{'id': 163, 'name': 'cookies'} , -{'id': 115, 'name': 'remote'} , -{'id': 175, 'name': 'coffee machine'} , -{'id': 238, 'name': 'green beans'} , -{'id': 233, 'name': 'cello'} , -{'id': 28, 'name': 'wine 
glass'} , -{'id': 295, 'name': 'mushroom'} , -{'id': 344, 'name': 'scallop'} , -{'id': 125, 'name': 'lantern'} , -{'id': 123, 'name': 'shampoo/shower gel'} , -{'id': 285, 'name': 'meat balls'} , -{'id': 266, 'name': 'key'} , -{'id': 296, 'name': 'calculator'} , -{'id': 168, 'name': 'scissors'} , -{'id': 103, 'name': 'cymbal'} , -{'id': 6, 'name': 'bottle'} , -{'id': 264, 'name': 'nuts'} , -{'id': 234, 'name': 'notepaper'} , -{'id': 211, 'name': 'mango'} , -{'id': 287, 'name': 'toothpaste'} , -{'id': 196, 'name': 'chopsticks'} , -{'id': 140, 'name': 'baseball bat'} , -{'id': 244, 'name': 'hurdle'} , -{'id': 195, 'name': 'tennis ball'} , -{'id': 144, 'name': 'surveillance camera'} , -{'id': 271, 'name': 'volleyball'} , -{'id': 94, 'name': 'keyboard'} , -{'id': 339, 'name': 'seal'} , -{'id': 11, 'name': 'picture/frame'} , -{'id': 348, 'name': 'okra'} , -{'id': 191, 'name': 'sausage'} , -{'id': 166, 'name': 'candy'} , -{'id': 62, 'name': 'ring'} , -{'id': 311, 'name': 'dolphin'} , -{'id': 273, 'name': 'eggplant'} , -{'id': 84, 'name': 'drum'} , -{'id': 143, 'name': 'surfboard'} , -{'id': 288, 'name': 'antelope'} , -{'id': 204, 'name': 'clutch'} , -{'id': 207, 'name': 'slide'} , -{'id': 43, 'name': 'towel/napkin'} , -{'id': 352, 'name': 'durian'} , -{'id': 276, 'name': 'board eraser'} , -{'id': 315, 'name': 'electric drill'} , -{'id': 312, 'name': 'sushi'} , -{'id': 198, 'name': 'pie'} , -{'id': 106, 'name': 'pickup truck'} , -{'id': 176, 'name': 'bathtub'} , -{'id': 26, 'name': 'vase'} , -{'id': 133, 'name': 'elephant'} , -{'id': 256, 'name': 'sandwich'} , -{'id': 327, 'name': 'noodles'} , -{'id': 10, 'name': 'glasses'} , -{'id': 109, 'name': 'airplane'} , -{'id': 95, 'name': 'tripod'} , -{'id': 247, 'name': 'CD'} , -{'id': 121, 'name': 'machinery vehicle'} , -{'id': 365, 'name': 'flashlight'} , -{'id': 53, 'name': 'microphone'} , -{'id': 270, 'name': 'pliers'} , -{'id': 362, 'name': 'chainsaw'} , -{'id': 259, 'name': 'bear'} , -{'id': 197, 'name': 'electronic stove and gas stove'} , -{'id': 89, 'name': 'pot/pan'} , -{'id': 220, 'name': 'tape'} , -{'id': 338, 'name': 'lighter'} , -{'id': 177, 'name': 'snowboard'} , -{'id': 214, 'name': 'violin'} , -{'id': 217, 'name': 'chicken'} , -{'id': 2, 'name': 'sneakers'} , -{'id': 161, 'name': 'washing machine'} , -{'id': 131, 'name': 'kite'} , -{'id': 354, 'name': 'rabbit'} , -{'id': 86, 'name': 'bus'} , -{'id': 275, 'name': 'dates'} , -{'id': 282, 'name': 'camel'} , -{'id': 88, 'name': 'nightstand'} , -{'id': 179, 'name': 'grapes'} , -{'id': 229, 'name': 'pine apple'} , -{'id': 56, 'name': 'necklace'} , -{'id': 18, 'name': 'leather shoes'} , -{'id': 358, 'name': 'hoverboard'} , -{'id': 345, 'name': 'pencil case'} , -{'id': 359, 'name': 'pasta'} , -{'id': 157, 'name': 'radiator'} , -{'id': 201, 'name': 'hamburger'} , -{'id': 268, 'name': 'globe'} , -{'id': 332, 'name': 'barbell'} , -{'id': 329, 'name': 'mop'} , -{'id': 252, 'name': 'horn'} , -{'id': 350, 'name': 'eagle'} , -{'id': 169, 'name': 'folder'} , -{'id': 137, 'name': 'toilet'} , -{'id': 5, 'name': 'lamp'} , -{'id': 27, 'name': 'bench'} , -{'id': 249, 'name': 'swan'} , -{'id': 76, 'name': 'knife'} , -{'id': 341, 'name': 'comb'} , -{'id': 64, 'name': 'watch'} , -{'id': 105, 'name': 'telephone'} , -{'id': 3, 'name': 'chair'} , -{'id': 33, 'name': 'boat'} , -{'id': 107, 'name': 'orange'} , -{'id': 60, 'name': 'bread'} , -{'id': 147, 'name': 'cat'} , -{'id': 135, 'name': 'gas stove'} , -{'id': 307, 'name': 'papaya'} , -{'id': 227, 'name': 'router/modem'} , -{'id': 357, 'name': 'asparagus'} , 
-{'id': 73, 'name': 'motorcycle'} , -{'id': 77, 'name': 'traffic sign'} , -{'id': 67, 'name': 'fish'} , -{'id': 326, 'name': 'radish'} , -{'id': 213, 'name': 'egg'} , -{'id': 203, 'name': 'cucumber'} , -{'id': 17, 'name': 'helmet'} , -{'id': 110, 'name': 'luggage'} , -{'id': 80, 'name': 'truck'} , -{'id': 199, 'name': 'frisbee'} , -{'id': 232, 'name': 'peach'} , -{'id': 1, 'name': 'person'} , -{'id': 29, 'name': 'boots'} , -{'id': 310, 'name': 'chips'} , -{'id': 142, 'name': 'skateboard'} , -{'id': 44, 'name': 'slippers'} , -{'id': 4, 'name': 'hat'} , -{'id': 178, 'name': 'suitcase'} , -{'id': 24, 'name': 'tv'} , -{'id': 119, 'name': 'train'} , -{'id': 82, 'name': 'power outlet'} , -{'id': 245, 'name': 'swing'} , -{'id': 15, 'name': 'book'} , -{'id': 294, 'name': 'jellyfish'} , -{'id': 192, 'name': 'fire extinguisher'} , -{'id': 212, 'name': 'deer'} , -{'id': 181, 'name': 'pear'} , -{'id': 347, 'name': 'table tennis paddle'} , -{'id': 113, 'name': 'trolley'} , -{'id': 91, 'name': 'guitar'} , -{'id': 202, 'name': 'golf club'} , -{'id': 221, 'name': 'wheelchair'} , -{'id': 254, 'name': 'saxophone'} , -{'id': 117, 'name': 'paper towel'} , -{'id': 303, 'name': 'race car'} , -{'id': 240, 'name': 'carriage'} , -{'id': 246, 'name': 'radio'} , -{'id': 318, 'name': 'parrot'} , -{'id': 251, 'name': 'french fries'} , -{'id': 98, 'name': 'dog'} , -{'id': 112, 'name': 'soccer'} , -{'id': 355, 'name': 'french horn'} , -{'id': 79, 'name': 'paddle'} , -{'id': 283, 'name': 'lettuce'} , -{'id': 9, 'name': 'car'} , -{'id': 258, 'name': 'kiwi fruit'} , -{'id': 325, 'name': 'llama'} , -{'id': 187, 'name': 'billiards'} , -{'id': 210, 'name': 'facial cleanser'} , -{'id': 81, 'name': 'cow'} , -{'id': 331, 'name': 'microscope'} , -{'id': 148, 'name': 'lemon'} , -{'id': 302, 'name': 'pomelo'} , -{'id': 85, 'name': 'fork'} , -{'id': 154, 'name': 'pumpkin'} , -{'id': 289, 'name': 'shrimp'} , -{'id': 71, 'name': 'teddy bear'} , -{'id': 184, 'name': 'potato'} , -{'id': 102, 'name': 'air conditioner'} , -{'id': 208, 'name': 'hot dog'} , -{'id': 222, 'name': 'plum'} , -{'id': 316, 'name': 'spring rolls'} , -{'id': 230, 'name': 'crane'} , -{'id': 149, 'name': 'liquid soap'} , -{'id': 55, 'name': 'canned'} , -{'id': 35, 'name': 'speaker'} , -{'id': 108, 'name': 'banana'} , -{'id': 297, 'name': 'treadmill'} , -{'id': 99, 'name': 'spoon'} , -{'id': 104, 'name': 'mouse'} , -{'id': 182, 'name': 'american football'} , -{'id': 299, 'name': 'egg tart'} , -{'id': 127, 'name': 'cleaning products'} , -{'id': 313, 'name': 'urinal'} , -{'id': 286, 'name': 'medal'} , -{'id': 239, 'name': 'brush'} , -{'id': 96, 'name': 'hockey'} , -{'id': 279, 'name': 'dumbbell'} , -{'id': 32, 'name': 'umbrella'} , -{'id': 272, 'name': 'hammer'} , -{'id': 16, 'name': 'plate'} , -{'id': 21, 'name': 'potted plant'} , -{'id': 242, 'name': 'earphone'} , -{'id': 70, 'name': 'candle'} , -{'id': 185, 'name': 'paint brush'} , -{'id': 48, 'name': 'toy'} , -{'id': 130, 'name': 'pizza'} , -{'id': 255, 'name': 'trumpet'} , -{'id': 361, 'name': 'hotair balloon'} , -{'id': 188, 'name': 'fire hydrant'} , -{'id': 50, 'name': 'bed'} , -{'id': 253, 'name': 'avocado'} , -{'id': 293, 'name': 'coconut'} , -{'id': 257, 'name': 'cue'} , -{'id': 280, 'name': 'hamimelon'} , -{'id': 66, 'name': 'horse'} , -{'id': 173, 'name': 'pigeon'} , -{'id': 190, 'name': 'projector'} , -{'id': 69, 'name': 'camera'} , -{'id': 30, 'name': 'bowl'} , -{'id': 269, 'name': 'broom'} , -{'id': 343, 'name': 'pitaya'} , -{'id': 305, 'name': 'tuba'} , -{'id': 309, 'name': 'green onion'} , -{'id': 363, 
'name': 'lobster'} , -{'id': 225, 'name': 'watermelon'} , -{'id': 47, 'name': 'suv'} , -{'id': 31, 'name': 'dining table'} , -{'id': 54, 'name': 'sandals'} , -{'id': 351, 'name': 'monkey'} , -{'id': 218, 'name': 'onion'} , -{'id': 36, 'name': 'trash bin/can'} , -{'id': 20, 'name': 'glove'} , -{'id': 277, 'name': 'rice'} , -{'id': 152, 'name': 'sports car'} , -{'id': 360, 'name': 'target'} , -{'id': 205, 'name': 'blender'} , -{'id': 19, 'name': 'pillow'} , -{'id': 72, 'name': 'cake'} , -{'id': 93, 'name': 'tea pot'} , -{'id': 353, 'name': 'game board'} , -{'id': 38, 'name': 'backpack'} , -{'id': 356, 'name': 'ambulance'} , -{'id': 146, 'name': 'life saver'} , -{'id': 189, 'name': 'goose'} , -{'id': 278, 'name': 'tape measure/ruler'} , -{'id': 92, 'name': 'traffic cone'} , -{'id': 134, 'name': 'toiletries'} , -{'id': 114, 'name': 'oven'} , -{'id': 317, 'name': 'tortoise/turtle'} , -{'id': 265, 'name': 'corn'} , -{'id': 126, 'name': 'donut'} , -{'id': 57, 'name': 'mirror'} , -{'id': 7, 'name': 'cabinet/shelf'} , -{'id': 263, 'name': 'green vegetables'} , -{'id': 159, 'name': 'tissue '} , -{'id': 321, 'name': 'shark'} , -{'id': 301, 'name': 'pig'} , -{'id': 41, 'name': 'carpet'} , -{'id': 304, 'name': 'rice cooker'} , -{'id': 323, 'name': 'poker card'} , -] - -def _get_builtin_metadata(): - id_to_name = {x['id']: x['name'] for x in categories} - thing_dataset_id_to_contiguous_id = {i + 1: i for i in range(365)} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS_OBJECTS365 = { - "objects365_train": ("objects365/train", "objects365/annotations/objects365_train.json"), - "objects365_val": ("objects365/val", "objects365/annotations/objects365_val.json"), -} - -for key, (image_root, json_file) in _PREDEFINED_SPLITS_OBJECTS365.items(): - register_coco_instances( - key, - _get_builtin_metadata(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/input_pipeline.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/input_pipeline.py deleted file mode 100644 index e9a9bc3a8aa15316aa88c3947120883be869331e..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/model/tf/input_pipeline.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Feature pre-processing input pipeline for AlphaFold.""" - -from alphafold.model.tf import data_transforms -from alphafold.model.tf import shape_placeholders -import tensorflow.compat.v1 as tf -import tree - -# Pylint gets confused by the curry1 decorator because it changes the number -# of arguments to the function. 
-# pylint:disable=no-value-for-parameter - - -NUM_RES = shape_placeholders.NUM_RES -NUM_MSA_SEQ = shape_placeholders.NUM_MSA_SEQ -NUM_EXTRA_SEQ = shape_placeholders.NUM_EXTRA_SEQ -NUM_TEMPLATES = shape_placeholders.NUM_TEMPLATES - - -def nonensembled_map_fns(data_config): - """Input pipeline functions which are not ensembled.""" - common_cfg = data_config.common - - map_fns = [ - data_transforms.correct_msa_restypes, - data_transforms.add_distillation_flag(False), - data_transforms.cast_64bit_ints, - data_transforms.squeeze_features, - # Keep to not disrupt RNG. - data_transforms.randomly_replace_msa_with_unknown(0.0), - data_transforms.make_seq_mask, - data_transforms.make_msa_mask, - # Compute the HHblits profile if it's not set. This has to be run before - # sampling the MSA. - data_transforms.make_hhblits_profile, - data_transforms.make_random_crop_to_size_seed, - ] - if common_cfg.use_templates: - map_fns.extend([ - data_transforms.fix_templates_aatype, - data_transforms.make_template_mask, - data_transforms.make_pseudo_beta('template_') - ]) - map_fns.extend([ - data_transforms.make_atom14_masks, - ]) - - return map_fns - - -def ensembled_map_fns(data_config): - """Input pipeline functions that can be ensembled and averaged.""" - common_cfg = data_config.common - eval_cfg = data_config.eval - - map_fns = [] - - if common_cfg.reduce_msa_clusters_by_max_templates: - pad_msa_clusters = eval_cfg.max_msa_clusters - eval_cfg.max_templates - else: - pad_msa_clusters = eval_cfg.max_msa_clusters - - max_msa_clusters = pad_msa_clusters - max_extra_msa = common_cfg.max_extra_msa - - map_fns.append( - data_transforms.sample_msa( - max_msa_clusters, - keep_extra=True)) - - if 'masked_msa' in common_cfg: - # Masked MSA should come *before* MSA clustering so that - # the clustering and full MSA profile do not leak information about - # the masked locations and secret corrupted locations. - map_fns.append( - data_transforms.make_masked_msa(common_cfg.masked_msa, - eval_cfg.masked_msa_replace_fraction)) - - if common_cfg.msa_cluster_features: - map_fns.append(data_transforms.nearest_neighbor_clusters()) - map_fns.append(data_transforms.summarize_clusters()) - - # Crop after creating the cluster profiles. 
- if max_extra_msa: - map_fns.append(data_transforms.crop_extra_msa(max_extra_msa)) - else: - map_fns.append(data_transforms.delete_extra_msa) - - map_fns.append(data_transforms.make_msa_feat()) - - crop_feats = dict(eval_cfg.feat) - - if eval_cfg.fixed_size: - map_fns.append(data_transforms.select_feat(list(crop_feats))) - map_fns.append(data_transforms.random_crop_to_size( - eval_cfg.crop_size, - eval_cfg.max_templates, - crop_feats, - eval_cfg.subsample_templates)) - map_fns.append(data_transforms.make_fixed_size( - crop_feats, - pad_msa_clusters, - common_cfg.max_extra_msa, - eval_cfg.crop_size, - eval_cfg.max_templates)) - else: - map_fns.append(data_transforms.crop_templates(eval_cfg.max_templates)) - - return map_fns - - -def process_tensors_from_config(tensors, data_config): - """Apply filters and maps to an existing dataset, based on the config.""" - - def wrap_ensemble_fn(data, i): - """Function to be mapped over the ensemble dimension.""" - d = data.copy() - fns = ensembled_map_fns(data_config) - fn = compose(fns) - d['ensemble_index'] = i - return fn(d) - - eval_cfg = data_config.eval - tensors = compose( - nonensembled_map_fns( - data_config))( - tensors) - - tensors_0 = wrap_ensemble_fn(tensors, tf.constant(0)) - num_ensemble = eval_cfg.num_ensemble - if data_config.common.resample_msa_in_recycling: - # Separate batch per ensembling & recycling step. - num_ensemble *= data_config.common.num_recycle + 1 - - if isinstance(num_ensemble, tf.Tensor) or num_ensemble > 1: - fn_output_signature = tree.map_structure( - tf.TensorSpec.from_tensor, tensors_0) - tensors = tf.map_fn( - lambda x: wrap_ensemble_fn(tensors, x), - tf.range(num_ensemble), - parallel_iterations=1, - fn_output_signature=fn_output_signature) - else: - tensors = tree.map_structure(lambda x: x[None], - tensors_0) - return tensors - - -@data_transforms.curry1 -def compose(x, fs): - for f in fs: - x = f(x) - return x diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Traffic Racer 2.0 Hack APK Mod with Unlimited Money.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Traffic Racer 2.0 Hack APK Mod with Unlimited Money.md deleted file mode 100644 index a22a33f13b3861cda6f962d80ec6cc9ba256d52b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Traffic Racer 2.0 Hack APK Mod with Unlimited Money.md +++ /dev/null @@ -1,61 +0,0 @@ -
            -

            Traffic Racer Hack APK Mod Unlimited Money 2.0 Download

            -

            Do you love racing games? Do you want to experience the thrill of driving fast cars on busy roads? If yes, then you should try Traffic Racer, one of the most popular and addictive racing games on Android. But wait, there's more! You can also download the Traffic Racer Hack APK Mod, which gives you unlimited money and unlocks all the cars and features in the game. In this article, we will tell you everything you need to know about Traffic Racer and its hack mod, including how to download and install it on your device. Let's get started!

            -

            traffic racer hack apk mod unlimited money 2.0 download


            Download === https://ssurll.com/2uNU8G



            -

            What is Traffic Racer?

            -

Traffic Racer is a 3D racing game developed by Soner Kara, a Turkish game developer. The game was released in 2012 and has been downloaded over 100 million times on the Google Play Store. The gameplay is simple but addictive: you drive your car through heavy traffic, avoid crashes, and earn cash. You can use the cash to buy new cars or upgrade your existing ones, and you can customize your car with different colors, wheels, and stickers. The game has five modes: Endless, Time Trial, Free Ride, Police Chase, and Two-Way. It also has 40 different cars to choose from, ranging from sedans to sports cars to trucks. Realistic graphics, smooth controls, and convincing sound effects make you feel like you are really driving on the road.

            -

            Features of Traffic Racer

            -

            Some of the features of Traffic Racer are:

            -
              -
            • Stunning 3D graphics and realistic physics
            • -
            • Smooth and easy controls
            • -
            • 40 different cars to choose from
            • -
            • 5 detailed environments: suburb, desert, snowy, rainy, and city night
            • -
            • 5 game modes: Endless, Time Trial, Free Ride, Police Chase, and Two-Way
            • -
            • Rich traffic types: trucks, buses, vans, pickups, SUVs, etc.
            • -
            • Basic customization: paint, wheels, and stickers
            • -
            • Online leaderboards and achievements
            • -
            -

            How to play Traffic Racer

            -

            The gameplay of Traffic Racer is simple and intuitive. You just have to tilt your device to steer your car left or right. You can also use the touch buttons on the screen to accelerate or brake. Your goal is to drive as fast as possible without crashing into other vehicles or obstacles. The faster you drive, the more points you get. You can also earn extra points by driving in the opposite direction in Two-Way mode or by overtaking other cars closely in Endless mode. You can also use the nitro boost to speed up your car temporarily. The game ends when you crash or run out of time in Time Trial mode.

            -

            What is Traffic Racer Hack APK Mod?

            -

            Traffic Racer Hack APK Mod is a modified version of the original game that gives you unlimited money and unlocks all the cars and features in the game. With this mod, you don't have to worry about earning cash or buying new cars. You can just enjoy the game with all its options available. You can also customize your car as much as you want without spending any money.

            -

            Benefits of Traffic Racer Hack APK Mod

            -

            Some of the benefits of Traffic Racer Hack APK Mod are:

            -
              -
            • You get unlimited money to buy and upgrade any car you want
            • You get access to all the cars in the game, including the premium ones
            • You get access to all the features in the game

              How to download and install Traffic Racer Hack APK Mod

              -

              If you want to download and install Traffic Racer Hack APK Mod on your Android device, you need to follow these simple steps:

              -


              -
                -
              1. First, uninstall the original game from your device if you have it installed.
              2. Then, download the Traffic Racer Hack APK Mod file from a trusted source, such as the download link near the top of this article.
              3. Next, enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
              4. After that, locate the downloaded file on your device and tap on it to start the installation process.
              5. Finally, wait for the installation to finish and then launch the game from your app drawer or home screen.
              -

              Congratulations! You have successfully installed Traffic Racer Hack APK Mod on your device. Now you can enjoy the game with unlimited money and all the cars and features unlocked.
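
If you are comfortable working from a computer, you can also sideload the file over USB instead of tapping through a file manager. The snippet below is only a minimal sketch, a thin Python wrapper around `adb install`: it assumes the Android platform-tools are installed, USB debugging is enabled on the phone, and the file name `traffic-racer-mod.apk` is just a placeholder for whatever you actually downloaded.

```python
import subprocess
from pathlib import Path

# Placeholder file name -- use the path of the APK you actually downloaded.
apk_path = Path("traffic-racer-mod.apk")

# Requires adb (Android platform-tools) on your PATH and USB debugging
# enabled on the phone. The "-r" flag replaces an already-installed copy.
subprocess.run(["adb", "install", "-r", str(apk_path)], check=True)
print(f"Installed {apk_path.name}")
```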

              -

              Tips and tricks for Traffic Racer

              -

              Traffic Racer is a fun and challenging game that requires skill and strategy. Here are some tips and tricks that can help you improve your performance and score higher in the game:

              -

              Choose the right car

              -

              The game has 40 different cars to choose from, each with its own speed, handling, braking, and nitro. You should choose a car that suits your play style and preference. For example, if you like fast cars, you should go for the sports cars or the supercars. If you like more control, you should go for the sedans or the hatchbacks. If you like more durability, you should go for the trucks or the SUVs. You can also test drive each car before buying it to see how it feels on the road.

              -

              Upgrade your car

              -

              You can upgrade your car with the money you earn in the game. You can upgrade four aspects of your car: speed, handling, braking, and nitro. Upgrading your speed will make your car faster and more responsive. Upgrading your handling will make your car easier to steer and maneuver. Upgrading your braking will make your car stop faster and safer. Upgrading your nitro will make your car boost longer and more powerful. You should upgrade your car regularly to keep up with the increasing difficulty of the game.

              -

              Use the nitro wisely

              -

              The nitro is a useful feature that can help you speed up your car temporarily. However, you should use it wisely and strategically. You should use it when you have a clear road ahead of you or when you need to overtake other cars quickly. You should avoid using it when you are near other vehicles or obstacles, as it can cause collisions and crashes. You should also save some nitro for emergencies or critical situations. You can refill your nitro by driving fast or by driving in the opposite direction in Two-Way mode.

              -

              Avoid collisions

              -

              The most important thing in Traffic Racer is to avoid collisions with other vehicles or obstacles. Collisions will slow down your car, damage it, and end your game. You should try to drive as smoothly as possible and avoid sudden movements or changes of direction. You should also keep an eye on the traffic ahead of you and anticipate their movements. You should also use the brake button when necessary to avoid crashes or reduce their impact.

              -

              Conclusion

              -

              Traffic Racer is a fun and addictive racing game that will keep you entertained for hours. You can drive fast cars on busy roads, avoid crashes, and earn cash. You can also download the Traffic Racer Hack APK Mod, which gives you unlimited money and unlocks all the cars and features in the game. You can also follow some tips and tricks to improve your performance and score higher in the game. So what are you waiting for? Download Traffic Racer Hack APK Mod now and enjoy the ultimate racing experience!

              -

              FAQs

              -

              Here are some frequently asked questions about Traffic Racer Hack APK Mod:

              -
                -
              • Is Traffic Racer Hack APK Mod safe to use?
                Yes, it is safe to use as long as you download it from a trusted source. However, you should be careful not to download any fake or malicious files that may harm your device or steal your data.
              • Is Traffic Racer Hack APK Mod compatible with my device?
                Traffic Racer Hack APK Mod is compatible with most Android devices that run Android 4.1 or higher. However, some older or low-end devices may experience lag or performance issues due to the high-quality graphics and physics of the game.
              • Can I play Traffic Racer Hack APK Mod online?
                No, Traffic Racer Hack APK Mod is an offline game that does not require an internet connection to play. However, you can still access the online leaderboards and achievements if you connect to the internet.
              • Can I update Traffic Racer Hack APK Mod?
                Yes, you can update Traffic Racer Hack APK Mod whenever there is a new version available. However, you should always download the latest version from the same source as before to avoid any compatibility or security issues.
              • Can I use Traffic Racer Hack APK Mod with other mods or cheats?
                No, Traffic Racer Hack APK Mod is not compatible with other mods or cheats. You should only use it on its own to avoid any conflicts or errors.

              -
              -
              \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FS19 APK Download - Play the Latest Version of Farming Simulator 19 on Android and iOS.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FS19 APK Download - Play the Latest Version of Farming Simulator 19 on Android and iOS.md deleted file mode 100644 index 817d29c11eab0586e689b7702adcd357c19275f8..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FS19 APK Download - Play the Latest Version of Farming Simulator 19 on Android and iOS.md +++ /dev/null @@ -1,125 +0,0 @@ -
              -

              What is FS 19 APK and why you should download it

              -

              If you ever dreamed of becoming a farmer but never had the chance to do it in real life, Farming Simulator 19 (FS 19) is the game for you. It is a realistic simulation game that lets you experience the joys and challenges of running your own farm. You can grow crops, raise animals, drive over 300 authentic vehicles and machines, and explore two huge open-world maps based on American and European environments.

              -

              FS 19 is available on various platforms, including Windows, Mac, PlayStation, Xbox, Nintendo Switch, Android, and iOS. However, if you want to play it on your mobile device, you may not find it on the official app stores. That's because the game is not officially supported by Google Play or Apple App Store. But don't worry, there is a way to get it on your phone or tablet. You can download FS 19 APK, which is an application package file that contains all the data and code needed to install and run the game.

              -
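
Because an APK is just a ZIP archive with a specific layout, you can peek inside a downloaded file before installing it. Here is a minimal sketch using only the Python standard library; `fs19.apk` is a placeholder file name, not the name of any official release.

```python
import zipfile

# Placeholder name -- point this at the file you actually downloaded.
apk_name = "fs19.apk"

with zipfile.ZipFile(apk_name) as apk:
    names = apk.namelist()
    # A genuine APK is a ZIP archive containing at least a manifest,
    # compiled code (classes.dex) and packaged resources.
    print("entries:", len(names))
    print("has AndroidManifest.xml:", "AndroidManifest.xml" in names)
    print("has classes.dex:", any(n.startswith("classes") and n.endswith(".dex") for n in names))
```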

              fs 19 apk download


              Download Zip ··· https://ssurll.com/2uO0eh



              -

              Downloading FS 19 APK has some benefits over getting it from the app stores. For example:

              -
                -
              • You can get it for free, without paying any fees or subscriptions.
              • You can get it faster, without waiting for updates or approvals.
              • You can get it more easily, without creating an account or logging in.
              • You can get it more flexibly, without worrying about compatibility or region restrictions.
              -

              However, downloading FS 19 APK also has some risks that you should be aware of. For example:

              -
                -
              • You may get a fake or malicious file that can harm your device or steal your data.
              • You may get a corrupted or outdated file that can cause errors or crashes.
              • You may get a modified or hacked file that can ruin your gaming experience or get you banned.
              • You may violate some terms of service or policies that can result in legal issues or penalties.
              -

              Therefore, you should be careful when downloading FS 19 APK and follow some precautions to avoid any problems. Here are some tips:

              -
                -
              • Only download FS 19 APK from reputable sources that have positive reviews and ratings.
              • Only download FS 19 APK from secure websites that have HTTPS encryption and SSL certificates.
              • Only download FS 19 APK from verified links that have clear descriptions and screenshots.
              • Only download FS 19 APK from trusted developers that have official licenses and permissions.
              (A small checksum-verification sketch in Python follows this list.)
              -
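
One concrete way to act on these tips: if the download page publishes a SHA-256 checksum, compare it with the file you actually received before installing anything. This is a minimal sketch, assuming Python is available on your computer; both the file name and the checksum below are placeholders.

```python
import hashlib

# Both values are placeholders: substitute the real file name and the
# checksum published by the site you downloaded from (if it provides one).
apk_path = "fs19.apk"
expected_sha256 = "paste-the-published-checksum-here"

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

print("sha256:", digest.hexdigest())
print("matches published value:", digest.hexdigest() == expected_sha256.strip().lower())
```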

              How to download FS 19 APK for Android

              -

              If you have an Android device, you can follow these steps to download and install FS 19 APK:

              -
                -
              1. Allow unknown apps on your device. Go to your device settings and look for the security or privacy option. Then, enable the option to allow installation of apps from unknown sources. This will let you install apps that are not from the Play Store.
              2. Install a file manager app. Go to the Play Store and search for a file manager app that can help you locate and manage your files. Some popular ones are ES File Explorer, File Manager, and Files by Google. Download and install the app of your choice.
              3. Download the APK file from a reputable source. Use your browser to search for a website that offers FS 19 APK download. Make sure the website is secure, verified, and trusted. Then, click on the download link and wait for the file to be downloaded.
              4. Transfer the APK file to your device (optional). If you downloaded the APK file on your computer, you need to transfer it to your device. You can use a USB cable, a Bluetooth connection, or a cloud service to do this.
              5. Install the APK file and enjoy the game. Use your file manager app to locate the APK file on your device. Tap on it and follow the instructions to install it. You may need to grant some permissions or accept some terms and conditions. Once the installation is complete, you can launch the game and start playing. (If you have a computer handy, a quick way to confirm the install is sketched after this list.)
              -
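
To double-check that step 5 really installed a package, you can list the packages on the phone from a computer. This is only a sketch: it assumes adb (Android platform-tools) is on your PATH, USB debugging is enabled, and that the package name contains the string "farm", which is a guess rather than a confirmed identifier.

```python
import subprocess

# Assumes adb is installed and the phone is connected with USB debugging on.
out = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
).stdout

# "farm" is only a guess at part of the package name -- adjust as needed.
matches = [line for line in out.splitlines() if "farm" in line.lower()]
print("\n".join(matches) if matches else "no matching package found")
```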

              How to download FS 19 APK for iOS

              -

              If you have an iOS device, you can follow these steps to download and install FS 19 APK:

              -


              -
                -
              1. Install an emulator app that can run Android apps on your iOS device. Go to the App Store and search for one; some popular ones are iAndroid, Appetize.io, and iEMU. Download and install the app of your choice.
              2. Download the APK file from a reputable source. Use your browser to search for a website that offers FS 19 APK download. Make sure the website is secure, verified, and trusted. Then, click on the download link and wait for the file to be downloaded.
              3. Transfer the APK file to your emulator app. Use your file manager app to locate the APK file on your device. Tap on it and choose to open it with your emulator app. This will import the file into the emulator.
              4. Install the APK file and enjoy the game. Use your emulator app to locate the APK file on its interface. Tap on it and follow the instructions to install it. You may need to grant some permissions or accept some terms and conditions. Once the installation is complete, you can launch the game and start playing.
              -

              Tips and tricks for playing FS 19 on mobile

              -

              Now that you have downloaded FS 19 APK on your device, you may want to know some tips and tricks to make the most out of your gaming experience. Here are some suggestions:

              -
                -
              • Customize your controls and settings. Go to the game menu and adjust your controls and settings according to your preferences. You can change the sensitivity, layout, sound, graphics, language, and more.
              • Use the map and GPS to navigate. The game has two large maps that you can explore: Ravenport (American) and Felsbrunn (European). To find your way around, you can use the map and GPS features that show you your location, objectives, landmarks, roads, fields, shops, etc.
              • Manage your crops and animals efficiently. The game has over 70 crops that you can grow, harvest, sell, or store. You can also raise animals such as cows, pigs, sheep, horses, chickens, etc. To succeed in farming, you need to take care of your crops and animals by fertilizing, watering, feeding, cleaning, etc.
              • Expand your farm and buy new vehicles and equipment. The game has over 300 vehicles and machines that you can use for various farming tasks. You can also buy new land, buildings, silos, sheds, etc. To earn money for these purchases, you need to sell your products or complete contracts.
              • Join online multiplayer and modding communities. The game has an online multiplayer mode that allows you to play with up to 16 players on a server. You can cooperate or compete with other farmers from around the world. You can also access modding communities that offer custom content such as maps, vehicles, equipment, etc.
              -

              Conclusion

              -

              In conclusion, FS 19 is a fun and realistic simulation game that lets you experience farming like never before. You can download FS 19 APK on your mobile device by following some simple steps and precautions. You can also improve your gaming experience by following some tips and tricks that we shared in this article.

              -

              If you are ready to become a farmer, download FS 19 APK today and start your farming adventure.

              -

              Do you have any questions about FS 19 APK download? Here are some FAQs that may help you:

              | Question | Answer |
              | --- | --- |
              | Is FS 19 APK safe to download? | FS 19 APK is safe to download if you get it from a reputable source that has no viruses, malware, or spyware. You should also scan the file with an antivirus app before installing it. |
              | Is FS 19 APK legal to download? | FS 19 APK is legal to download if you have purchased the game from the official website or platform. You should not download FS 19 APK if you have not paid for the game or if you are violating any terms of service or policies. |
              | Is FS 19 APK compatible with my device? | FS 19 APK is compatible with most Android and iOS devices that have enough storage space and meet the system requirements. You can check the minimum and recommended specifications on the game's website or in the APK file's description. |
              | How can I update FS 19 APK? | You can update FS 19 APK by downloading the latest version of the file from the same source that you got it from. You should also uninstall the previous version of the game before installing the new one. |
              | How can I uninstall FS 19 APK? | You can uninstall FS 19 APK by going to your device settings and looking for the app manager or application option. Then, find and select FS 19 and tap on the uninstall button. You should also delete the APK file from your device. |
              -

              I hope you enjoyed this article and learned something new. If you have any feedback or suggestions, please let me know in the comments section below. Thank you for reading and happy farming!

              -
              -
              \ No newline at end of file diff --git a/spaces/simsa/Fashion-Image-Captioning-using-BLIP-2/model/blip2_peft/README.md b/spaces/simsa/Fashion-Image-Captioning-using-BLIP-2/model/blip2_peft/README.md deleted file mode 100644 index 78ee7dce84ce97f966faaa52b1c1dced955d00b2..0000000000000000000000000000000000000000 --- a/spaces/simsa/Fashion-Image-Captioning-using-BLIP-2/model/blip2_peft/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -library_name: peft ---- -## Training procedure - - -The following `bitsandbytes` quantization config was used during training: -- load_in_8bit: True -- load_in_4bit: False -- llm_int8_threshold: 6.0 -- llm_int8_skip_modules: None -- llm_int8_enable_fp32_cpu_offload: False -- llm_int8_has_fp16_weight: False -- bnb_4bit_quant_type: fp4 -- bnb_4bit_use_double_quant: False -- bnb_4bit_compute_dtype: float32 -### Framework versions - - -- PEFT 0.4.0.dev0 diff --git a/spaces/sinz2002/ChuanhuChatGPT/modules/llama_func.py b/spaces/sinz2002/ChuanhuChatGPT/modules/llama_func.py deleted file mode 100644 index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000 --- a/spaces/sinz2002/ChuanhuChatGPT/modules/llama_func.py +++ /dev/null @@ -1,166 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - for elem in text_list: - documents.append(Document(elem)) - continue - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - except Exception as e: - logging.error(f"Error loading file: {filename}") - 
pass - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # 由于一个依赖的愚蠢的设计,这里必须要有一个API KEY - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - prompt_helper = PromptHelper( - max_input_size=max_input_size, - num_output=num_outputs, - max_chunk_overlap=max_chunk_overlap, - embedding_limit=embedding_limit, - chunk_size_limit=600, - separator=separator, - ) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - if local_embedding: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, - chunk_size_limit=chunk_size_limit, - embed_model=embed_model, - ) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! 
", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/smartinezbragado/reddit-topic-modelling/README.md b/spaces/smartinezbragado/reddit-topic-modelling/README.md deleted file mode 100644 index 876e8b6dd603995d0edafa9891d732cf8babfd47..0000000000000000000000000000000000000000 --- a/spaces/smartinezbragado/reddit-topic-modelling/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Reddit topic modelling app -emoji: ⚗️ -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 2.9.1 -python_version: 3.10.4 -app_file: app.py -models: - - bertopic -datasets: - - emotion -license: mit -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/README.md deleted file mode 100644 index 253c8af2516580bbc33e8ecc8efe4f7a526d7142..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/README.md +++ /dev/null @@ -1,376 +0,0 @@ -# wav2vec 2.0 - -wav2vec 2.0 learns speech representations on unlabeled data as described in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](https://arxiv.org/abs/2006.11477). - -We learned speech representations in multiple languages as well in [Unsupervised Cross-lingual Representation Learning for Speech Recognition (Conneau et al., 2020)](https://arxiv.org/abs/2006.13979). - -We also combined wav2vec 2.0 with self-training in [Self-training and Pre-training are Complementary for Speech Recognition (Xu et al., 2020)](https://arxiv.org/abs/2010.11430). 
- -We combined speech data from multiple domains in [Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training (Hsu, et al., 2021)](https://arxiv.org/abs/2104.01027) - -## Pre-trained models - -Model | Finetuning split | Dataset | Model -|---|---|---|--- -Wav2Vec 2.0 Base | No finetuning | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) -Wav2Vec 2.0 Base | 10 minutes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_10m.pt) -Wav2Vec 2.0 Base | 100 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_100h.pt) -Wav2Vec 2.0 Base | 960 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt) -Wav2Vec 2.0 Large | No finetuning | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/libri960_big.pt) -Wav2Vec 2.0 Large | 10 minutes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_10m.pt) -Wav2Vec 2.0 Large | 100 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_100h.pt) -Wav2Vec 2.0 Large | 960 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_960h.pt) -Wav2Vec 2.0 Large (LV-60)* | No finetuning | [Libri-Light](https://github.com/facebookresearch/libri-light) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 10 minutes | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_10m_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 100 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_100h_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 960 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec2_vox_960h_new.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 10 minutes | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_10m_pl.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 100 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_100h_pl.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 960 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_960h_pl.pt) -Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) ** | No finetuning | [Libri-Light](https://github.com/facebookresearch/libri-light) + [CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv.pt) -Wav2Vec 2.0 
Large (LV-60 + CV + SWBD + FSH) ** | 960 hours Librispeech | [Libri-Light](https://github.com/facebookresearch/libri-light) + [CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv_ftls960.pt) -Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) ** | 300 hours Switchboard | [Libri-Light](https://github.com/facebookresearch/libri-light) + [CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv_ftsb300.pt) - -\* updated (Oct. 24, 2020)\ -** updated (Jul. 8, 2021) - -We also release multilingual pre-trained wav2vec 2.0 (XLSR) models: - -Model | Architecture | Hours | Languages | Datasets | Model -|---|---|---|---|---|--- -XLSR-53 | Large | 56k | 53 | MLS, CommonVoice, BABEL | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr_53_56k.pt) - -The XLSR model uses the following datasets for multilingual pretraining: - -* **[MLS: Multilingual LibriSpeech](https://indico2.conference4me.psnc.pl/event/35/contributions/3585/attachments/1060/1101/Wed-2-6-10.pdf)** (8 languages, 50.7k hours): *Dutch, English, French, German, Italian, Polish, Portuguese, Spanish* - -* **[CommonVoice](https://commonvoice.mozilla.org/en/languages)** (36 languages, 3.6k hours): *Arabic, Basque, Breton, Chinese (CN), Chinese (HK), Chinese (TW), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakh-Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Mongolian, Persian, Portuguese, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Welsh* (see also [finetuning splits]([https://dl.fbaipublicfiles.com/cpc_audio/common_voices_splits.tar.gz]) from [this paper](https://arxiv.org/abs/2002.02848)). - -* **[Babel](https://catalog.ldc.upenn.edu/byyear)** (17 languages, 1.7k hours): *Assamese, Bengali, Cantonese, Cebuano, Georgian, Haitian, Kazakh, Kurmanji, Lao, Pashto, Swahili, Tagalog, Tamil, Tok, Turkish, Vietnamese, Zulu* - - -## Training a new model with the CLI tools - -Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate file 10 to 30 seconds in length) - -### Prepare training data manifest: - -First, install the `soundfile` library: -```shell script -pip install soundfile -``` - -Next, run: - -```shell script -$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext $ext --valid-percent $valid -``` - -$ext should be set to flac, wav, or whatever format your dataset happens to use that soundfile can read. - -$valid should be set to some reasonable percentage (like 0.01) of training data to use for validation. -To use a pre-defined validation set (like dev-other from librispeech), set to it 0 and then overwrite valid.tsv with a -separately pre-processed manifest file. 
- -### Train a wav2vec 2.0 base model: - -This configuration was used for the base model trained on the Librispeech dataset in the wav2vec 2.0 paper - -Note that the input is expected to be single channel, sampled at 16 kHz - -```shell script -$ fairseq-hydra-train \ - task.data=/path/to/data \ - --config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \ - --config-name wav2vec2_base_librispeech -``` - -Note: you can simulate 64 GPUs by using k GPUs and adding command line parameters (before `--config-dir`) -`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 64/k - -### Train a wav2vec 2.0 large model: - -This configuration was used for the large model trained on the Libri-light dataset in the wav2vec 2.0 paper - -```shell script -$ fairseq-hydra-train \ - task.data=/path/to/data \ - --config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \ - --config-name wav2vec2_large_librivox -``` - -Note: you can simulate 128 GPUs by using k GPUs and adding command line parameters (before `--config-dir`) -`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 128/k - -### Fine-tune a pre-trained model with CTC: - -Fine-tuning a model requires parallel audio and labels file, as well as a vocabulary file in fairseq format. -A letter vocabulary can be downloaded [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt). -An example [script](libri_labels.py) that generates labels for the Librispeech dataset from the tsv file produced by wav2vec_manifest.py can be used as follows: - -```shell script -split=train -$ python libri_labels.py /path/to/tsv --output-dir /output/dir --output-name $split -``` - -Fine-tuning on 100h of Librispeech with letter targets: -```shell script -$ fairseq-hydra-train \ - distributed_training.distributed_port=$PORT \ - task.data=/path/to/data \ - model.w2v_path=/path/to/model.pt \ - --config-dir /path/to/fairseq-py/examples/wav2vec/config/finetuning \ - --config-name base_100h -``` - -There are other config files in the config/finetuning directory that can be used to fine-tune on other splits. -You can specify the right config via the `--config-name` parameter. - -Note: you can simulate 24 GPUs by using k GPUs and adding command line parameters (before `--config-dir`) -`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 24/k - -Decoding with a language model during training requires flashlight [python bindings](https://github.com/facebookresearch/flashlight/tree/master/bindings/python) (previously called [wav2letter](https://github.com/facebookresearch/wav2letter). -If you want to use a language model, add `+criterion.wer_args='[/path/to/kenlm, /path/to/lexicon, 2, -1]'` to the command line. - -### Evaluating a CTC model: - -Evaluating a CTC model with a language model requires [flashlight python bindings](https://github.com/facebookresearch/flashlight/tree/master/bindings/python) (previously called [wav2letter](https://github.com/facebookresearch/wav2letter) to be installed. - -Fairseq transformer language model used in the wav2vec 2.0 paper can be obtained from the [wav2letter model repository](https://github.com/facebookresearch/wav2letter/tree/master/recipes/sota/2019). -Be sure to upper-case the language model vocab after downloading it. - -Letter dictionary for pre-trained models can be found [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt). 
- -Next, run the evaluation command: - -```shell script -$subset=dev_other -python examples/speech_recognition/infer.py /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw --task audio_finetuning \ ---nbest 1 --path /path/to/model --gen-subset $subset --results-path /path/to/save/results/for/sclite --w2l-decoder kenlm \ ---lm-model /path/to/kenlm.bin --lm-weight 2 --word-score -1 --sil-weight 0 --criterion ctc --labels ltr --max-tokens 4000000 \ ---post-process letter -``` - -To get raw numbers, use --w2l-decoder viterbi and omit the lexicon. To use the transformer language model, use --w2l-decoder fairseqlm. - -## Use wav2vec 2.0 with 🤗Transformers: - -Wav2Vec2 is also available in the [🤗Transformers library](https://github.com/huggingface/transformers) since version 4.4. - -Pretrained Models can be found on the [hub](https://huggingface.co/models?filter=wav2vec2) -and documentation can be found [here](https://huggingface.co/transformers/master/model_doc/wav2vec2.html). - -Usage example: - -```python -# !pip install transformers -# !pip install datasets -import soundfile as sf -import torch -from datasets import load_dataset -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor - -# load pretrained model -processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") -model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") - - -librispeech_samples_ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") - -# load audio -audio_input, sample_rate = sf.read(librispeech_samples_ds[0]["file"]) - -# pad input values and return pt tensor -input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values - -# INFERENCE - -# retrieve logits & take argmax -logits = model(input_values).logits -predicted_ids = torch.argmax(logits, dim=-1) - -# transcribe -transcription = processor.decode(predicted_ids[0]) - -# FINE-TUNE - -target_transcription = "A MAN SAID TO THE UNIVERSE I EXIST" - -# encode labels -with processor.as_target_processor(): - labels = processor(target_transcription, return_tensors="pt").input_ids - -# compute loss by passing labels -loss = model(input_values, labels=labels).loss -loss.backward() -``` - -# wav2vec - -Example to train a wav2vec model as described in [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](https://arxiv.org/abs/1904.05862). 
- -## Pre-trained models - -Description | Dataset | Model ----|---|--- -Wav2Vec large | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_large.pt) - -#### Example usage: -```python -import torch -import fairseq - -cp_path = '/path/to/wav2vec.pt' -model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([cp_path]) -model = model[0] -model.eval() - -wav_input_16khz = torch.randn(1,10000) -z = model.feature_extractor(wav_input_16khz) -c = model.feature_aggregator(z) -``` - -## Training a new model with the CLI tools - -Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate files 10 to 30 seconds in length) - -### Prepare training data manifest: - -``` -$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext wav -``` - -### Train a wav2vec model: - -``` -$ python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \ ---arch wav2vec --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \ ---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \ ---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \ ---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \ ---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test -``` - -### Run wav2vec2 pre-training on Google Cloud TPUs: - -Wav2Vec2 is now supported on TPUs! It's currently pre-training only. - -#### Using hydra on a v3-8: - -``` -$ OMP_NUM_THREADS=1 fairseq-hydra-train \ - task.data=/manifest/path \ - --config-dir /PATH/TO/FAIRSEQ/examples/wav2vec/config/pretraining \ - --config-name wav2vec2_large_librivox_tpu.yaml -``` - -#### Using command line arguments on a v3-8: -Note: Commandline arguments way of execution has a [known-problem](https://github.com/pytorch/fairseq/issues/3741) currently. 
- -``` -$ OMP_NUM_THREADS=1 python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \ ---arch wav2vec2 --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \ ---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \ ---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \ ---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \ ---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test \ ---tpu --distributed-world-size 8 --num-batch-buckets 3 --enable-padding \ ---encoder-layerdrop 0 --mask-channel-prob 0.1 -``` - -#### Using hydra on a pod slice (v3-N with N > 8): - -``` -$ OMP_NUM_THREADS=1 fairseq-hydra-train \ - task.data=/manifest/path \ - --config-dir /PATH/TO/FAIRSEQ/examples/wav2vec/config/pretraining \ - --config-name wav2vec2_large_librivox_tpu-pod.yaml # edit distributed-world-size accordingly -``` - -#### Using command line arguments on a pod slice (v3-N with N > 8): -Note: Commandline arguments way of execution has a [known-problem](https://github.com/pytorch/fairseq/issues/3741) currently. - -``` -$ python -m torch_xla.distributed.xla_dist \ - --tpu ${TPUNAME} --conda-env=torch-xla-${TORCH_XLA_VERSION} --env OMP_NUM_THREADS=1 \ - -- \ -python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \ ---arch wav2vec2 --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \ ---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \ ---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \ ---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \ ---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test \ ---tpu --distributed-world-size ${WORLD_SIZE} --num-batch-buckets 3 --enable-padding \ ---encoder-layerdrop 0 --mask-channel-prob 0.1 -``` - -### Extract embeddings from the downstream task data: - -``` -$ PYTHONPATH=/path/to/fairseq python examples/wav2vec/wav2vec_featurize.py --input /path/to/task/waves --output /path/to/output \ ---model /model/path/checkpoint_best.pt --split train valid test -``` - -# vq-wav2vec - -Example to train a vq-wav2vec model as described in [vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations (Baevski et al., 2019)](https://arxiv.org/abs/1910.05453). - -These models are also used in [Effectiveness of self-supervised pre-training for speech recognition (Baevski et al., 2019)](https://arxiv.org/abs/1911.03912). 
- -## Pre-trained models - -Description | Dataset | Model ----|---|--- -vq-wav2vec Gumbel | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/vq-wav2vec.pt) -vq-wav2vec K-means | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/vq-wav2vec_kmeans.pt) -Roberta on K-means codes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/bert_kmeans.tar) - -#### Example usage: -```python -import torch -import fairseq - -cp = torch.load('/path/to/vq-wav2vec.pt') -model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([cp]) -model = model[0] -model.eval() - -wav_input_16khz = torch.randn(1,10000) -z = model.feature_extractor(wav_input_16khz) -_, idxs = model.vector_quantizer.forward_idx(z) -print(idxs.shape) # output: torch.Size([1, 60, 2]), 60 timesteps with 2 indexes corresponding to 2 groups in the model -``` - -## Training a new model with the CLI tools - -Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate file 10 to 30 seconds in length) - -### Prepare training data manifest: - -``` -$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext wav -``` - -### Train a gumbel vq-wav2vec model: - -``` -$ python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 \ ---save-interval 1 --no-epoch-checkpoints --arch wav2vec --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 \ ---optimizer adam --lr 1e-05 --lr-scheduler cosine \ ---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)] \ ---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \ ---activation gelu --offset auto --skip-connections-agg --residual-scale 0.5 \ ---log-keys ["prob_perplexity","code_perplexity","temp"] --vq-type gumbel --vq-groups 2 --vq-depth 2 \ ---combine-groups --vq-vars 320 --vq-temp (2,0.5,0.999995) --prediction-steps 12 --warmup-updates 1000 \ ---warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 --max-sample-size 150000 \ ---max-tokens 300000 --cross-sample-negatives 0 --update-freq 1 --seed 2 --skip-invalid-size-inputs-valid-test -``` - -for k-means training, set vq-type with "kmeans" and add --loss-weights [1] argument. Pre-trained models were trained on 16 GPUs. - -### Tokenize audio data (e.g. for BERT training): - -``` -$ PYTHONPATH=/path/to/fairseq python examples/wav2vec/vq-wav2vec_featurize.py --data-dir /manifest/path --output-dir /path/to/output \ ---checkpoint /model/path/checkpoint_best.pt --split train valid test --extension tsv -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/wav2vec_criterion.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/wav2vec_criterion.py deleted file mode 100644 index e04786cc3b75517cefd06303f98f8536f9279311..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/wav2vec_criterion.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass, field -from typing import List, Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.logging.meters import safe_round -from fairseq.utils import is_xla_tensor - - -@dataclass -class Wav2VecCriterionConfig(FairseqDataclass): - infonce: bool = field( - default=False, - metadata={ - "help": "if set, uses cross entropy instead of binary cross entropy (i.e. InfoNCE loss)" - }, - ) - loss_weights: Optional[List[float]] = field( - default=None, - metadata={"help": "weights for additional loss terms (not first one)"}, - ) - log_keys: List[str] = field( - default_factory=lambda: [], - metadata={"help": "output keys to log"}, - ) - -@register_criterion("wav2vec", dataclass=Wav2VecCriterionConfig) -class Wav2vecCriterion(FairseqCriterion): - def __init__(self, task, infonce=False, loss_weights=None, log_keys=None): - super().__init__(task) - self.infonce = infonce - self.loss_weights = loss_weights - self.log_keys = [] if log_keys is None else log_keys - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - logits = model.get_logits(net_output).float() - target = model.get_targets(sample, net_output) - self.xla = is_xla_tensor(logits) - - # XXX: handle weights on xla. - weights = None - if hasattr(model, "get_target_weights") and not self.infonce: - weights = model.get_target_weights(target, net_output) - if torch.is_tensor(weights): - weights = weights.float() - - losses = [] - - reduction = "none" if ((not reduce) or self.xla) else "sum" - if self.infonce: - loss = F.cross_entropy(logits, target, reduction=reduction) - else: - loss = F.binary_cross_entropy_with_logits( - logits, target.float(), weights, reduction=reduction - ) - - if self.xla: - # tpu-comment: since dynamic shapes lead to recompilations on xla, - # we don't shrink tensors using mask_indices. - # Instead, we use mask indices to adjust loss. 
- mi = ( - sample['net_input']['mask_indices'] - .transpose(0, 1) # logits are transposed in `model.get_logits` - .reshape(logits.size(0)) - ) - loss = (loss * mi).sum() if reduce else (loss * mi) - - if 'sample_size' in sample: - sample_size = sample['sample_size'] - elif 'mask_indices' in sample['net_input']: - sample_size = sample['net_input']['mask_indices'].sum() - else: - sample_size = target.numel() if self.infonce else target.long().sum().item() - losses.append(loss.detach().clone()) - - if self.loss_weights is not None: - assert hasattr(model, "get_extra_losses") - extra_losses = model.get_extra_losses(net_output) - if torch.is_tensor(extra_losses): - extra_losses = [extra_losses] - if len(self.loss_weights) == 1 and len(extra_losses) != 1: - self.loss_weights = [self.loss_weights[0]] * len(extra_losses) - assert len(extra_losses) == len( - self.loss_weights - ), f"{len(extra_losses)}, {len(self.loss_weights)}" - for p, coef in zip(extra_losses, self.loss_weights): - if coef != 0 and p is not None: - p = coef * p.float() * sample_size - loss += p - losses.append(p) - - logging_output = { - "loss": loss.item() if (reduce and not self.xla) else loss.detach(), - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - } - - for lk in self.log_keys: - # Only store "logits" and "target" for computing MAP and MAUC - # during validation - if lk == "logits": - if not self.training: - logging_output["logits"] = logits.cpu().numpy() - elif lk == "target": - if not self.training: - # If the targets have been mixed with the predictions of - # teacher models, find the original targets - if hasattr(model, "get_original_targets"): - original_target = model.get_original_targets(sample, net_output) - else: - original_target = target - logging_output["target"] = original_target.cpu().numpy() - elif lk in net_output: - value = net_output[lk] - if not is_xla_tensor(value): - value = float(value) - logging_output[lk] = value - - if len(losses) > 1: - for i, l in enumerate(losses): - logging_output[f"loss_{i}"] = l.item() if not self.xla else l.detach() - - if self.infonce: - with torch.no_grad(): - if logits.numel() == 0: - corr = 0 - count = 0 - else: - assert logits.dim() > 1, logits.shape - max = logits.argmax(-1) == 0 - min = logits.argmin(-1) == 0 - if is_xla_tensor(logits): - max, min = max * mi, min * mi - both = max & min - corr = max.long().sum() - both.long().sum() - count = mi.sum() - else: - both = max & min - corr = max.long().sum().item() - both.long().sum().item() - count = float(max.numel()) - - logging_output["correct"] = corr - logging_output["count"] = count - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / (sample_size or 1) / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - - correct = sum(log.get("correct", 0) for log in logging_outputs) - metrics.log_scalar("_correct", correct) - - total = sum(log.get("count", 0) for log in logging_outputs) - metrics.log_scalar("_total", 
total) - - if total > 0: - metrics.log_derived( - "accuracy", - lambda meters: safe_round( - meters["_correct"].sum / meters["_total"].sum, 5 - ) - if meters["_total"].sum > 0 - else float("nan"), - ) - - builtin_keys = { - "loss", - "ntokens", - "nsentences", - "sample_size", - "correct", - "count", - } - - for k in logging_outputs[0]: - if k not in builtin_keys: - val = sum(log.get(k, 0) for log in logging_outputs) - if k.startswith("loss"): - metrics.log_scalar( - k, val / (sample_size or 1) / math.log(2), sample_size, round=3 - ) - else: - metrics.log_scalar(k, val / len(logging_outputs), round=3) - - # FIXME: revert when gather based xla reduction is implemented - #@staticmethod - #def logging_outputs_can_be_summed() -> bool: - def logging_outputs_can_be_summed(self) -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - # XXX: Gather based reduction not implemented for xla yet. - # So we fall to sum based reduction for xla. - return self.xla diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/text_to_speech/tacotron2.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/text_to_speech/tacotron2.py deleted file mode 100644 index bb327e81e74900349e1357261bf2f14bc037ccd6..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/text_to_speech/tacotron2.py +++ /dev/null @@ -1,350 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -from torch import nn -from torch.nn import functional as F - -from fairseq.models import (FairseqEncoder, FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, register_model, - register_model_architecture) -from fairseq.modules import LSTMCellWithZoneOut, LocationAttention - - -logger = logging.getLogger(__name__) - - -def encoder_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -class Tacotron2Encoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.padding_idx = src_dict.pad() - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, - args.encoder_embed_dim - ) - - self.embed_tokens = nn.Embedding(len(src_dict), args.encoder_embed_dim, - padding_idx=self.padding_idx) - - assert(args.encoder_conv_kernel_size % 2 == 1) - self.convolutions = nn.ModuleList( - nn.Sequential( - nn.Conv1d(args.encoder_embed_dim, args.encoder_embed_dim, - kernel_size=args.encoder_conv_kernel_size, - padding=((args.encoder_conv_kernel_size - 1) // 2)), - nn.BatchNorm1d(args.encoder_embed_dim), - nn.ReLU(), - nn.Dropout(args.encoder_dropout) - ) - for _ in range(args.encoder_conv_layers) - ) - - self.lstm = nn.LSTM(args.encoder_embed_dim, args.encoder_embed_dim // 2, - num_layers=args.encoder_lstm_layers, - batch_first=True, bidirectional=True) - - self.apply(encoder_init) - - def forward(self, src_tokens, src_lengths=None, speaker=None, **kwargs): - x = self.embed_tokens(src_tokens) - x = x.transpose(1, 2).contiguous() # B x T x C -> B x C x T - for conv in self.convolutions: - x = conv(x) - x = 
x.transpose(1, 2).contiguous() # B x C x T -> B x T x C - - src_lengths = src_lengths.cpu().long() - x = nn.utils.rnn.pack_padded_sequence(x, src_lengths, batch_first=True) - x = self.lstm(x)[0] - x = nn.utils.rnn.pad_packed_sequence(x, batch_first=True)[0] - - encoder_padding_mask = src_tokens.eq(self.padding_idx) - - if self.embed_speaker is not None: - seq_len, bsz, _ = x.size() - emb = self.embed_speaker(speaker).expand(seq_len, bsz, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - return { - "encoder_out": [x], # B x T x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - -class Prenet(nn.Module): - def __init__(self, in_dim, n_layers, n_units, dropout): - super().__init__() - self.layers = nn.ModuleList( - nn.Sequential(nn.Linear(in_dim if i == 0 else n_units, n_units), - nn.ReLU()) - for i in range(n_layers) - ) - self.dropout = dropout - - def forward(self, x): - for layer in self.layers: - x = F.dropout(layer(x), p=self.dropout) # always applies dropout - return x - - -class Postnet(nn.Module): - def __init__(self, in_dim, n_channels, kernel_size, n_layers, dropout): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - assert(kernel_size % 2 == 1) - for i in range(n_layers): - cur_layers = [ - nn.Conv1d(in_dim if i == 0 else n_channels, - n_channels if i < n_layers - 1 else in_dim, - kernel_size=kernel_size, - padding=((kernel_size - 1) // 2)), - nn.BatchNorm1d(n_channels if i < n_layers - 1 else in_dim) - ] + ([nn.Tanh()] if i < n_layers - 1 else []) + [nn.Dropout(dropout)] - nn.init.xavier_uniform_( - cur_layers[0].weight, - torch.nn.init.calculate_gain( - "tanh" if i < n_layers - 1 else "linear" - ) - ) - self.convolutions.append(nn.Sequential(*cur_layers)) - - def forward(self, x): - x = x.transpose(1, 2) # B x T x C -> B x C x T - for conv in self.convolutions: - x = conv(x) - return x.transpose(1, 2) - - -def decoder_init(m): - if isinstance(m, torch.nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("tanh")) - - -class Tacotron2Decoder(FairseqIncrementalDecoder): - def __init__(self, args, src_dict): - super().__init__(None) - self.args = args - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.prenet = Prenet(self.out_dim, args.prenet_layers, args.prenet_dim, - args.prenet_dropout) - - # take prev_context, prev_frame, (speaker embedding) as input - self.attention_lstm = LSTMCellWithZoneOut( - args.zoneout, - args.prenet_dim + args.encoder_embed_dim, - args.decoder_lstm_dim - ) - - # take attention_lstm output, attention_state, encoder_out as input - self.attention = LocationAttention( - args.attention_dim, args.encoder_embed_dim, args.decoder_lstm_dim, - (1 + int(args.attention_use_cumprob)), - args.attention_conv_dim, args.attention_conv_kernel_size - ) - - # take attention_lstm output, context, (gated_latent) as input - self.lstm = nn.ModuleList( - LSTMCellWithZoneOut( - args.zoneout, - args.encoder_embed_dim + args.decoder_lstm_dim, - args.decoder_lstm_dim - ) - for i in range(args.decoder_lstm_layers) - ) - - proj_in_dim = args.encoder_embed_dim + args.decoder_lstm_dim - self.feat_proj = nn.Linear(proj_in_dim, self.out_dim) - self.eos_proj = nn.Linear(proj_in_dim, 1) - - self.postnet = Postnet(self.out_dim, args.postnet_conv_dim, - args.postnet_conv_kernel_size, - args.postnet_layers, args.postnet_dropout) - - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.) 
> 0.: - self.ctc_proj = nn.Linear(self.out_dim, len(src_dict)) - - self.apply(decoder_init) - - def _get_states(self, incremental_state, enc_out): - bsz, in_len, _ = enc_out.size() - alstm_h = self.get_incremental_state(incremental_state, "alstm_h") - if alstm_h is None: - alstm_h = enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - alstm_c = self.get_incremental_state(incremental_state, "alstm_c") - if alstm_c is None: - alstm_c = enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - - lstm_h = self.get_incremental_state(incremental_state, "lstm_h") - if lstm_h is None: - lstm_h = [enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - for _ in range(self.args.decoder_lstm_layers)] - lstm_c = self.get_incremental_state(incremental_state, "lstm_c") - if lstm_c is None: - lstm_c = [enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - for _ in range(self.args.decoder_lstm_layers)] - - attn_w = self.get_incremental_state(incremental_state, "attn_w") - if attn_w is None: - attn_w = enc_out.new_zeros(bsz, in_len) - attn_w_cum = self.get_incremental_state(incremental_state, "attn_w_cum") - if attn_w_cum is None: - attn_w_cum = enc_out.new_zeros(bsz, in_len) - return alstm_h, alstm_c, lstm_h, lstm_c, attn_w, attn_w_cum - - def _get_init_attn_c(self, enc_out, enc_mask): - bsz = enc_out.size(0) - if self.args.init_attn_c == "zero": - return enc_out.new_zeros(bsz, self.args.encoder_embed_dim) - elif self.args.init_attn_c == "avg": - enc_w = (~enc_mask).type(enc_out.type()) - enc_w = enc_w / enc_w.sum(dim=1, keepdim=True) - return torch.sum(enc_out * enc_w.unsqueeze(2), dim=1) - else: - raise ValueError(f"{self.args.init_attn_c} not supported") - - def forward(self, prev_output_tokens, encoder_out=None, - incremental_state=None, target_lengths=None, **kwargs): - enc_mask = encoder_out["encoder_padding_mask"] - enc_out = encoder_out["encoder_out"][0] - in_len = enc_out.size(1) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:, :] - bsz, out_len, _ = prev_output_tokens.size() - - prenet_out = self.prenet(prev_output_tokens) - (alstm_h, alstm_c, lstm_h, lstm_c, - attn_w, attn_w_cum) = self._get_states(incremental_state, enc_out) - attn_ctx = self._get_init_attn_c(enc_out, enc_mask) - - attn_out = enc_out.new_zeros(bsz, in_len, out_len) - feat_out = enc_out.new_zeros(bsz, out_len, self.out_dim) - eos_out = enc_out.new_zeros(bsz, out_len) - for t in range(out_len): - alstm_in = torch.cat((attn_ctx, prenet_out[:, t, :]), dim=1) - alstm_h, alstm_c = self.attention_lstm(alstm_in, (alstm_h, alstm_c)) - - attn_state = attn_w.unsqueeze(1) - if self.args.attention_use_cumprob: - attn_state = torch.stack((attn_w, attn_w_cum), dim=1) - attn_ctx, attn_w = self.attention( - enc_out, enc_mask, alstm_h, attn_state - ) - attn_w_cum = attn_w_cum + attn_w - attn_out[:, :, t] = attn_w - - for i, cur_lstm in enumerate(self.lstm): - if i == 0: - lstm_in = torch.cat((attn_ctx, alstm_h), dim=1) - else: - lstm_in = torch.cat((attn_ctx, lstm_h[i - 1]), dim=1) - lstm_h[i], lstm_c[i] = cur_lstm(lstm_in, (lstm_h[i], lstm_c[i])) - - proj_in = torch.cat((attn_ctx, lstm_h[-1]), dim=1) - feat_out[:, t, :] = self.feat_proj(proj_in) - eos_out[:, t] = self.eos_proj(proj_in).squeeze(1) - self.attention.clear_cache() - - self.set_incremental_state(incremental_state, "alstm_h", alstm_h) - self.set_incremental_state(incremental_state, "alstm_c", alstm_c) - self.set_incremental_state(incremental_state, "lstm_h", lstm_h) - self.set_incremental_state(incremental_state, "lstm_c", lstm_c) - 
self.set_incremental_state(incremental_state, "attn_w", attn_w) - self.set_incremental_state(incremental_state, "attn_w_cum", attn_w_cum) - - post_feat_out = feat_out + self.postnet(feat_out) - eos_out = eos_out.view(bsz, out_len, 1) - return post_feat_out, eos_out, {"attn": attn_out, "feature_out": feat_out} - - -@register_model("tacotron_2") -class Tacotron2Model(FairseqEncoderDecoderModel): - """ - Implementation for https://arxiv.org/pdf/1712.05884.pdf - """ - - @staticmethod - def add_args(parser): - # encoder - parser.add_argument("--encoder-dropout", type=float) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-conv-layers", type=int) - parser.add_argument("--encoder-conv-kernel-size", type=int) - parser.add_argument("--encoder-lstm-layers", type=int) - # decoder - parser.add_argument("--attention-dim", type=int) - parser.add_argument("--attention-conv-dim", type=int) - parser.add_argument("--attention-conv-kernel-size", type=int) - parser.add_argument("--prenet-dropout", type=float) - parser.add_argument("--prenet-layers", type=int) - parser.add_argument("--prenet-dim", type=int) - parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - parser.add_argument("--init-attn-c", type=str) - parser.add_argument("--attention-use-cumprob", action='store_true') - parser.add_argument("--zoneout", type=float) - parser.add_argument("--decoder-lstm-layers", type=int) - parser.add_argument("--decoder-lstm-dim", type=int) - parser.add_argument("--output-frame-dim", type=int) - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._num_updates = 0 - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = Tacotron2Encoder(args, task.src_dict, embed_speaker) - decoder = Tacotron2Decoder(args, task.src_dict) - return cls(encoder, decoder) - - def forward_encoder(self, src_tokens, src_lengths, **kwargs): - return self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - -@register_model_architecture("tacotron_2", "tacotron_2") -def base_architecture(args): - # encoder - args.encoder_dropout = getattr(args, "encoder_dropout", 0.5) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_conv_layers = getattr(args, "encoder_conv_layers", 3) - args.encoder_conv_kernel_size = getattr(args, "encoder_conv_kernel_size", 5) - args.encoder_lstm_layers = getattr(args, "encoder_lstm_layers", 1) - # decoder - args.attention_dim = getattr(args, "attention_dim", 128) - args.attention_conv_dim = getattr(args, "attention_conv_dim", 32) - args.attention_conv_kernel_size = getattr(args, - "attention_conv_kernel_size", 15) - args.prenet_dropout = getattr(args, "prenet_dropout", 0.5) - args.prenet_layers = getattr(args, "prenet_layers", 2) - args.prenet_dim = getattr(args, "prenet_dim", 256) - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) - args.init_attn_c = getattr(args, "init_attn_c", "zero") - args.attention_use_cumprob = getattr(args, "attention_use_cumprob", True) - 
args.zoneout = getattr(args, "zoneout", 0.1) - args.decoder_lstm_layers = getattr(args, "decoder_lstm_layers", 2) - args.decoder_lstm_dim = getattr(args, "decoder_lstm_dim", 1024) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) diff --git a/spaces/starlit7/USPoliticsTTS/text/__init__.py b/spaces/starlit7/USPoliticsTTS/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/starlit7/USPoliticsTTS/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/stomexserde/gpt4-ui/Examples/Antenna Magus Professional 4.1 Crack WORK.md b/spaces/stomexserde/gpt4-ui/Examples/Antenna Magus Professional 4.1 Crack WORK.md deleted file mode 100644 index d97813632519fac5825d5088b61fca718957bbac..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Antenna Magus Professional 4.1 Crack WORK.md +++ /dev/null @@ -1,35 +0,0 @@ -
              -

              How to Design and Model Antennas with Antenna Magus Professional 4.1

              -

              Antennas are essential components of any wireless communication system. They transmit and receive electromagnetic waves that carry information from one point to another. However, designing and modeling antennas can be a challenging and time-consuming task, especially for complex and novel antenna types.

              -

              Antenna magus professional 4.1 crack


              Download Zip ->->->-> https://urlgoal.com/2uI7p9



              -

              Fortunately, there is a software tool that can help you accelerate the antenna design and modeling process: Antenna Magus Professional 4.1. This tool is developed by SIMULIA, a Dassault Systèmes® brand and a leading provider of simulation solutions for engineering and science.

              -

              In this article, we will show you how Antenna Magus Professional 4.1 can help you design and model antennas with ease and efficiency.

              -

              What is Antenna Magus Professional 4.1?

              -

              Antenna Magus Professional 4.1 is a software tool that allows you to explore, design, and model antennas from a huge database of over 350 antenna types. You can choose an antenna that meets your specifications, such as frequency range, gain, bandwidth, polarization, and size. You can also customize the antenna parameters, such as geometry, materials, and feed location.

              -

              Once you have designed your antenna, you can export it to CST Studio Suite®, a powerful electromagnetic simulation software that can analyze the performance of your antenna in various scenarios. You can also export your antenna to other formats, such as MATLAB®, FEKO®, ANSYS HFSS™, and more.

              -

              -

              Antenna Magus Professional 4.1 also includes many additional tools and utilities that can help you with antenna and array design or evaluation. For example, you can use the Array Synthesis tool to create and optimize array layouts with different element types, distributions, and excitations. You can also use the Specification-based Design tool to find the best antenna for your application based on your system requirements.
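              As a rough illustration of what an array-synthesis tool computes under the hood, the sketch below evaluates the textbook array factor of a uniform linear array in Python. This is generic antenna math, not Antenna Magus code; the element count, spacing, and excitation weights are arbitrary values chosen only for the example.

```python
import numpy as np

def array_factor_db(weights, d_over_lambda, theta_deg):
    """Normalized array factor (in dB) of a uniform linear array.

    weights       : complex excitation (amplitude and phase) of each element
    d_over_lambda : element spacing in wavelengths
    theta_deg     : observation angles, measured from the array axis
    """
    weights = np.asarray(weights, dtype=complex)
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    n = np.arange(len(weights))                              # element indices 0..N-1
    # Each element n contributes a phase of n * (2*pi*d/lambda) * cos(theta)
    phase = 2 * np.pi * d_over_lambda * np.outer(np.cos(theta), n)
    af = np.exp(1j * phase) @ weights                        # coherent sum over elements
    mag = np.abs(af)
    return 20 * np.log10(mag / mag.max())                    # normalize peak to 0 dB

# Example: 8 elements, half-wavelength spacing, uniform excitation (broadside beam)
theta = np.linspace(0.0, 180.0, 721)
pattern = array_factor_db(np.ones(8), 0.5, theta)
print(f"main beam at {theta[np.argmax(pattern)]:.1f} degrees")  # expect ~90 (broadside)
```

              Changing the complex weights (amplitude taper and per-element phase shifts) reshapes this pattern, which is essentially the knob an array-synthesis tool turns for you automatically.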

              -

              How to Use Antenna Magus Professional 4.1?

              -

              To use Antenna Magus Professional 4.1, you need to follow these steps:

              -
                -
              1. Download and install Antenna Magus Professional 4.1 from here. You will need a license to activate the software.
              2. Launch Antenna Magus Professional 4.1 and select your preferred language and units.
              3. Browse the antenna database by using the Find mode or the Explore mode. You can filter the antennas by category, keyword, or specification.
              4. Select an antenna that matches your criteria and click on Design mode. You can adjust the antenna parameters by using the sliders or entering values manually.
              5. Click on Estimate performance to see the estimated characteristics of your antenna, such as radiation pattern, impedance, gain, bandwidth, etc.
              6. Click on Export model to export your antenna to CST Studio Suite® or other formats. You can also save your antenna as a custom template for future use.
              7. Open your antenna model in CST Studio Suite® or other simulation software and perform further analysis and optimization.
              -

              Why Choose Antenna Magus Professional 4.1?

              -

              Antenna Magus Professional 4.1 is a unique and powerful tool that can help you design and model antennas faster and better. Here are some of the benefits of using Antenna Magus Professional 4.1:

              -
                -
              • You can access a large and diverse collection of validated antenna models that cover a wide range of applications and frequencies.
              • You can design antennas with confidence by using accurate and reliable estimation algorithms that are based on rigorous electromagnetic theory.
              • You can save time and effort by using an intuitive and user-friendly interface that guides you through the antenna design process.
              • You can integrate seamlessly with CST Studio Suite® or other simulation software that can provide detailed and realistic analysis of your antenna performance.
              • You can learn more about antennas by using the extensive knowledge base that provides detailed information about each antenna type, such as history, theory, advantages, disadvantages, applications, references, etc.
              -

              Conclusion

              -

              Antenna Magus Professional 4.1 is a software tool that can help you design and model antennas with ease and efficiency. It combines a large database of validated antenna models, reliable performance estimation, and seamless export to CST Studio Suite® and other simulation tools, so you can spend less time setting up designs and more time analyzing and optimizing them.

              cec2833e83
              -
              -
              \ No newline at end of file diff --git a/spaces/subhendupsingh/dis-background-removal/README.md b/spaces/subhendupsingh/dis-background-removal/README.md deleted file mode 100644 index fa3aad2c41909e69815d42b5762a74833b26965f..0000000000000000000000000000000000000000 --- a/spaces/subhendupsingh/dis-background-removal/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: DIS Background Removal -emoji: 🔥 🌠 🏰 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: ECCV2022/dis-background-removal ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Krim Dhe Ndeshkim Pdf 20.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Krim Dhe Ndeshkim Pdf 20.md deleted file mode 100644 index 954dde348b14512841a0c5f8c34d0bb361f8d799..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Krim Dhe Ndeshkim Pdf 20.md +++ /dev/null @@ -1,6 +0,0 @@ -

              krim dhe ndeshkim pdf 20


              Download File 🗸🗸🗸 https://cinurl.com/2uEXuL



              - -Studio 21 A1 Cornelsen Pdf Download 5https://tlniurl.com/1nmd3j. ... .simplesite.com/433963146/5898942/posting/krim-dhe-ndeshkim-pdf-20 ... 1fdad05405
              -
              -
              -

              diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Type3.type Edit 2008 Dongle ((LINK)) Crack 367.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Type3.type Edit 2008 Dongle ((LINK)) Crack 367.md deleted file mode 100644 index d8b25c9ac8eca393108af3a12997a77955bb9179..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Type3.type Edit 2008 Dongle ((LINK)) Crack 367.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Type3.type Edit 2008 Dongle Crack 367


              Download Zip >>> https://cinurl.com/2uEY9V



              -
              -... Feinleib matip-type-a 350/tcp MATIP Type A matip-type-a 350/udp MATIP Type ... mortgageware 367/tcp MortgageWare mortgageware 367/udp MortgageWare ... Charles Bennett 29 August 2008 genie 402/tcp Genie Protocol genie 402/udp ... 1901/tcp Fujitsu ICL Terminal Emulator Program A fjicl-tep-a 1901/udp Fujitsu ... 1fdad05405
              -
              -
              -

              diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/U-he Zebra 2.7.2 (TOP Full Crack).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/U-he Zebra 2.7.2 (TOP Full Crack).md deleted file mode 100644 index 7fafe8d34cc204ba3d8f143b4401842a2cb59a9b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/U-he Zebra 2.7.2 (TOP Full Crack).md +++ /dev/null @@ -1,6 +0,0 @@ -

              u-he Zebra 2.7.2 (Full Crack)


              Download ★★★★★ https://cinurl.com/2uEZ9Y



              -
              -Zebra 2 Vst Osx Mac Crack. Post Reply. Add Poll. Gatolethe Admin replied. 2 years ago. Zebra 2 Vst Osx Mac Crack Show Spoiler. zebra zebra crossing 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/checkpoint.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/checkpoint.py deleted file mode 100644 index b29ca320679164432f446adad893e33fb2b4b29e..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/checkpoint.py +++ /dev/null @@ -1,707 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo - -import annotator.uniformer.mmcv as mmcv -from ..fileio import FileClient -from ..fileio import load as load_file -from ..parallel import is_module_wrapper -from ..utils import mkdir_or_exist -from .dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. 
- """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to 
override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, prefixes, loader=None, force=False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or list[str] or tuple[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - loader (function): checkpoint loader - """ - - for p in cls._schemes: - if path.startswith(p): - return cls._schemes[p] - - @classmethod - def load_checkpoint(cls, filename, map_location=None, logger=None): - """load checkpoint through URL scheme path. - - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - -@CheckpointLoader.register_scheme(prefixes='') -def load_from_local(filename, map_location): - """load checkpoint by local file path. - - Args: - filename (str): local checkpoint file path - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('http://', 'https://')) -def load_from_http(filename, map_location=None, model_dir=None): - """load checkpoint through HTTP or HTTPS scheme path. In distributed - setting, this function only download checkpoint at local rank 0. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - model_dir (string, optional): directory in which to save the object, - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='pavi://') -def load_from_pavi(filename, map_location=None): - """load checkpoint through the file path prefixed with pavi. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with pavi prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - assert filename.startswith('pavi://'), \ - f'Expected filename startswith `pavi://`, but get {filename}' - model_path = filename[7:] - - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='s3://') -def load_from_ceph(filename, map_location=None, backend='petrel'): - """load checkpoint through the file path prefixed with s3. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with s3 prefix - map_location (str, optional): Same as :func:`torch.load`. - backend (str, optional): The storage backend type. Options are 'ceph', - 'petrel'. Default: 'petrel'. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - allowed_backends = ['ceph', 'petrel'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - - if backend == 'ceph': - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - - # CephClient and PetrelBackend have the same prefix 's3://' and the latter - # will be chosen as default. If PetrelBackend can not be instantiated - # successfully, the CephClient will be chosen. - try: - file_client = FileClient(backend=backend) - except ImportError: - allowed_backends.remove(backend) - file_client = FileClient(backend=allowed_backends[0]) - - with io.BytesIO(file_client.get(filename)) as buffer: - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://')) -def load_from_torchvision(filename, map_location=None): - """load checkpoint through the file path prefixed with modelzoo or - torchvision. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - model_urls = get_torchvision_models() - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_name = filename[11:] - else: - model_name = filename[14:] - return load_from_http(model_urls[model_name], map_location=map_location) - - -@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://')) -def load_from_openmmlab(filename, map_location=None): - """load checkpoint through the file path prefixed with open-mmlab or - openmmlab. - - Args: - filename (str): checkpoint file path with open-mmlab or - openmmlab prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_external_models() - prefix_str = 'open-mmlab://' - if filename.startswith(prefix_str): - model_name = filename[13:] - else: - model_name = filename[12:] - prefix_str = 'openmmlab://' - - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'{prefix_str}{model_name} is deprecated in favor ' - f'of {prefix_str}{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_from_http(model_url, map_location=map_location) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='mmcls://') -def load_from_mmcls(filename, map_location=None): - """load checkpoint through the file path prefixed with mmcls. - - Args: - filename (str): checkpoint file path with mmcls prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_from_http( - model_urls[model_name], map_location=map_location) - checkpoint = _process_mmcls_checkpoint(checkpoint) - return checkpoint - - -def _load_checkpoint(filename, map_location=None, logger=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def _load_checkpoint_with_prefix(prefix, filename, map_location=None): - """Load partial pretrained model with specific prefix. - - Args: - prefix (str): The prefix of sub-module. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - - checkpoint = _load_checkpoint(filename, map_location=map_location) - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if not prefix.endswith('.'): - prefix += '.' - prefix_len = len(prefix) - - state_dict = { - k[prefix_len:]: v - for k, v in state_dict.items() if k.startswith(prefix) - } - - assert state_dict, f'{prefix} is not in the pretrained model' - return state_dict - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - # Keep metadata in state_dict - state_dict_cpu._metadata = getattr(state_dict, '_metadata', OrderedDict()) - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, - filename, - optimizer=None, - meta=None, - file_client_args=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- `New in version 1.3.16.` - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - if file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" if filename starts with' - f'"pavi://", but got {file_client_args}') - try: - from pavi import modelcloud - from pavi import exception - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except exception.NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - file_client = FileClient.infer_client(file_client_args, filename) - with io.BytesIO() as f: - torch.save(checkpoint, f) - file_client.put(f.getvalue(), filename) diff --git a/spaces/szukevin/VISOR-GPT/utils/seq2coord.py b/spaces/szukevin/VISOR-GPT/utils/seq2coord.py deleted file mode 100644 index 636046cb2f17bc4076e87e5cc501970e8ad6ce8d..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/utils/seq2coord.py +++ /dev/null @@ -1,329 +0,0 @@ - -""" -decode sequential output to visual locations -author: sierkinhane.github.io -""" -import random -from tqdm import tqdm -import json -import numpy as np -import re -import argparse -import cv2 -import math -import os - -# COCO keypoints -stickwidth = 4 - -limbSeq_coco = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - -limbSeq_cp = [[14, 2], [14, 1], [2, 4], [4, 6], [1, 3], [3, 5], [14, 8], [8, 10], [10, 12], [14, 7], [7, 9], [9, 11], [13, 14]] - -# CrowdPose -# {'0': 'left shoulder', '1': 'right shoulder', '2': 'left elbow', '3': 'right elbow', '4': 'left wrist', '5': 'right wrist', '6': 'left hip', '7': 'right hip', '8': 'left knee', '9': 'right knee', '10': 'left ankle', '11': 'right ankle', '12': 'head', '13': 'neck'} - -# for human pose visualization -colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \ - [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \ - [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]] - -# for box visualization -colors_box = [[217, 221, 116], [137, 165, 171], [230, 126, 175], [63, 157, 5], [107, 51, 75], [217, 147, 152], [129, 132, 8], [232, 85, 249], [254, 98, 33], [89, 108, 230], [253, 34, 161], [91, 150, 30], [255, 147, 26], [209, 
154, 205], [134, 57, 11], [143, 181, 122], [241, 176, 87], [104, 73, 26], [122, 147, 59], [235, 230, 229], [119, 18, 125], [185, 61, 138], [237, 115, 90], [13, 209, 111], [219, 172, 212]] - -# Plots one bounding box on image -def plot_one_box(x, img, color=None, label=None, line_thickness=None, idx=0): - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line thickness - color = color or [random.randint(0, 255) for _ in range(3)] - color = colors_box[idx] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1) # filled - cv2.putText(img, label, c1, 0, tl / 3, [0, 0, 0], thickness=tf, lineType=cv2.LINE_AA) - return img - - -# decode one sequence to visual locations -def decode(coordinate_str, type='box'): - - # find numbers - locations = np.array([int(i) for i in re.findall(r"\d+", coordinate_str)]) - - if type == 'box': - locations = locations.reshape(-1, 4) - elif type == 'cocokeypoint': - locations = locations.reshape(-1, 18, 2) - visible = np.ones((locations.shape[0], 18, 1)) - eq_0_idx = np.where(locations[:, :, 0] * locations[:, :, 1] == 0) - visible[eq_0_idx] = 0 - locations = np.concatenate([locations, visible], axis=-1) - for i in range(locations.shape[0]): - if locations[i, 2, -1] == 0 or locations[i, 5, -1] == 0: - locations[i, 1, -1] = 0 - elif type == 'crowdpose': - locations = locations.reshape(-1, 14, 2) - visible = np.ones((locations.shape[0], 14, 1)) - eq_0_idx = np.where(locations[:, :, 0] * locations[:, :, 1] == 0) - visible[eq_0_idx] = 0 - locations = np.concatenate([locations, visible], axis=-1) - elif type == 'mask': - locations = [] - for c_str in coordinate_str.split('m0'): - c_str = ''.join(re.split(r'm\d+', c_str)) - mask_coord = np.array([int(i) for i in re.findall(r"\d+ ", c_str)]) - if len(mask_coord) != 0: - locations.append(mask_coord.reshape(-1, 1, 2)) - else: - raise NotImplementedError - - return locations - - -# process raw sequences inferred by VisorGPT -def to_coordinate(file_path, ctn=True): - - if isinstance(file_path, list): - texts = [i.strip().replace(' ##', '') for i in file_path] - else: - with open(file_path, 'r') as file: - texts = [i.strip().replace(' ##', '') for i in file.readlines()] - - location_list = [] - classname_list = [] - type_list = [] - valid_sequences = [] - cnt = 0 - print('to coordinate ...') - - for ste in tqdm(texts): - cnt += 1 - if 'box' in ste: - type = 'box' - elif 'key point' in ste: - type = 'cocokeypoint' if '; 18 ;' in ste else 'crowdpose' - elif 'mask' in ste: - type = 'mask' - else: - raise NotImplementedError - - if '[SEP]' not in ste: - continue - - try: - if ctn: - temp = ste[:ste.index('[SEP]')].split(' ; ')[5].split('] ') - classnames = [] - for t in temp: - classnames.append(t.split(' xmin ')[0].split(' m0')[0][2:]) - classnames = classnames[:-1] - locations = decode(ste[:ste.index('[SEP]')].split(' ; ')[5], type=type) - - else: - classnames = ste[:ste.index('[SEP]')].split(' ; ')[5].split(' , ') - locations = decode(ste[:ste.index('[SEP]')].split(' ; ')[6], type=type) - except: - pass - else: - valid_sequences.append(ste[:ste.index('[SEP]')]) - location_list.append(locations) - classname_list.append(classnames) - type_list.append(type) - - with open('valid_sequences.txt', 'w') as file: - [file.write(i.split('[CLS] ')[-1] + 
'\n') for i in valid_sequences] - - return location_list, classname_list, type_list, valid_sequences - -# visualize object locations on a canvas -def visualization(location_list, classname_list, type_list, save_dir='debug/', save_fig=False): - - if save_fig: - if not os.path.exists(save_dir): - os.makedirs(save_dir) - - print('visualizing ...') - for b, (loc, classnames, type) in tqdm(enumerate(zip(location_list, classname_list, type_list))): - canvas = np.zeros((512, 512, 3), dtype=np.uint8) + 50 - - if len(loc) != len(classnames): - continue - - if type == 'box': - for i in range(loc.shape[0]): - canvas = plot_one_box(loc[i], canvas, label=classnames[i], idx=i) - - elif type == 'cocokeypoint': - for i in range(loc.shape[0]): - for j in range(loc.shape[1]): - x, y, v = loc[i, j] - if v != 0: - cv2.circle(canvas, (int(x), int(y)), 4, colors[j], thickness=-1) - for j in range(17): - lim = limbSeq_coco[j] - cur_canvas = canvas.copy() - - Y = [loc[i][lim[0] - 1][0], loc[i][lim[1] - 1][0]] - X = [loc[i][lim[0] - 1][1], loc[i][lim[1] - 1][1]] - - if loc[i][lim[0] - 1][-1] == 0 or loc[i][lim[1] - 1][-1] == 0: - continue - - mX = np.mean(X) - mY = np.mean(Y) - length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5 - angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1])) - polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1) - cv2.fillConvexPoly(cur_canvas, polygon, colors[j]) - canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0) - - elif type == 'crowdpose': - for i in range(loc.shape[0]): - for j in range(loc.shape[1]): - x, y, _ = loc[i, j] - if x != 0 and y != 0: - cv2.circle(canvas, (int(x), int(y)), 4, colors[j], thickness=-1) - for j in range(13): - lim = limbSeq_cp[j] - cur_canvas = canvas.copy() - - Y = [loc[i][lim[0] - 1][0], loc[i][lim[1] - 1][0]] - X = [loc[i][lim[0] - 1][1], loc[i][lim[1] - 1][1]] - - if (Y[0] == 0 and X[0] == 0) or (Y[1] == 0 and X[1] == 0): - continue - - mX = np.mean(X) - mY = np.mean(Y) - length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5 - angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1])) - polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1) - cv2.fillConvexPoly(cur_canvas, polygon, colors[j]) - canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0) - - elif type == 'mask': - for i in range(len(loc)): - color = [random.randint(0, 255) for _ in range(3)] - xmin, ymin, xmax, ymax = loc[i][:, :, 0].min(), loc[i][:, :, 1].min(), loc[i][:, :, 0].max(), loc[i][:, :, 1].max() - cur_canvas = canvas.copy() - cv2.fillPoly(cur_canvas, [loc[i]], color) - cur_canvas = plot_one_box((xmin, ymin, xmax, ymax), cur_canvas, color=color, label=classnames[i]) - canvas = cv2.addWeighted(canvas, 0.5, cur_canvas, 0.5, 0) - else: - raise NotImplementedError - if save_fig: - cv2.imwrite(f'{save_dir}/test_{b}.png', canvas[..., ::-1]) - - return canvas[..., ::-1] - -# to json output -def to_json(location_list, classname_list, type_list, valid_sequences): - - ret_json_box = {'bboxes': [], 'sequences': []} - ret_json_mask = {'masks': [], 'sequences': []} - ret_json_keypoint = {'keypoints': [], 'sequences': []} - print('to json ...') - for loc, classnames, type, seq in tqdm(zip(location_list, classname_list, type_list, valid_sequences)): - ins_list = [] - kpt_list = [] - mask_list = [] - seq_list = [] - if len(loc) != len(classnames):# or len(classnames) > 8: - continue - - if type == 'box': - for i in range(loc.shape[0]): - # xmin, ymin, xmax, ymax = loc[i] - # area 
= (xmax - xmin) * (ymax - ymin) - # compute area and omit very small one due to the synthesis ability of AIGC - # if area < 32**2: - # continue - - dic = {classnames[i]: loc[i].tolist()} - ins_list.append(dic) - if len(seq_list) == 0: - seq_list.append(seq) - - elif type == 'cocokeypoint' or type == 'crowdpose': - for i in range(loc.shape[0]): - # compute validate key points and omit the less one, as the synthesis ability of AIGC - # if loc[i, :, -1].sum() <= 4: - # continue - - # compute area and omit very small one due to the synthesis ability of AIGC - # xmin, ymin, xmax, ymax = loc[i, :, 0].min(), loc[i, :, 1].min(), loc[i, :, 0].max(), loc[i, :, 1].max() - # area = (xmax - xmin) * (ymax - ymin) - # if area < 32 ** 2: - # continue - - dic = {classnames[i]: loc[i][:, :].tolist()} - kpt_list.append(dic) - if len(seq_list) == 0: - seq_list.append(seq) - - elif type == 'mask': - for i in range(len(loc)): - - # xmin, ymin, xmax, ymax = loc[i][:, :, 0].min(), loc[i][:, :, 1].min(), loc[i][:, :, 0].max(), loc[i][:, :, 1].max() - # area = (xmax - xmin) * (ymax - ymin) - # if area < 32 ** 2: - # continue - - dic = {classnames[i]: loc[i].tolist()} - mask_list.append(dic) - if len(seq_list) == 0: - seq_list.append(seq) - else: - raise NotImplementedError - - if len(ins_list) != 0: - ret_json_box['bboxes'].append(ins_list) - ret_json_box['sequences'].append(seq_list) - if len(kpt_list) != 0: - ret_json_keypoint['keypoints'].append(kpt_list) - ret_json_keypoint['sequences'].append(seq_list) - if len(mask_list) != 0: - ret_json_mask['masks'].append(mask_list) - ret_json_mask['sequences'].append(seq_list) - - return [ret_json_box, ret_json_mask, ret_json_keypoint] - - -def gen_cond_mask(texts, ctn): - location_list, classname_list, type_list, valid_sequences = to_coordinate(texts, ctn) - ret_mask = visualization(location_list, classname_list, type_list, None, False) - ret_json = to_json(location_list, classname_list, type_list, valid_sequences) - return ret_mask, ret_json - -if __name__ == '__main__': - - parser = argparse.ArgumentParser() - parser.add_argument('--file_path', type=str, required=True) - parser.add_argument('--save_dir', type=str, default='debug') - parser.add_argument('--visualize', type=bool, default=False) - args = parser.parse_args() - - location_list, classname_list, type_list, valid_sequences = to_coordinate(args.file_path) - - if not os.path.exists(args.save_dir): - os.makedirs(args.save_dir) - - # visualization - if args.visualize: - visualization(location_list, classname_list, type_list, args.save_dir) - - # to json data - rets = to_json(location_list, classname_list, type_list, valid_sequences) - - for ret, flag in zip(rets, ['box', 'mask', 'keypoint']): - save_path = args.file_path.split('/')[-1].split('.')[0] + f'_{flag}.json' - with open('files/' + save_path, 'w') as file: - json.dump(ret, file, indent=2) - - - diff --git a/spaces/templates/flask/templates/index.html b/spaces/templates/flask/templates/index.html deleted file mode 100644 index d7121e95522118e40ef0d7217228061d79250406..0000000000000000000000000000000000000000 --- a/spaces/templates/flask/templates/index.html +++ /dev/null @@ -1,1929 +0,0 @@ - - - - - - Flask 🤗 Space served with development server - - - - -
              -

              Flask 🤗 Space served with development server

              -
              -

              Image generation from Inference API

              -

              - Model: - osanseviero/BigGAN-deep-128 -

              - - - pelican generated from BigGAN AI model -
              -
              -

              Text generation from transformers library

              -

              - Model: - t5-small -

              -
              - - - -

              -
              -
              -
              -

              Dataset from datasets library

              -

              - Dataset: - emotion -

              -
              - - -
              -
              -
              -
              - - - diff --git a/spaces/tengxiu/img-to-music/style.css b/spaces/tengxiu/img-to-music/style.css deleted file mode 100644 index 0daf378bf2304091c1a391af80d333a53c6e3354..0000000000000000000000000000000000000000 --- a/spaces/tengxiu/img-to-music/style.css +++ /dev/null @@ -1,48 +0,0 @@ -#col-container {max-width: 580px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/AFT Impulse 4.0 (Portable).md b/spaces/terfces0erbo/CollegeProjectV2/AFT Impulse 4.0 (Portable).md deleted file mode 100644 index f7ca5b288af5c66da4f4de18fa79af0a85fb254d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/AFT Impulse 4.0 (Portable).md +++ /dev/null @@ -1,6 +0,0 @@ -

              AFT Impulse 4.0 (Portable)


              Download File ○○○ https://bytlly.com/2uGjLR



              -
              -... LAB-SEND OUTS, 439794, **AFT CUL+SMEAR POS PROFILE, 154.00 ... 2411, MEDICAL IMAGING, RADIOLOGY, 52508, PORTABLE FACIAL BONES ... SERVS, 729521807, CATHETER,IMPULSE FR 4.0/6F/100, 7.60. 1fdad05405
              -
              -
              -

              diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hyperspininstall10finalzip FREE.md b/spaces/terfces0erbo/CollegeProjectV2/Hyperspininstall10finalzip FREE.md deleted file mode 100644 index 818540e579baf681f97b76f4f2e633b3ea7ad161..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hyperspininstall10finalzip FREE.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Hyperspininstall10finalzip


              Download File ❤❤❤ https://bytlly.com/2uGlp6



              -
              -Hyperspininstall10finalzip · bajrangi bhaijaan movie download filmywap hd · Advance Point of Sale System (POS) rar · microsoft windows 7 ... 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/tficar/amazon-rating-calculator/README.md b/spaces/tficar/amazon-rating-calculator/README.md deleted file mode 100644 index efaee4e8fa6bbbe7f49744ed2888b7595884c3ce..0000000000000000000000000000000000000000 --- a/spaces/tficar/amazon-rating-calculator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Amazon Rating Calculator -emoji: 🐢 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Ao Oni School Nightmare English Download A Classic RPG Maker Horror Game.md b/spaces/tialenAdioni/chat-gpt-api/logs/Ao Oni School Nightmare English Download A Classic RPG Maker Horror Game.md deleted file mode 100644 index 40b2621200e7f81d29b6407bb592933d6a9623b8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Ao Oni School Nightmare English Download A Classic RPG Maker Horror Game.md +++ /dev/null @@ -1,125 +0,0 @@ - -

              Ao Oni School Nightmare: A Terrifying RPG Horror Game

              -

              If you are a fan of horror games, you might have heard of Ao Oni, a Japanese indie game that became popular online for its simple yet effective scares. But did you know that there is a Korean fan-made version of Ao Oni that takes place in a school? It's called Ao Oni School Nightmare, and it's one of the most terrifying RPG horror games you can play.

              -

              ao oni school nightmare download english


              Download File ✑ ✑ ✑ https://urlcod.com/2uK6QW



              -

              What is Ao Oni School Nightmare?

              -

          Ao Oni School Nightmare, also known as Ao Oni S, is a Korean fan-made game that features Hiroshi and Takeshi, two characters from the original Ao Oni, along with several new friends. They all visit their empty school at night to investigate rumors of a monster, and they quickly discover that the oni, a blue creature resembling a giant distorted human, haunts the building. It is possible to escape alone, or to save your friends by collecting eight orbs hidden around the school and unlocking the secret ending.
          

              -

              The plot of the game

              -

              The game begins with Hiroshi receiving a phone call from his friend Akira, who dares him to come to the school at night. Hiroshi accepts the challenge and arrives at the school with his other friends: Takeshi, Mika, Naomi, Rina, and Shinji. They find Akira waiting for them in the classroom. He tells them that he heard rumors of a monster in the school and wants to prove them wrong. He suggests that they split up and explore the school.

              -

              However, as soon as they separate, they realize that they are locked inside the building with seemingly no way out other than to solve puzzles. They also encounter the oni, who chases them relentlessly and kills them if he catches them. Hiroshi must find a way to escape from the school while avoiding the oni and saving his friends.

              -

              The gameplay of the game

              -

              The game is a 2D RPG horror game that uses pixel art graphics and a top-down perspective. The player controls Hiroshi as he explores the school, interacts with objects, collects items, solves puzzles, and hides from the oni. The game has multiple endings depending on the player's actions and choices.

              -

              The game has three difficulty modes: easy, medium, and hard. The difficulty affects the frequency and speed of the oni's appearance, as well as the complexity of the puzzles. The game also has a mobile version that can be played on Android devices.

              -

              The characters of the game

              -

              The game has seven main characters:

              -
                -
              • Hiroshi: The protagonist of the game. He is brave and curious, but also reckless and naive. He is determined to escape from the school and save his friends.
              • -
              • Takeshi: Hiroshi's best friend. He is timid and cowardly, but also loyal and kind. He often follows Hiroshi around and helps him with puzzles.
              • -
              • Mika: Hiroshi's girlfriend. She is cheerful and optimistic, but also stubborn and impulsive. She likes to tease Hiroshi and Takeshi.
              • -
              • Naomi: Mika's best friend. She is smart and calm, but also sarcastic and cynical. She often argues with Mika and Shinji.
              • -
              • Rina: A new student who transferred to Hiroshi's class. She is shy and quiet, but also sweet and gentle. She has a crush on Hiroshi.
              • -
              • Shinji: A delinquent who bullies Hiroshi and his friends. He is arrogant and rude, but also brave and strong. He has a crush on Naomi.
              • -
              • Akira: Hiroshi's friend who invited him to the school at night. He is adventurous and daring, but also reckless and irresponsible. He likes to challenge Hiroshi and his friends.
              • -
              -

              Why is Ao Oni School Nightmare so scary?

              -

              Ao Oni School Nightmare is a game that relies on psychological horror rather than gore or violence. It uses several elements to create a terrifying atmosphere and experience for the player:

              -

          

              -

              The design of the oni

              -

              The oni is a blue creature that resembles a giant distorted human with a large mouth full of sharp teeth. It has no eyes or nose, only two holes on its face. It has long arms and legs that allow it to move fast and reach far. It can also change its shape and size depending on the situation.

              -

              The oni's appearance is based on a Japanese folklore creature called ao oni (literally "blue demon"), which is said to haunt abandoned buildings and prey on humans. The oni's design is simple yet effective in creating a sense of fear and dread in the player.

              -

              The sound effects and music

              -

              The game uses minimal sound effects and music to create tension and suspense in the player. The game is mostly silent except for some ambient noises such as footsteps, doors opening and closing, clocks ticking, etc. The silence makes the player more alert and attentive to their surroundings.

              -

              However, when the oni appears or chases the player, the game plays loud and distorted sound effects such as screams, roars, crashes, etc., as well as a fast-paced chase theme that increases the player's heart rate and adrenaline. The contrast between silence and noise makes the player more anxious and panicked when facing the oni.

              -

              The unpredictability and randomness

              -

              The game uses unpredictability and randomness to create surprise and shock in the player. The game does not follow a fixed pattern or script for when or where the oni will appear or chase the player. The oni can appear at any time or place in the school without warning or trigger.

              -

          The game also uses jump scares to catch the player off guard when they least expect it. For example, when opening a door or turning a corner, the oni might suddenly pop up in front of them or behind them. The unpredictability and randomness make the player more nervous and paranoid when exploring the school.
          

              The difficulty and challenge

              -

          The game is not easy to complete, especially on the hard mode. It requires the player to have good memory, logic, and intuition to solve the puzzles and find the clues. Some of the puzzles are language-based, which means that the player needs to know some Korean or use a translator to understand them. The game also demands good reflexes, timing, and strategy to avoid or outrun the oni. The game does not have a save feature, which means that the player has to start over from the beginning if they die or quit.
          

              -

              The game is a challenge for both the mind and the body of the player. The game tests the player's intelligence, courage, and endurance. The game rewards the player with a sense of accomplishment and satisfaction if they manage to escape from the school and save their friends.

              -

              How to download and play Ao Oni School Nightmare?

              -

              If you are interested in playing Ao Oni School Nightmare, here are some things you need to know:

              -

              The requirements and compatibility

              -

          The game is compatible with Windows PCs and Android devices. It does not require a high-end system or device to run smoothly. However, it might have some bugs or glitches depending on your device or version, and it might show missing text or fonts if you do not have the Korean language pack installed on your device.
          

              -

              The download links and sources

              -

              The game is available for free download from various sources online. However, some of these sources might be unreliable or unsafe, so be careful when downloading the game. Here are some of the official and trusted sources where you can download the game:

              -
                -
              • For PC: http://hukutonmai.com/file/aooniS.egg (Easy mode) or http://hukutonmai.com/file/aooniSHard.egg (Hard mode)
              • -
              • For Android: http://hukutonmai.com/file/aooniSMobile.egg (Mobile version)
              • -
              -

              These links are from the original creator of the game, Hukutonmai. You can also visit his website for more information about the game and his other works: http://hukutonmai.com/

              -

              The tips and tricks for playing

              -

              If you want to have a better experience and performance when playing Ao Oni School Nightmare, here are some tips and tricks that might help you:

              -
                -
              • Use headphones or earphones when playing. This will enhance the sound effects and music of the game and make it more immersive and scary.
              • -
              • Play in a dark room or at night. This will create a more spooky atmosphere and mood for the game and make it more realistic and thrilling.
              • -
              • Save your items and resources wisely. You will need them for solving puzzles or escaping from the oni. Do not waste them on unnecessary things or situations.
              • -
              • Explore every room and corner of the school. You might find hidden items, clues, or secrets that will help you progress in the game or unlock different endings.
              • -
              • Be careful when opening doors or turning corners. The oni might be waiting for you there or behind you. Always be alert and ready to run or hide.
              • -
              • Do not give up easily. The game is hard but not impossible. If you die or get stuck, try again or look for solutions online. You can do it!
              • -
              -

              Conclusion

              -

              Ao Oni School Nightmare is a terrifying RPG horror game that will keep you on edge and make you scream. It is a fan-made version of Ao Oni that takes place in a school where a monster lurks and chases you. It is a game that combines psychological horror, sound effects, unpredictability, randomness, difficulty, challenge, puzzles, and survival action sequences.

              -

              If you are looking for a horror game that will test your intelligence, courage, and endurance, Ao Oni School Nightmare is a game that you should try. It is available for free download for Windows PC and Android devices. It is a game that will give you a nightmare that you will never forget.

              -

              FAQs

              -
                -
              1. What is Ao Oni?
              2. -

                Ao Oni is a Japanese indie RPG horror game that was created by noprops in 2008. It is about a group of friends who visit an abandoned mansion where they encounter a blue creature called ao oni who chases them.

                -
              3. What is Ao Oni School Nightmare?
              4. -

                Ao Oni School Nightmare is a Korean fan-made RPG horror game that was created by Hukutonmai in 2011. It is about a group of friends who visit an empty school where they encounter a blue creature called ao oni who chases them.

                -
              5. How many endings does Ao Oni School Nightmare have?
              6. -

                Ao Oni School Nightmare has four endings: Escape Alone (Bad Ending), Escape with Takeshi (Normal Ending), Escape with All Friends (Good Ending), and Secret Ending (True Ending).

                -
              7. How do I get the secret ending in Ao Oni School Nightmare?
              8. -

                To get the secret ending in Ao Oni School Nightmare, you need to collect eight orbs that are hidden around the school. Each orb corresponds to one of your friends who died in the game. After collecting all eight orbs, you need to go to the roof where you will find a secret door that leads to the final confrontation with the oni.

                -
              9. Is Ao Oni School Nightmare scary?
              10. -

                Ao Oni School Nightmare is very scary. It is one of the most terrifying RPG horror games you can play. It uses psychological horror, sound effects, unpredictability, randomness, difficulty, challenge, puzzles, and survival action sequences to create a terrifying atmosphere and experience for the player.

              -

          
              -
              -
              \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Artensoft Photo Collage Maker 2.0.135 Keys What You Need to Know Before You Buy.md b/spaces/tialenAdioni/chat-gpt-api/logs/Artensoft Photo Collage Maker 2.0.135 Keys What You Need to Know Before You Buy.md deleted file mode 100644 index b68af539cdbd18940b2876f4475d54e9103b4265..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Artensoft Photo Collage Maker 2.0.135 Keys What You Need to Know Before You Buy.md +++ /dev/null @@ -1,142 +0,0 @@ - -

              Artensoft Photo Collage Maker 2.0.135 Keys: How to Create Stunning Photo Collages with Ease

              -

              Introduction

              -

              Are you looking for a way to turn your photos into amazing artworks? Do you want to create photo collages that look like paintings or mosaics? If so, you should try Artensoft Photo Collage Maker, a unique and powerful software that lets you create photo collages automatically.

              -

              What is Artensoft Photo Collage Maker?

              -

              Artensoft Photo Collage Maker is a software that allows you to create photo collages from your own photos. Unlike other collage makers, Artensoft Photo Collage Maker does not use templates or grids. Instead, it uses a sophisticated algorithm that analyzes your photos and arranges them into a collage that resembles a base image that you choose.

              -

              Artensoft Photo Collage Maker 2.0.135 Keys


              Download Zip ✯✯✯ https://urlcod.com/2uK8Wa



              -

              The result is a stunning photo collage that looks like a photomosaic, where each cell is a small photo that matches the color and brightness of the corresponding pixel in the base image. You can zoom in and out of the collage and see the details of each photo.

              -

              What are the benefits of using Artensoft Photo Collage Maker?

              -

              There are many benefits of using Artensoft Photo Collage Maker, such as:

              -
                -
              • It is easy and fun to use. You don't need any design skills or experience to create beautiful photo collages.
              • -
              • It is fast and efficient. You can create a photo collage in minutes with just a few clicks.
              • -
              • It is versatile and creative. You can use any photos you want, from your personal collection, from online sources, or from the free photo archive that comes with the software. You can also use any image as a base image, such as a portrait, a landscape, a logo, or a text.
              • -
              • It is high-quality and professional. You can create photo collages with high resolution and print them on any size and format. You can also save them in various file formats (JPEG, Bitmap, TIFF, PNG) and share them online or offline.
              • -
              -

              How to get Artensoft Photo Collage Maker 2.0.135 Keys?

              -

              If you want to enjoy all the features and benefits of Artensoft Photo Collage Maker, you need to get the latest version of the software, which is 2.0.135. To do that, you need to get the serial key that will activate the software.

              -

          

              -

              There are two ways to get the serial key:

              -
                -
              1. You can buy it from the official website of Artensoft for $79.95. This will give you a lifetime license for one computer and free updates.
              2. -
              3. You can download it from a cracked software website for free. This will give you access to the full version of the software without paying anything.
              4. -
              -

              However, we recommend that you choose the first option, as it is more secure and reliable. Downloading cracked software can expose your computer to viruses, malware, or spyware that can harm your system or steal your personal information.

              -

              How to use Artensoft Photo Collage Maker?

              -

              Using Artensoft Photo Collage Maker is very simple and straightforward. You just need to follow these steps:

              -

              Step 1: Download and install the software

              -

              The first step is to download and install the software on your computer. You can do that by visiting the official website of Artensoft and clicking on the "Download" button. Then, run the setup file and follow the instructions on the screen.

              -

              Once you have installed the software, you need to enter the serial key that you have purchased or downloaded. This will activate the software and allow you to use it without any limitations.

              -

              Step 2: Choose a base image for your collage

              -

          The next step is to choose a base image for your collage. This is the image that determines what your collage will look like.
          

              -

              You can choose any image you want from your computer or from online sources. You can also use one of the sample images that come with the software.

              -

              To choose a base image, click on the "Load" button on the top left corner of the main window and browse for the image file on your computer or enter its URL if it is online.

              -

              Once you have chosen a base image, it will appear on the left side of the main window. You can adjust its size and position by dragging its corners or edges.

              -

              Step 3: Add photos to the database

              -

              The third step is to add photos to the database. These are the photos that will be used as elements of your collage.

              -

              You can add as many photos as you want from your computer or from online sources. You can also use one of the folders of photos that come with the software.

              -

              To add photos to the database, click on the "Add Folder" button on the top right corner of the main window and browse for the folder that contains your photos on your computer or enter its URL if it is online. Once you have added a folder of photos, it will appear on the right side of the main window. You can add more folders by repeating this process. The more photos you add, the more varied and detailed your collage will be. However, you should also consider the size and quality of the photos, as they will affect the speed and performance of the collage creation process.

              -

              Step 4: Start the collage creation process

              -

              The fourth step is to start the collage creation process. This is the most exciting part, as you will see how your collage takes shape. To start the collage creation process, click on the "Create" button on the bottom right corner of the main window. This will open a new window where you can see the progress and status of the process. The process may take some time, depending on the number and size of your photos, the complexity of your base image, and the power of your computer. You can pause or cancel the process at any time by clicking on the corresponding buttons. When the process is finished, you will see your collage on the right side of the new window. You can zoom in and out by using the mouse wheel or by clicking on the plus (+) or minus (-) buttons. You can also drag the collage around by holding down the left mouse button.

              -

              Step 5: Review and edit the collage

              -

              The fifth step is to review and edit your collage. You can make some adjustments and changes to improve its appearance and quality. To review and edit your collage, you can use some of the tools and options available on the top toolbar of the new window. For example, you can:

                -
              • Change the cell size by moving the slider or entering a value in the box next to it. The cell size determines how big each photo element will be in your collage. A smaller cell size will make your collage more detailed but also more crowded. A larger cell size will make your collage less detailed but also more spacious.
              • -
          
                • Change the shape of the cells by clicking on one of the icons next to the cell size slider. You can choose between square, rectangle, or hexagon shapes. The shape of the cells affects how well they fit into the base image and how realistic the collage looks.
                • -
                • Change the rotation of the cells by moving the slider or entering a value in the box next to it. The rotation determines how much each photo element will be rotated in your collage. A higher rotation will make your collage more dynamic and artistic. A lower rotation will make your collage more orderly and realistic.
                • -
                • Change the opacity of the cells by moving the slider or entering a value in the box next to it. The opacity determines how transparent each photo element will be in your collage. A higher opacity will make your collage more vivid and colorful. A lower opacity will make your collage more subtle and blended.
                • -
                • Change the color correction of the cells by moving the slider or entering a value in the box next to it. The color correction determines how much each photo element will be adjusted to match the color and brightness of the base image. A higher color correction will make your collage more harmonious and consistent. A lower color correction will make your collage more diverse and contrasted.
                • -
                • Change the number of repeats of the photos by moving the slider or entering a value in the box next to it. The number of repeats determines how many times each photo element will be used in your collage. A higher number of repeats will make your collage more uniform and smooth. A lower number of repeats will make your collage more varied and textured.
                • -
                • Edit individual cells by clicking on them and using the options that appear on the bottom toolbar of the new window. You can move, rotate, resize, delete, or replace any cell in your collage. You can also lock or unlock any cell to prevent it from being changed by other options.
                • -
                -

          You can preview what your collage will look like when printed or saved by clicking on the "Preview" button on the bottom toolbar of the new window. This will open a new window where you can see your collage in full size and quality.
          

                -

                Step 6: Save and share the collage

                -

                The final step is to save and share your collage. You can do that by using the options available on the bottom toolbar of the new window. For example, you can:

                  -
                • Save your collage as an image file by clicking on the "Save" button. This will open a dialog box where you can choose the file name, the file format (JPEG, Bitmap, TIFF, PNG), and the file size (in pixels) of your collage. You can also choose the quality (from 1 to 100) and the compression (from 0 to 100) of your collage.
                • -
                • Print your collage by clicking on the "Print" button. This will open a dialog box where you can choose the printer, the paper size, and the orientation (portrait or landscape) of your collage. You can also adjust the margins, the scaling, and the alignment of your collage.
                • -
                • Share your collage online by clicking on the "Share" button. This will open a dialog box where you can choose the social media platform (Facebook, Twitter, Instagram, etc.) or the email service (Gmail, Yahoo, Outlook, etc.) that you want to use to share your collage. You can also add a caption, a hashtag, or a tag to your collage.
                • -
                -

                Tips and tricks for making better photo collages

                -

                To make better photo collages with Artensoft Photo Collage Maker, you can follow these tips and tricks:

                -

                Use high-quality photos

                -

                The quality of your photos affects the quality of your collage. Therefore, you should use high-quality photos that are clear, sharp, and bright. You should also avoid using photos that are blurry, dark, or noisy.

                -

                Adjust the cell size and shape

                -

                The cell size and shape affect how detailed and realistic your collage looks. Therefore, you should adjust them according to your preference and purpose. For example, if you want to create a portrait collage, you may want to use smaller cells and square shapes to capture more facial features. If you want to create a landscape collage, you may want to use larger cells and rectangular shapes to cover more area.

                -

                Experiment with different settings and effects

                -

                The settings and effects affect how artistic and creative your collage looks. Therefore, you should experiment with different settings and effects to find the best combination for your collage. For example, if you want to create a dynamic and colorful collage, you may want to use higher rotation, opacity, and color correction values. If you want to create a subtle and blended collage, you may want to use lower rotation, opacity, and color correction values.

                -

                Use a theme or a story for your collage

                -

                A theme or a story can make your collage more meaningful and interesting. Therefore, you should use a theme or a story for your collage that relates to your base image or your photos. For example, if you want to create a birthday collage for someone, you may want to use photos that show their life milestones or their hobbies. If you want to create a travel collage for yourself, you may want to use photos that show different places or cultures that you have visited.

                -

                Conclusion

                -

          In conclusion, Artensoft Photo Collage Maker is a great piece of software that allows you to create stunning photo collages with ease. You can use any photos you want and any image as the base for your collage. You can also customize your collage with various tools and options. You can save, print, or share your collage with anyone.
          

                -

                If you want to try Artensoft Photo Collage Maker for yourself, you can download it from here or here. You can also visit their official website for more information and examples.

                -

                Summary of the main points

                -
                  -
                • Artensoft Photo Collage Maker is a software that allows you to create photo collages automatically from your own photos.
                • -
                • You can choose any image as a base image for your collage and any photos as elements of your collage.
                • -
                • You can customize your collage with various tools and options such as cell size, shape, rotation, opacity, color correction, number of repeats, etc.
                • -
                • You can save, print, or share your collage in various file formats and sizes.
                • -
          
                  • You can make better photo collages by using high-quality photos, adjusting the cell size and shape, experimenting with different settings and effects, and using a theme or a story for your collage.
                  • -
                  -

                  Call to action

                  -

                  So what are you waiting for? Download Artensoft Photo Collage Maker today and unleash your creativity. You will be amazed by what you can create with your photos. Whether you want to make a collage for yourself, for your friends, or for your business, Artensoft Photo Collage Maker will help you achieve your goals.

                  -

                  Don't forget to share your collage with us and let us know what you think. We would love to see your masterpiece and hear your feedback. You can also check out our blog and social media pages for more tips and tricks on how to make better photo collages.

                  -

                  Thank you for reading this article and happy collaging!

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about Artensoft Photo Collage Maker:

                  -
                    -
                  1. How many photos do I need to make a collage?
                  2. -

                    There is no fixed number of photos that you need to make a collage. However, we recommend that you use at least 50 photos to get a good result. The more photos you use, the more detailed and varied your collage will be.

                    -
                  3. Can I use photos from different sources?
                  4. -

                    Yes, you can use photos from different sources, such as your computer, online sources, or the free photo archive that comes with the software. However, you should make sure that the photos are compatible with the software and that you have the permission to use them.

                    -
                  5. Can I use any image as a base image?
                  6. -

                    Yes, you can use any image as a base image for your collage, such as a portrait, a landscape, a logo, or a text. However, you should make sure that the image is clear, sharp, and bright enough to be recognized by the software and to match the photos in your database.

                    -
                  7. How long does it take to create a collage?
                  8. -

                    The time it takes to create a collage depends on several factors, such as the number and size of your photos, the complexity of your base image, and the power of your computer. It may take from a few minutes to a few hours. You can see the progress and status of the process on the new window that opens when you start the collage creation process.

                    -
                  9. How can I print or share my collage?
                  10. -

                    You can print or share your collage by using the options available on the bottom toolbar of the new window that opens when you finish the collage creation process. You can save your collage as an image file in various formats and sizes. You can also print your collage on any paper size and format. You can also share your collage online on various social media platforms or email services.

                    -
                  -

          
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Data Migration Using Emcopy.md b/spaces/tialenAdioni/chat-gpt-api/logs/Data Migration Using Emcopy.md deleted file mode 100644 index 130271c0c659631da57430e62c632ae71ace5718..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Data Migration Using Emcopy.md +++ /dev/null @@ -1,38 +0,0 @@ -
                  -

                  How to Migrate Data Between File Systems Using EMCOPY

                  -

                  Data migration is the process of transferring data from one storage system to another, usually for the purpose of upgrading, consolidating, or optimizing the performance of the data. Data migration can be challenging, especially when dealing with large amounts of data and complex file systems. Fortunately, there are tools that can help simplify and automate the data migration process.

                  -

                  Data Migration using emcopy


                  Download Zip 🆓 https://urlcod.com/2uKbcP



                  -

                  One such tool is EMCOPY, a command-line Windows tool that was developed by Dell Technologies to aid the migration of data between file systems. EMCOPY can be used to migrate data to PowerStore from any supported Dell storage system or third-party storage system. EMCOPY is available as a free download from Dell Support.

                  -

          EMCOPY supports the SMB protocol and is aware of file-system access-control settings, which allows this information to be migrated along with the files themselves. EMCOPY can be configured to run regularly on the same file systems to establish an asynchronous host-based replication session. Only modified file system data is transferred when EMCOPY is run on the same file system multiple times.
          

                  -

                  In this article, we will show you how to use EMCOPY to migrate data between file systems in a few simple steps.

                  -

                  Step 1: Download and Install EMCOPY

                  -

                  The first step is to download and install EMCOPY on the Windows server that will perform the data migration. You can download EMCOPY from Dell Support by following this link: https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=9x0j4&oscode=wt64a&productcode=powerstore-1000t.

                  -

                  After downloading the zip file, extract it to a folder of your choice. You will see two files: emcopy.exe and emcopy64.exe. The former is for 32-bit systems and the latter is for 64-bit systems. Choose the appropriate file for your system and copy it to a location that is accessible from the command prompt, such as C:\Windows\System32.
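          Before moving on, you may want to confirm that the tool runs from your command prompt. As a quick check (most EMCOPY builds print their usage text with the standard /? switch; if yours does not, running the executable with no arguments usually lists the available options), you can run:

          emcopy64.exe /?

          If the usage text appears, the executable is in place and ready for the migration steps below.
          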

                  -

                  -

                  Step 2: Prepare the Source and Destination File Systems

                  -

                  The next step is to prepare the source and destination file systems for the data migration. You will need to have access to both file systems from the Windows server that runs EMCOPY. You can use UNC paths or mapped network drives to access the file systems.

                  -

                  For example, if you want to migrate data from a VNX CIFS share to a PowerStore SMB share, you can use UNC paths like this:

                  -
                    -
                  • Source: \\VNX\Share1
                  • -
                  • Destination: \\PowerStore\Share2
                  • -
                  -

                  Alternatively, you can map network drives like this:

                  -
                    -
                  • Source: Z:\ (mapped to \\VNX\Share1)
                  • -
                  • Destination: Y:\ (mapped to \\PowerStore\Share2)
                  • -
                  -
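          As a minimal sketch of how those drive letters could be mapped before the migration (the share names are the examples above, and the /persistent:no switch is optional), you can use the standard Windows net use command:

          net use Z: \\VNX\Share1 /persistent:no
          net use Y: \\PowerStore\Share2 /persistent:no

          Map the drives in the same session and under the same account that will run EMCOPY so that the mappings are visible to the migration job.
          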

                  You will also need to ensure that the user account that runs EMCOPY has the appropriate privileges and permissions on both file systems. The user account should have Domain Admin membership and must be logged onto the system that is conducting the actual migration of data and attributes. The user account should also have the following rights:

                  -
                    -
                  • Bypass access checking to back up and restore files and directories.
                  • -
                  • Member of the Backup Operators group.
                  • -
                  • Member of the Administrators or Account Operators group on both the source and destination computers.
                  • -
                  -

                  Step 3: Run EMCOPY with the Desired Options

                  -

                  The final step is to run EMCOPY with the desired options to perform the data migration. You can run EMCOPY from the command prompt by typing emcopy.exe or emcopy64.exe followed by the source and destination paths and any additional options.

                  -

                  For example, if you want to copy all files and directories from Z:\ (source) to Y:\ (destination) with security information (ACLs, owner, audit), you can use this command:

          emcopy64.exe Z:\ Y:\ /s /sec /o /a
          

          If you want to copy only modified files and directories from Z:\ (source) to Y:\ (destination) on subsequent passes, you can simply run the same command again: when EMCOPY is run repeatedly against the same file systems, only the data that has changed since the previous run is transferred.
          
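          One way to keep the destination in sync until cutover is to re-run the same command on a schedule with the Windows Task Scheduler. The task name and start time below are only illustrative; adjust them to your environment:

          schtasks /create /tn "EMCOPY nightly sync" /tr "C:\Windows\System32\emcopy64.exe Z:\ Y:\ /s /sec /o /a" /sc daily /st 01:00

          Because EMCOPY transfers only the data that has changed between runs, each scheduled pass should finish much faster than the initial copy.
          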

          
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download 3ds Max 2011 Portable 64 Bit A Comprehensive Review and Comparison.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download 3ds Max 2011 Portable 64 Bit A Comprehensive Review and Comparison.md deleted file mode 100644 index 2798d46bb409e53d5a2db730198612ddb1d1bfd1..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download 3ds Max 2011 Portable 64 Bit A Comprehensive Review and Comparison.md +++ /dev/null @@ -1,126 +0,0 @@ -
                  -

                  Download 3ds Max 2011 Portable 64 Bit: A Review

                  -

          If you are looking for professional 3D modeling software that can create stunning scenes for film, games, or architecture, you might want to consider downloading 3ds Max 2011 portable 64 bit. This is a free trial version of Autodesk's popular 3D animation software that offers a wide set of tools and features to unleash your creativity.
          

                  -

                  In this article, we will review some of the main aspects of 3ds Max 2011 portable 64 bit, such as its installation process, user interface, functionality, and performance. We will also provide you with some download links and tips on how to use the software effectively.

                  -

                  Download 3ds Max 2011 Portable 64 Bit


                  Download File ————— https://urlcod.com/2uK3kb



                  -

                  How to Download and Install 3ds Max 2011 Portable 64 Bit

                  -

                  Downloading and installing 3ds Max 2011 portable 64 bit is not a difficult task, but it does require some patience and attention. Here are the steps you need to follow:

                  -
                    -
                  • Go to the official website of Autodesk and fill out an online form with some basic information, such as your name, email, country, and industry.
                  • -
                  • You will receive an email with a download link and a serial number for the software.
                  • -
                  • Click on the download link and choose the option to download the software using the Akamai Net Session interface. This is a secure and fast way to download large files.
                  • -
                  • Once the download is complete, run the setup file and follow the instructions on the screen. You will need to enter the serial number and the product key that you received in the email.
                  • -
                  • You will also need to activate the software online or by phone within 30 days of installation.
                  • -
                  • After the installation is complete, you can launch the software from your desktop or start menu.
                  • -
                  -

                  The User Interface of 3ds Max 2011 Portable 64 Bit

                  -

          The user interface of 3ds Max 2011 portable 64 bit has the professional look and feel you would expect from software of this class. It consists of several elements, such as:
          

                  -
                    -
                  • The menu bar, which contains various commands and options for working with the software.
                  • -
                  • The toolbar, which provides quick access to some of the most commonly used tools and features.
                  • -
                  • The viewport, which displays the scene that you are working on from different perspectives. You can switch between different views, such as perspective, front, top, or left.
                  • -
                  • The command panel, which contains tabs for different modes of operation, such as create, modify, hierarchy, motion, display, or utilities.
                  • -
                  • The scene explorer, which shows the hierarchy and properties of the objects in your scene. You can select, rename, hide, or delete objects from here.
                  • -
                  • The time slider, which allows you to control the animation of your scene. You can set keyframes, play back, or scrub through your animation.
                  • -
                  • The status bar, which displays information about your scene, such as coordinates, frames per second, or memory usage.
                  • -
                  -

                  The Functionality of 3ds Max 2011 Portable 64 Bit

                  -

                  3ds Max 2011 portable 64 bit offers a wide range of functionality for creating and animating 3D models. Some of its most important features are:

                  -
                    -
                  • Advanced polygon modeling and texturing: You can create complex shapes and surfaces using various tools and modifiers. You can also apply materials and maps to your models to give them realistic appearance and behavior.
                  • -
                  • Character animation toolkit: You can rig and animate characters using bones, skinning, inverse kinematics (IK), forward kinematics (FK), morphing, or facial animation. You can also use motion capture data or biped templates to create realistic human motion.
                  • -
                  • Pipeline and workflow support: You can import and export files in various formats, such as DWG, FBX, OBJ, or COLLADA. You can also integrate with other Autodesk products, such as Maya or Mudbox. You can also use scripting languages like MAXScript or Python to automate repetitive tasks or customize your workflow.
                  • -
                  • Quicksilver hardware renderer: You can render your scenes faster using your GPU instead of your CPU. You can also use advanced effects like ambient occlusion (AO), depth of field (DOF), or motion blur.
                  • -
                  -

                  The Performance of 3ds Max 2011 Portable 64 Bit

                  -

          3ds Max 2011 portable 64 bit is powerful software that requires a lot of system resources to run smoothly. Therefore, you need to make sure that your computer meets the minimum requirements for the software. These are:
          

                  -

          

                  -
                    -
                  • Operating system: Windows XP SP3 (32-bit), Windows Vista SP2 (32-bit or 64-bit), Windows 7 (32-bit or 64-bit)
                  • -
                  • Processor: Intel Pentium IV or AMD Athlon XP or higher (32-bit), Intel EM64T or AMD Athlon X2 or higher (64-bit)
                  • -
                  • Memory: At least 2 GB RAM (32-bit), at least 4 GB RAM (64-bit)
                  • -
                  • Hard disk space: At least 2 GB free disk space for installation
                  • -
                  • Graphics card: At least DirectX®9 compatible graphics card with Shader Model 3 support
                  • -
          • Display resolution: At least 1024 x 768 pixels
          
                  • -
                  -

          If you want to improve the performance of your software, you can also follow these tips:
          

                  -
                    -
                  • Closing other programs that are running in the background
                  • -
                  • Cleaning up your hard disk space by deleting unnecessary files
                  • -
                  • Defragmenting your hard disk regularly
                  • -
                  • Updating your drivers and software patches
                  • -
                  • Optimizing your scene by reducing polygon count or using proxies
                  • -
                  - -

                  Conclusion

                  - -

          In conclusion, downloading 3ds Max 2011 portable 64 bit is a great way to try out one of the best professional 3D modeling packages on the market. It offers a wide range of tools and features for creating stunning scenes for film, games, or architecture. It also has a user-friendly interface and a fast rendering engine that can enhance your workflow and productivity. However, you need to make sure that your computer meets the minimum requirements for running the software smoothly. You also need to activate the software online or by phone within 30 days of installation.
          

                  - -

          If you want to download the free trial version of 3ds Max 2011 portable 64 bit, you can use one of these links:
          

                  - - - -

          We hope this article was helpful. If you have any questions or feedback, please let us know in the comments section below.
          

                  -

                  The Pros and Cons of Downloading 3ds Max 2011 Portable 64 Bit

                  -

                  Downloading 3ds Max 2011 portable 64 bit has its advantages and disadvantages that you should be aware of before deciding to use it for your projects. Here are some of the pros and cons of this software:

                  -

                  The Pros

                  -
                    -
                  • It is compatible: You can use 3ds Max 2011 portable 64 bit on any Windows system that meets the minimum requirements, regardless of the version or edition. You can also use it on both 32-bit and 64-bit systems.
                  • -
                  • It is flexible: You can use 3ds Max 2011 portable 64 bit for various purposes and industries, such as film and video production, game development and design, architectural visualization, product design, etc. You can also create different types of models, such as organic or hard-surface, static or animated, realistic or stylized, etc.
                  • -
                  • It is customizable: You can customize 3ds Max 2011 portable 64 bit to suit your needs and preferences. You can change the layout and appearance of the user interface, create your own shortcuts and menus, use different plugins and extensions, or write your own scripts and macros.
                  • -
                  • It is educational: You can learn a lot from using 3ds Max 2011 portable 64 bit. You can improve your skills and knowledge in 3D modeling, texturing, lighting, animation, rendering, etc. You can also find many tutorials and resources online that can help you master the software.
                  • -
                  -

                  The Cons

                  -
                    -
                  • It is limited: You can only use 3ds Max 2011 portable 64 bit for free as a trial version for 30 days. After that, you need to activate the software online or by phone using the serial number and the product key that you received in the email. If you don't activate the software, you won't be able to use it anymore.
                  • -
                  • It is outdated: You can only use 3ds Max 2011 portable 64 bit, which is an old version of the software that was released in 2011. This means that you won't be able to access the latest features and updates that are available in the newer versions of the software.
                  • -
                  • It is risky: You can only download 3ds Max 2011 portable 64 bit from unofficial sources that may not be safe or reliable. You may encounter problems such as viruses, malware, corrupted files, or broken links. You may also violate the terms and conditions of Autodesk by using an unauthorized version of the software.
                  • -
                  • It is unsupported: You can't get any technical support or customer service from Autodesk if you use 3ds Max 2011 portable 64 bit. You are on your own if you encounter any issues or errors with the software. You may also not be able to access some online services or features that require an official license.
                  • -
                  - -

                  Conclusion

                  - -

                  In conclusion, downloading 3ds Max 2011 portable 64 bit is a great way to try out one of the best professional 3D modeling software available in the market. It offers a wide range of tools and features for creating stunning scenes for film, game, or architecture. It also has a user-friendly interface and a fast rendering engine that can enhance your workflow and productivity. However, you need to make sure that your computer meets the minimum requirements for running the software smoothly. You also need to activate the software online or by phone within 30 days of installation.

                  - -

                  If you want to download 3ds Max 2011 portable 64 bit as a free trial version, you can use one of these links:

                  - - - -

                  We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments section below.

                  -


                  679dcb208e
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Moviefactory Pro 7 Serial Number Activation Code Final.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Moviefactory Pro 7 Serial Number Activation Code Final.md deleted file mode 100644 index 1070dde9277d81be9b8a5e5ffd1b4ab8aa559426..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Moviefactory Pro 7 Serial Number Activation Code Final.md +++ /dev/null @@ -1,63 +0,0 @@ - -

                  Dvd Moviefactory Pro 7 Serial Number Activation Code Final

                  -

                  Introduction

                  -
                    -
                  • What is Dvd Moviefactory Pro 7 and what are its features?
                  • -
                  • Why do you need a serial number activation code to use it?
                  • -
                  • How to get a valid serial number activation code for Dvd Moviefactory Pro 7?
                  • -
                  -

                  What is Dvd Moviefactory Pro 7 and what are its features?

                  -
                    -
                  • A brief overview of Dvd Moviefactory Pro 7 and its history
                  • -
                  • A list of the main features of Dvd Moviefactory Pro 7, such as cutting-edge authoring, design Hollywood-style menus, pro-quality DVD production, burn and copy CDs and DVDs, next generation HD, etc.
                  • -
                  • A comparison of Dvd Moviefactory Pro 7 with other similar software, such as VideoStudio, Nero, etc.
                  • -
                  -

                  Why do you need a serial number activation code to use it?

                  -
                    -
                  • An explanation of what a serial number activation code is and how it works
                  • -
                  • The benefits of having a serial number activation code, such as unlocking all the features, avoiding trial limitations, ensuring legal use, etc.
                  • -
                  • The risks of using a fake or invalid serial number activation code, such as malware infection, software malfunction, legal issues, etc.
                  • -
                  -

                  How to get a valid serial number activation code for Dvd Moviefactory Pro 7?

                  -
                    -
                  • The official way to get a serial number activation code for Dvd Moviefactory Pro 7, such as buying it from the Corel website or authorized resellers
                  • -
                  • The alternative ways to get a serial number activation code for Dvd Moviefactory Pro 7, such as using a keygen, a crack, or a patch
                  • -
                  • The pros and cons of each alternative way, such as the ease of use, the reliability, the legality, etc.
                  • -
                  -

                  Conclusion

                  -
                    -
                  • A summary of the main points of the article
                  • -
                  • A recommendation of the best way to get a serial number activation code for Dvd Moviefactory Pro 7
                  • -
                  • A call to action for the readers to try Dvd Moviefactory Pro 7 and share their feedback
                  • -
                  -

                  FAQs

                  -
                    -
                  • Q: Is Dvd Moviefactory Pro 7 compatible with Windows 10?
                  • -
                  • A: Yes, Dvd Moviefactory Pro 7 is compatible with Windows XP, Vista, 7, 8, and 10.
                  • -
                  • Q: How can I contact Corel customer support if I have any issues with Dvd Moviefactory Pro 7?
                  • -
                  • A: You can contact Corel customer support by phone, email, or chat from their website. You can also access their knowledgebase, user forum, user guide, and tutorials for more help.
                  • -
                  • Q: How can I update Dvd Moviefactory Pro 7 to the latest version?
                  • -
                  • A: You can update Dvd Moviefactory Pro 7 by downloading and installing the latest service pack from their website. You can also check for updates from within the software by clicking on Help > Check for Updates.
                  • -
                  • Q: How can I create stunning HD slideshows with Dvd Moviefactory Pro 7?
                  • -
                  • A: You can create stunning HD slideshows with Dvd Moviefactory Pro 7 by following these steps:
                  • -
                      -
                    1. Launch Dvd Moviefactory Pro 7 and click on Create Disc > HD DVD or Blu-ray Disc.
                    2. -
                    3. Select Slideshow Disc from the menu and click Next.
                    4. Add your photos and videos to the slideshow by clicking on Add or Browse. You can also drag and drop them from your computer.
                    5. Edit your photos and videos by clicking on Edit. You can crop, rotate, adjust, add effects, transitions, captions, and music to your slideshow.
                    6. -
                    7. Preview your slideshow by clicking on Preview. You can also change the disc settings, such as the title, menu, chapters, etc.
                    8. -
                    9. Burn your slideshow to a disc by clicking on Burn. You can choose the disc type, quality, speed, and label for your disc.
                    10. -
                    -
                  • Q: How can I convert my video files to different formats with Dvd Moviefactory Pro 7?
                  • -
                  • A: You can convert your video files to different formats with Dvd Moviefactory Pro 7 by following these steps (for a scripted alternative with the free FFmpeg tool, see the sketch after this list):
                  • -
                      -
                    1. Launch Dvd Moviefactory Pro 7 and click on Convert Video Files.
                    2. -
                    3. Add your video files to the conversion list by clicking on Add Files or Browse. You can also drag and drop them from your computer.
                    4. -
                    5. Select the output format and settings for your video files by clicking on Output Format. You can choose from a variety of formats, such as AVI, MPEG, WMV, MP4, MOV, etc. You can also customize the video and audio parameters, such as resolution, bitrate, frame rate, codec, etc.
                    6. -
                    7. Start the conversion process by clicking on Start. You can monitor the progress and status of the conversion in the window.
                    8. -
                    -
                  -
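                  If you prefer to script your conversions instead of using the DVD MovieFactory interface, the free FFmpeg tool can do the same kind of format conversion. The sketch below is only an illustration under that assumption and is not part of DVD MovieFactory; the folder name `videos` and the AVI-to-MP4 conversion are placeholders you should replace with your own paths and formats.

```python
import subprocess
from pathlib import Path


def convert_video(src: str, dst: str) -> None:
    """Convert one video file to the format implied by dst's extension.

    Assumes the free FFmpeg tool (ffmpeg.org) is installed and on the PATH.
    """
    # -y overwrites dst if it already exists; FFmpeg picks the codecs from the extension.
    subprocess.run(["ffmpeg", "-y", "-i", src, dst], check=True)


if __name__ == "__main__":
    # Hypothetical paths: convert every .avi file in the "videos" folder to .mp4.
    for source in Path("videos").glob("*.avi"):
        convert_video(str(source), str(source.with_suffix(".mp4")))
```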

                  -

                  Dvd Moviefactory Pro 7 Serial Number Activation Code Final


                  Download Zip ☆☆☆☆☆ https://urlcod.com/2uK9v2



                  b2dd77e56b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/OMSI-Stadtbus-O305.md b/spaces/tioseFevbu/cartoon-converter/OMSI-Stadtbus-O305.md deleted file mode 100644 index 1edd467c4ea42b4d55f62b355718c99b4e2934c1..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/OMSI-Stadtbus-O305.md +++ /dev/null @@ -1,43 +0,0 @@ -## OMSI Stadtbus O305 - - - -**OMSI Stadtbus O305 • [https://vercupalo.blogspot.com/?d=2tvYpX](https://vercupalo.blogspot.com/?d=2tvYpX)** - - - -# OMSI Stadtbus O305: A Classic Bus Simulator Add-on - - - -If you are a fan of bus simulation games, you might have heard of OMSI: The Bus Simulator, a realistic and detailed game that lets you drive various buses in different scenarios and maps. But did you know that there is an add-on for OMSI that features one of the most iconic buses in German history? The OMSI Stadtbus O305 add-on is a downloadable content that brings the Mercedes-Benz O305 city bus to life in OMSI. - - - -The OMSI Stadtbus O305 add-on was created by Rolf Westphalen, a bus enthusiast and modder who wanted to recreate the experience of driving the O305 in OMSI. The add-on includes three different versions of the O305, each with its own features and characteristics. You can choose between automatic and manual transmission, realistic sounds, dirty buses, cash table, driving light, indoor light, instrument light, heating, ventilation windows, skylights and more. You can also use the stop request button with the "bing" sound at bus stops and control the different door modes. - - - -The add-on also comes with a fictional German map called Neuendorf, where you can test the O305 on five drivable routes. Neuendorf is a medium-sized town in the middle of Germany where you can drive around the city centre or the suburban areas. You can take the workers to their jobs at the Böttcherwerk factory or take them home after their shift. The map has a variety of scenery and traffic situations to challenge your driving skills. - - - -The OMSI Stadtbus O305 add-on is a must-have for any bus simulator fan who wants to relive the history of the German omnibuses and explore the technology of the O305. The add-on is available on Steam for $19.99 and requires the base game OMSI 2: Steam Edition to play. You can also watch some gameplay videos on YouTube to see the add-on in action. - - - -So what are you waiting for? Get behind the wheel of the O305 and enjoy the ride! - - - -The OMSI Stadtbus O305 add-on has received very positive reviews from the users who have tried it. They praised the high quality of the bus models, the realistic driving physics, the detailed interior and exterior, the authentic sounds and animations, and the immersive atmosphere of the Neuendorf map. They also appreciated the option to choose between automatic and manual transmission, which adds more challenge and variety to the gameplay. Some users even said that the O305 is their favorite bus in OMSI and that they enjoy driving it on other maps as well. - - - -The OMSI Stadtbus O305 add-on is not only a fun and realistic simulation game, but also a tribute to the history and technology of the O305 bus. The O305 was one of the first standard buses in Germany, designed by a consortium of bus manufacturers to meet the needs of public transport. The O305 was produced from 1967 to 1987 and served in many cities across Germany and Europe. The O305 was known for its reliability, durability, comfort, and capacity. 
It was also one of the first buses to feature an articulated design, which allowed for more passengers and better maneuverability. - - - -If you want to experience what it was like to drive the O305 bus in the 1980s, you should definitely check out the OMSI Stadtbus O305 add-on. You will not regret it! - - 1b8d091108 \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Archistation.rar UPD.md b/spaces/tioseFevbu/cartoon-converter/scripts/Archistation.rar UPD.md deleted file mode 100644 index ce9235489694e68bac687433f994f4d4682186ca..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Archistation.rar UPD.md +++ /dev/null @@ -1,52 +0,0 @@ -
                  -

                  How to Use Archistation.rar to Compress and Extract Files on Mac

                  - -

                  Archistation.rar is an archive file format that you can use to compress and extract data on your Mac. It is similar to ZIP or RAR files, but it has some advantages over them. In this article, we will show you how to use Archistation.rar to store and transport data on your Mac.

                  -

                  Archistation.rar


                  Download Filehttps://urlcod.com/2uHwAH



                  - -

                  What is Archistation.rar?

                  - -

                  Archistation.rar is a file format that uses the RAR compression algorithm to reduce the size of files and folders. It can also split large files into smaller parts, encrypt them with a password, and create self-extracting archives. Archistation.rar is compatible with WinRAR[^1^], a popular compression tool for Windows, as well as other RAR extractors for Mac, Linux, Android, and other platforms[^1^].

                  - -

                  Why Use Archistation.rar?

                  - -

                  Archistation.rar has some benefits over other file formats, such as:

                  - -
                    -
                  • It can compress files more efficiently than ZIP or RAR, saving disk space and bandwidth.
                  • -
                  • It can handle files of any size and type, including multimedia, documents, software, etc.
                  • -
                  • It can protect your files with strong encryption and password protection.
                  • -
                  • It can create self-extracting archives that can run on any Mac without installing any software.
                  • -
                  • It can repair damaged or corrupted archives and recover lost data.
                  • -
                  - -

                  How to Use Archistation.rar?

                  - -

                  To use Archistation.rar, you need software that can create and extract RAR files on your Mac. There are several options available, such as:

                  - -
                    -
                  • Archistation, a fast and simple tool that can compress and extract Archistation.rar files without a complex configuration setup[^3^]. It also supports ZIP, 7Z, TAR, GZIP, BZIP2, XZ, LZIP, DMG, ISO, LZMA, EXE, CAB, WIM, PAX, JAR, APK, APPX and more[^3^]. You can download it from netcityme.com[^3^].
                  • -
                  • WinRAR, a powerful and versatile tool that can create and extract Archistation.rar files as well as other popular file formats. It also offers advanced features such as encryption, password protection, self-extracting archives, recovery mode, etc. You can download it from win-rar.com[^1^] [^2^].
                  • -
                  • UnRarX, a free and easy-to-use tool that can extract Archistation.rar files on your Mac. It also supports password-protected and multipart archives. You can download it from unrarx.com.
                  • -
                  - -

                  To create an Archistation.rar file on your Mac using Archistation or WinRAR, follow these steps (a command-line sketch follows the list):

                  - -
                    -
                  1. Select the files or folders you want to compress.
                  2. -
                  3. Right-click on them and choose "Compress with Archistation" or "Add to archive" from the context menu.
                  4. -
                  5. Select "Archistation.rar" as the archive format and adjust the compression settings as you wish.
                  6. -
                  7. Click "OK" to start the compression process.
                  8. -
                  9. A new file with the extension ".archistation.rar" will be created in the same location as the original files or folders.
                  10. -
                  - -
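                  If you already have the command-line `rar` tool from rarlab.com installed, you can also create the archive from a script instead of the Finder context menu. This is a minimal sketch under that assumption; the archive name and the `project` folder are placeholders, and the GUI steps above remain the normal route.

```python
import subprocess


def create_rar(archive: str, *inputs: str) -> None:
    """Pack the given files or folders into a RAR archive.

    Assumes the command-line 'rar' tool (rarlab.com) is installed and on the PATH.
    """
    # 'a' adds files to the archive, '-r' recurses into folders.
    subprocess.run(["rar", "a", "-r", archive, *inputs], check=True)


if __name__ == "__main__":
    # Hypothetical names; replace with the files you actually want to compress.
    create_rar("project.archistation.rar", "project")
```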

                  To extract an Archistation.rar file on your Mac using Archistation or WinRAR, follow these steps (a scripted alternative is sketched after the list):

                  -

                  - -
                    -
                  1. Select the Archistation.rar file you want to extract.
                  2. -
                  3. Double-click on it or right-click on it and choose "Extract with Archistation" or "Extract here" from the context menu.
                  4. -
                  5. If the archive is password-protected, enter the password when prompted.
                  6. -
                    7. The contents of the archive will be extracted to the same location as the Archistation.rar file or to a folder that you choose.
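                  For completeness, the extraction can also be scripted in Python. This is only a sketch, not a feature of Archistation itself: it assumes the archive is an ordinary RAR file, that the third-party `rarfile` package is installed (`pip install rarfile`), and that an unrar backend such as the `unrar` command-line tool is available. The file and folder names are placeholders.

```python
import rarfile  # third-party package: pip install rarfile


def extract_rar(archive_path: str, destination: str) -> None:
    """List and then extract every entry of a RAR archive into destination.

    rarfile delegates the actual decompression to an external backend
    (e.g. the 'unrar' command-line tool), which must be installed.
    """
    with rarfile.RarFile(archive_path) as rf:
        for name in rf.namelist():  # show what is inside before extracting
            print(name)
        rf.extractall(path=destination)


if __name__ == "__main__":
    # Hypothetical names; replace with your own archive and target folder.
    extract_rar("example.archistation.rar", "extracted")
```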

                    e93f5a0c3f
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Geopolitical Simulator Activation Code Keygen For 37 [BEST].md b/spaces/tioseFevbu/cartoon-converter/scripts/Geopolitical Simulator Activation Code Keygen For 37 [BEST].md deleted file mode 100644 index 6809987222742312ec86d22662d6ac4b3a89d2a6..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Geopolitical Simulator Activation Code Keygen For 37 [BEST].md +++ /dev/null @@ -1,27 +0,0 @@ - -

                    How to Get Geopolitical Simulator Activation Code Keygen For 37

                    -

                    Geopolitical Simulator is a realistic simulation game that lets you play as the leader of any country in the world. You can manage your economy, diplomacy, military, social issues, and more. You can also face real-world events and scenarios, such as wars, crises, elections, and disasters.

                    -

                    Geopolitical Simulator Activation Code Keygen For 37


                    Download Filehttps://urlcod.com/2uHwxd



                    -

                    However, to play this game, you need an activation code, which is generated by a keygen. A keygen is a program that creates unique serial numbers for a specific piece of software. Unfortunately, the official keygen for Geopolitical Simulator is not available for free. You have to buy the game from the official website or a trusted retailer.

                    -

                    But don't worry, there are some ways to get Geopolitical Simulator activation code keygen for 37 without paying anything. Here are some of them:

                    -
                      -
                    • Search for a cracked version of the game on torrent sites or file-sharing platforms. A cracked version is a modified version of the game that bypasses the activation process. However, this method is risky and illegal. You may download viruses or malware that can harm your computer or compromise your personal data. You may also face legal consequences for violating the game's copyright.
                    • -
                    • Look for a giveaway or a contest that offers Geopolitical Simulator activation code keygen for 37 as a prize. Sometimes, the game's developers or sponsors may host a giveaway or a contest on their social media pages or websites. You can enter by following their instructions and rules. However, this method is not guaranteed and depends on your luck. You may have to compete with many other participants and wait for a long time.
                    • -
                    • Use a generator tool that claims to create Geopolitical Simulator activation code keygen for 37 online. A generator tool is a website or an app that promises to generate valid serial numbers for any program. However, this method is unreliable and unsafe. Most of these tools are scams that will ask you to complete surveys, download apps, or provide personal information. They will not give you the code you want and may steal your data or money.
                    • -
                    -

                    As you can see, none of these methods are easy or secure. The best way to get Geopolitical Simulator activation code keygen for 37 is to buy the game from the official website or a trusted retailer. This way, you will support the game's developers and enjoy the game without any problems.

                    - -

                    If you decide to buy the game, you will need to follow these steps to activate it:

                    -

                    -
                      -
                    1. Go to the official website of Geopolitical Simulator and click on the "Buy" button.
                    2. -
                    3. Select your preferred payment method and complete the transaction.
                    4. -
                    5. Check your email for the confirmation message and the activation code keygen for 37.
                    6. -
                    7. Download and install the game on your computer.
                    8. -
                    9. Launch the game and enter the activation code keygen for 37 when prompted.
                    10. -
                    11. Enjoy playing Geopolitical Simulator as the leader of any country in the world.
                    12. -
                    -

                    If you encounter any issues or errors while activating or playing the game, you can contact the game's support team. They will help you resolve your problems and answer your questions. You can also check the game's FAQ page or forum for more information and tips.

                    -

                    Geopolitical Simulator is a fun and educational game that will challenge your strategic and decision-making skills. You will learn a lot about the world's politics, economics, history, and culture. You will also experience the joys and difficulties of being a leader. Whether you want to create world peace or start a global war, the choice is yours.

                    e93f5a0c3f
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/auth.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/auth.py deleted file mode 100644 index 9733686ddb36b826ead4f4666d42311397fa6fec..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/auth.py +++ /dev/null @@ -1,315 +0,0 @@ -""" -requests.auth -~~~~~~~~~~~~~ - -This module contains the authentication handlers for Requests. -""" - -import hashlib -import os -import re -import threading -import time -import warnings -from base64 import b64encode - -from ._internal_utils import to_native_string -from .compat import basestring, str, urlparse -from .cookies import extract_cookies_to_jar -from .utils import parse_dict_header - -CONTENT_TYPE_FORM_URLENCODED = "application/x-www-form-urlencoded" -CONTENT_TYPE_MULTI_PART = "multipart/form-data" - - -def _basic_auth_str(username, password): - """Returns a Basic Auth string.""" - - # "I want us to put a big-ol' comment on top of it that - # says that this behaviour is dumb but we need to preserve - # it because people are relying on it." - # - Lukasa - # - # These are here solely to maintain backwards compatibility - # for things like ints. This will be removed in 3.0.0. - if not isinstance(username, basestring): - warnings.warn( - "Non-string usernames will no longer be supported in Requests " - "3.0.0. Please convert the object you've passed in ({!r}) to " - "a string or bytes object in the near future to avoid " - "problems.".format(username), - category=DeprecationWarning, - ) - username = str(username) - - if not isinstance(password, basestring): - warnings.warn( - "Non-string passwords will no longer be supported in Requests " - "3.0.0. 
Please convert the object you've passed in ({!r}) to " - "a string or bytes object in the near future to avoid " - "problems.".format(type(password)), - category=DeprecationWarning, - ) - password = str(password) - # -- End Removal -- - - if isinstance(username, str): - username = username.encode("latin1") - - if isinstance(password, str): - password = password.encode("latin1") - - authstr = "Basic " + to_native_string( - b64encode(b":".join((username, password))).strip() - ) - - return authstr - - -class AuthBase: - """Base class that all auth implementations derive from""" - - def __call__(self, r): - raise NotImplementedError("Auth hooks must be callable.") - - -class HTTPBasicAuth(AuthBase): - """Attaches HTTP Basic Authentication to the given Request object.""" - - def __init__(self, username, password): - self.username = username - self.password = password - - def __eq__(self, other): - return all( - [ - self.username == getattr(other, "username", None), - self.password == getattr(other, "password", None), - ] - ) - - def __ne__(self, other): - return not self == other - - def __call__(self, r): - r.headers["Authorization"] = _basic_auth_str(self.username, self.password) - return r - - -class HTTPProxyAuth(HTTPBasicAuth): - """Attaches HTTP Proxy Authentication to a given Request object.""" - - def __call__(self, r): - r.headers["Proxy-Authorization"] = _basic_auth_str(self.username, self.password) - return r - - -class HTTPDigestAuth(AuthBase): - """Attaches HTTP Digest Authentication to the given Request object.""" - - def __init__(self, username, password): - self.username = username - self.password = password - # Keep state in per-thread local storage - self._thread_local = threading.local() - - def init_per_thread_state(self): - # Ensure state is initialized just once per-thread - if not hasattr(self._thread_local, "init"): - self._thread_local.init = True - self._thread_local.last_nonce = "" - self._thread_local.nonce_count = 0 - self._thread_local.chal = {} - self._thread_local.pos = None - self._thread_local.num_401_calls = None - - def build_digest_header(self, method, url): - """ - :rtype: str - """ - - realm = self._thread_local.chal["realm"] - nonce = self._thread_local.chal["nonce"] - qop = self._thread_local.chal.get("qop") - algorithm = self._thread_local.chal.get("algorithm") - opaque = self._thread_local.chal.get("opaque") - hash_utf8 = None - - if algorithm is None: - _algorithm = "MD5" - else: - _algorithm = algorithm.upper() - # lambdas assume digest modules are imported at the top level - if _algorithm == "MD5" or _algorithm == "MD5-SESS": - - def md5_utf8(x): - if isinstance(x, str): - x = x.encode("utf-8") - return hashlib.md5(x).hexdigest() - - hash_utf8 = md5_utf8 - elif _algorithm == "SHA": - - def sha_utf8(x): - if isinstance(x, str): - x = x.encode("utf-8") - return hashlib.sha1(x).hexdigest() - - hash_utf8 = sha_utf8 - elif _algorithm == "SHA-256": - - def sha256_utf8(x): - if isinstance(x, str): - x = x.encode("utf-8") - return hashlib.sha256(x).hexdigest() - - hash_utf8 = sha256_utf8 - elif _algorithm == "SHA-512": - - def sha512_utf8(x): - if isinstance(x, str): - x = x.encode("utf-8") - return hashlib.sha512(x).hexdigest() - - hash_utf8 = sha512_utf8 - - KD = lambda s, d: hash_utf8(f"{s}:{d}") # noqa:E731 - - if hash_utf8 is None: - return None - - # XXX not implemented yet - entdig = None - p_parsed = urlparse(url) - #: path is request-uri defined in RFC 2616 which should not be empty - path = p_parsed.path or "/" - if p_parsed.query: - path += 
f"?{p_parsed.query}" - - A1 = f"{self.username}:{realm}:{self.password}" - A2 = f"{method}:{path}" - - HA1 = hash_utf8(A1) - HA2 = hash_utf8(A2) - - if nonce == self._thread_local.last_nonce: - self._thread_local.nonce_count += 1 - else: - self._thread_local.nonce_count = 1 - ncvalue = f"{self._thread_local.nonce_count:08x}" - s = str(self._thread_local.nonce_count).encode("utf-8") - s += nonce.encode("utf-8") - s += time.ctime().encode("utf-8") - s += os.urandom(8) - - cnonce = hashlib.sha1(s).hexdigest()[:16] - if _algorithm == "MD5-SESS": - HA1 = hash_utf8(f"{HA1}:{nonce}:{cnonce}") - - if not qop: - respdig = KD(HA1, f"{nonce}:{HA2}") - elif qop == "auth" or "auth" in qop.split(","): - noncebit = f"{nonce}:{ncvalue}:{cnonce}:auth:{HA2}" - respdig = KD(HA1, noncebit) - else: - # XXX handle auth-int. - return None - - self._thread_local.last_nonce = nonce - - # XXX should the partial digests be encoded too? - base = ( - f'username="{self.username}", realm="{realm}", nonce="{nonce}", ' - f'uri="{path}", response="{respdig}"' - ) - if opaque: - base += f', opaque="{opaque}"' - if algorithm: - base += f', algorithm="{algorithm}"' - if entdig: - base += f', digest="{entdig}"' - if qop: - base += f', qop="auth", nc={ncvalue}, cnonce="{cnonce}"' - - return f"Digest {base}" - - def handle_redirect(self, r, **kwargs): - """Reset num_401_calls counter on redirects.""" - if r.is_redirect: - self._thread_local.num_401_calls = 1 - - def handle_401(self, r, **kwargs): - """ - Takes the given response and tries digest-auth, if needed. - - :rtype: requests.Response - """ - - # If response is not 4xx, do not auth - # See https://github.com/psf/requests/issues/3772 - if not 400 <= r.status_code < 500: - self._thread_local.num_401_calls = 1 - return r - - if self._thread_local.pos is not None: - # Rewind the file position indicator of the body to where - # it was to resend the request. - r.request.body.seek(self._thread_local.pos) - s_auth = r.headers.get("www-authenticate", "") - - if "digest" in s_auth.lower() and self._thread_local.num_401_calls < 2: - - self._thread_local.num_401_calls += 1 - pat = re.compile(r"digest ", flags=re.IGNORECASE) - self._thread_local.chal = parse_dict_header(pat.sub("", s_auth, count=1)) - - # Consume content and release the original connection - # to allow our new request to reuse the same one. - r.content - r.close() - prep = r.request.copy() - extract_cookies_to_jar(prep._cookies, r.request, r.raw) - prep.prepare_cookies(prep._cookies) - - prep.headers["Authorization"] = self.build_digest_header( - prep.method, prep.url - ) - _r = r.connection.send(prep, **kwargs) - _r.history.append(r) - _r.request = prep - - return _r - - self._thread_local.num_401_calls = 1 - return r - - def __call__(self, r): - # Initialize per-thread state, if needed - self.init_per_thread_state() - # If we have a saved nonce, skip the 401 - if self._thread_local.last_nonce: - r.headers["Authorization"] = self.build_digest_header(r.method, r.url) - try: - self._thread_local.pos = r.body.tell() - except AttributeError: - # In the case of HTTPDigestAuth being reused and the body of - # the previous request was a file-like object, pos has the - # file position of the previous body. Ensure it's set to - # None. 
- self._thread_local.pos = None - r.register_hook("response", self.handle_401) - r.register_hook("response", self.handle_redirect) - self._thread_local.num_401_calls = 1 - - return r - - def __eq__(self, other): - return all( - [ - self.username == getattr(other, "username", None), - self.password == getattr(other, "password", None), - ] - ) - - def __ne__(self, other): - return not self == other diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/dist_info.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/dist_info.py deleted file mode 100644 index ca540ad119ecde6117572cc243f854a1c0f41310..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/dist_info.py +++ /dev/null @@ -1,69 +0,0 @@ -""" -Create a dist_info directory -As defined in the wheel specification -""" - -import os -import re -import warnings -from inspect import cleandoc - -from distutils.core import Command -from distutils import log -from setuptools.extern import packaging - - -class dist_info(Command): - - description = 'create a .dist-info directory' - - user_options = [ - ('egg-base=', 'e', "directory containing .egg-info directories" - " (default: top of the source tree)"), - ] - - def initialize_options(self): - self.egg_base = None - - def finalize_options(self): - pass - - def run(self): - egg_info = self.get_finalized_command('egg_info') - egg_info.egg_base = self.egg_base - egg_info.finalize_options() - egg_info.run() - name = _safe(self.distribution.get_name()) - version = _version(self.distribution.get_version()) - base = self.egg_base or os.curdir - dist_info_dir = os.path.join(base, f"{name}-{version}.dist-info") - log.info("creating '{}'".format(os.path.abspath(dist_info_dir))) - - bdist_wheel = self.get_finalized_command('bdist_wheel') - bdist_wheel.egg2dist(egg_info.egg_info, dist_info_dir) - - -def _safe(component: str) -> str: - """Escape a component used to form a wheel name according to PEP 491""" - return re.sub(r"[^\w\d.]+", "_", component) - - -def _version(version: str) -> str: - """Convert an arbitrary string to a version string.""" - v = version.replace(' ', '.') - try: - return str(packaging.version.Version(v)).replace("-", "_") - except packaging.version.InvalidVersion: - msg = f"""Invalid version: {version!r}. - !!\n\n - ################### - # Invalid version # - ################### - {version!r} is not valid according to PEP 440.\n - Please make sure specify a valid version for your package. - Also note that future releases of setuptools may halt the build process - if an invalid version is given. - \n\n!! - """ - warnings.warn(cleandoc(msg)) - return _safe(v).strip("_") diff --git a/spaces/tmaham/DS-Fusion-Express/ldm/lr_scheduler.py b/spaces/tmaham/DS-Fusion-Express/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/tmaham/DS-Fusion-Express/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. 
- self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/triple-t/ttt-space/README.md b/spaces/triple-t/ttt-space/README.md deleted file mode 100644 index 71ba0b14b6ab145573a1ec4bf7e448524340d0f4..0000000000000000000000000000000000000000 --- a/spaces/triple-t/ttt-space/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Show Off -emoji: 🖼️ -colorFrom: red -colorTo: indigo -sdk: docker -app_port: 7860 -pinned: false -duplicated_from: huggingface-projects/sd-multiplayer-bot ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tsfeng/DeepDanbooru-string/README.md b/spaces/tsfeng/DeepDanbooru-string/README.md deleted file mode 100644 index 
4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/tsfeng/DeepDanbooru-string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.cpp b/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.cpp deleted file mode 100644 index 5d2425d8054991a8e8b6f7a940fd0ff7fa0bb330..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "bias_act.h" - -//------------------------------------------------------------------------ - -static bool has_same_layout(torch::Tensor x, torch::Tensor y) -{ - if (x.dim() != y.dim()) - return false; - for (int64_t i = 0; i < x.dim(); i++) - { - if (x.size(i) != y.size(i)) - return false; - if (x.size(i) >= 2 && x.stride(i) != y.stride(i)) - return false; - } - return true; -} - -//------------------------------------------------------------------------ - -static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp) -{ - // Validate arguments. 
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x"); - TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x"); - TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x"); - TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(b.dim() == 1, "b must have rank 1"); - TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds"); - TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements"); - TORCH_CHECK(grad >= 0, "grad must be non-negative"); - - // Validate layout. - TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense"); - TORCH_CHECK(b.is_contiguous(), "b must be contiguous"); - TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x"); - TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x"); - TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - torch::Tensor y = torch::empty_like(x); - TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x"); - - // Initialize CUDA kernel parameters. - bias_act_kernel_params p; - p.x = x.data_ptr(); - p.b = (b.numel()) ? b.data_ptr() : NULL; - p.xref = (xref.numel()) ? xref.data_ptr() : NULL; - p.yref = (yref.numel()) ? yref.data_ptr() : NULL; - p.dy = (dy.numel()) ? dy.data_ptr() : NULL; - p.y = y.data_ptr(); - p.grad = grad; - p.act = act; - p.alpha = alpha; - p.gain = gain; - p.clamp = clamp; - p.sizeX = (int)x.numel(); - p.sizeB = (int)b.numel(); - p.stepB = (b.numel()) ? (int)x.stride(dim) : 1; - - // Choose CUDA kernel. - void* kernel; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - kernel = choose_bias_act_kernel(p); - }); - TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func"); - - // Launch CUDA kernel. - p.loopX = 4; - int blockSize = 4 * 32; - int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1; - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("bias_act", &bias_act); -} - -//------------------------------------------------------------------------ diff --git a/spaces/uin-malang/README/README.md b/spaces/uin-malang/README/README.md deleted file mode 100644 index 67006b1703848d18d1b9bcfa663be8897a80fefc..0000000000000000000000000000000000000000 --- a/spaces/uin-malang/README/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: README -emoji: 🐢 -colorFrom: purple -colorTo: pink -sdk: static -pinned: false ---- - -Maulana Malik Ibrahim Islamic State University Malang is an Islamic public university in Malang, Indonesia. 
- diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render_modes.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render_modes.py deleted file mode 100644 index 34c001481f2b474bfd04360d1a98295903054beb..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render_modes.py +++ /dev/null @@ -1,154 +0,0 @@ -import os -import pathlib -import json -from .render import render_animation -from .seed import next_seed -from .video_audio_utilities import vid2frames -from .prompt import interpolate_prompts -from .generate import generate -from .animation_key_frames import DeformAnimKeys -from .parseq_adapter import ParseqAnimKeys -from .save_images import save_image -from .settings import get_keys_to_exclude - -# Webui -from modules.shared import opts, cmd_opts, state - -def render_input_video(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - # create a folder for the video input frames to live in - video_in_frame_path = os.path.join(args.outdir, 'inputframes') - os.makedirs(video_in_frame_path, exist_ok=True) - - # save the video frames from input video - print(f"Exporting Video Frames (1 every {anim_args.extract_nth_frame}) frames to {video_in_frame_path}...") - vid2frames(video_path = anim_args.video_init_path, video_in_frame_path=video_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame) - - # determine max frames from length of input frames - anim_args.max_frames = len([f for f in pathlib.Path(video_in_frame_path).glob('*.jpg')]) - args.use_init = True - print(f"Loading {anim_args.max_frames} input frames from {video_in_frame_path} and saving video frames to {args.outdir}") - - if anim_args.use_mask_video: - # create a folder for the mask video input frames to live in - mask_in_frame_path = os.path.join(args.outdir, 'maskframes') - os.makedirs(mask_in_frame_path, exist_ok=True) - - # save the video frames from mask video - print(f"Exporting Video Frames (1 every {anim_args.extract_nth_frame}) frames to {mask_in_frame_path}...") - vid2frames(video_path=anim_args.video_mask_path,video_in_frame_path=mask_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame) - max_mask_frames = len([f for f in pathlib.Path(mask_in_frame_path).glob('*.jpg')]) - - # limit max frames if there are less frames in the video mask compared to input video - if max_mask_frames < anim_args.max_frames : - anim_args.max_mask_frames - print ("Video mask contains less frames than init video, max frames limited to number of mask frames.") - args.use_mask = True - args.overlay_mask = True - - - render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root) - -# Modified a copy of the above to allow using masking video with out a init video. 
-def render_animation_with_video_mask(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - # create a folder for the video input frames to live in - mask_in_frame_path = os.path.join(args.outdir, 'maskframes') - os.makedirs(mask_in_frame_path, exist_ok=True) - - # save the video frames from mask video - print(f"Exporting Video Frames (1 every {anim_args.extract_nth_frame}) frames to {mask_in_frame_path}...") - vid2frames(video_path=anim_args.video_mask_path, video_in_frame_path=mask_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame) - args.use_mask = True - #args.overlay_mask = True - - # determine max frames from length of input frames - anim_args.max_frames = len([f for f in pathlib.Path(mask_in_frame_path).glob('*.jpg')]) - #args.use_init = True - print(f"Loading {anim_args.max_frames} input frames from {mask_in_frame_path} and saving video frames to {args.outdir}") - - render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root) - - -def render_interpolation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - - # use parseq if manifest is provided - use_parseq = parseq_args.parseq_manifest != None and parseq_args.parseq_manifest.strip() - - # expand key frame strings to values - keys = DeformAnimKeys(anim_args) if not use_parseq else ParseqAnimKeys(parseq_args, anim_args) - - # create output folder for the batch - os.makedirs(args.outdir, exist_ok=True) - print(f"Saving interpolation animation frames to {args.outdir}") - - # save settings for the batch - exclude_keys = get_keys_to_exclude('general') - settings_filename = os.path.join(args.outdir, f"{args.timestring}_settings.txt") - with open(settings_filename, "w+", encoding="utf-8") as f: - s = {} - for d in [dict(args.__dict__), dict(anim_args.__dict__), dict(parseq_args.__dict__)]: - for key, value in d.items(): - if key not in exclude_keys: - s[key] = value - json.dump(s, f, ensure_ascii=False, indent=4) - - # Compute interpolated prompts - if use_parseq: - print("Parseq prompts are assumed to already be interpolated - not doing any additional prompt interpolation") - prompt_series = keys.prompts - else: - print("Generating interpolated prompts for all frames") - prompt_series = interpolate_prompts(animation_prompts, anim_args.max_frames) - - state.job_count = anim_args.max_frames - frame_idx = 0 - # INTERPOLATION MODE - while frame_idx < anim_args.max_frames: - # print data to cli - prompt_to_print = prompt_series[frame_idx].strip() - if prompt_to_print.endswith("--neg"): - prompt_to_print = prompt_to_print[:-5] - print(f"\033[36mInterpolation frame: \033[0m{frame_idx}/{anim_args.max_frames} ") - print(f"\033[32mSeed: \033[0m{args.seed}") - print(f"\033[35mPrompt: \033[0m{prompt_to_print}") - - state.job = f"frame {frame_idx + 1}/{anim_args.max_frames}" - state.job_no = frame_idx + 1 - - if state.interrupted: - break - - # grab inputs for current frame generation - args.n_samples = 1 - args.prompt = prompt_series[frame_idx] - args.scale = keys.cfg_scale_schedule_series[frame_idx] - args.pix2pix_img_cfg_scale = keys.pix2pix_img_cfg_scale_series[frame_idx] - - if anim_args.enable_checkpoint_scheduling: - args.checkpoint = keys.checkpoint_schedule_series[frame_idx] - print(f"Checkpoint changed to: {args.checkpoint}") - else: - args.checkpoint = None - - if 
anim_args.enable_subseed_scheduling: - args.subseed = keys.subseed_schedule_series[frame_idx] - args.subseed_strength = keys.subseed_strength_schedule_series[frame_idx] - - if use_parseq: - anim_args.enable_subseed_scheduling = True - args.subseed = int(keys.subseed_series[frame_idx]) - args.subseed_strength = keys.subseed_strength_series[frame_idx] - - if args.seed_behavior == 'schedule' or use_parseq: - args.seed = int(keys.seed_schedule_series[frame_idx]) - - image = generate(args, anim_args, loop_args, controlnet_args, root, frame_idx) - filename = f"{args.timestring}_{frame_idx:05}.png" - - save_image(image, 'PIL', filename, args, video_args, root) - - state.current_image = image - - if args.seed_behavior != 'schedule': - args.seed = next_seed(args) - - frame_idx += 1 \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/utils/loss.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/utils/loss.py deleted file mode 100644 index cb2de206f31e165270f5d74e7e3e3dac9642d897..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/utils/loss.py +++ /dev/null @@ -1,294 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ultralytics.vit.utils.ops import HungarianMatcher -from ultralytics.yolo.utils.loss import FocalLoss, VarifocalLoss -from ultralytics.yolo.utils.metrics import bbox_iou - - -class DETRLoss(nn.Module): - - def __init__(self, - nc=80, - loss_gain=None, - aux_loss=True, - use_fl=True, - use_vfl=False, - use_uni_match=False, - uni_match_ind=0): - """ - DETR loss function. - - Args: - nc (int): The number of classes. - loss_gain (dict): The coefficient of loss. - aux_loss (bool): If 'aux_loss = True', loss at each decoder layer are to be used. - use_vfl (bool): Use VarifocalLoss or not. - use_uni_match (bool): Whether to use a fixed layer to assign labels for auxiliary branch. - uni_match_ind (int): The fixed indices of a layer. 
- """ - super().__init__() - - if loss_gain is None: - loss_gain = {'class': 1, 'bbox': 5, 'giou': 2, 'no_object': 0.1, 'mask': 1, 'dice': 1} - self.nc = nc - self.matcher = HungarianMatcher(cost_gain={'class': 2, 'bbox': 5, 'giou': 2}) - self.loss_gain = loss_gain - self.aux_loss = aux_loss - self.fl = FocalLoss() if use_fl else None - self.vfl = VarifocalLoss() if use_vfl else None - - self.use_uni_match = use_uni_match - self.uni_match_ind = uni_match_ind - self.device = None - - def _get_loss_class(self, pred_scores, targets, gt_scores, num_gts, postfix=''): - # logits: [b, query, num_classes], gt_class: list[[n, 1]] - name_class = f'loss_class{postfix}' - bs, nq = pred_scores.shape[:2] - # one_hot = F.one_hot(targets, self.nc + 1)[..., :-1] # (bs, num_queries, num_classes) - one_hot = torch.zeros((bs, nq, self.nc + 1), dtype=torch.int64, device=targets.device) - one_hot.scatter_(2, targets.unsqueeze(-1), 1) - one_hot = one_hot[..., :-1] - gt_scores = gt_scores.view(bs, nq, 1) * one_hot - - if self.fl: - if num_gts and self.vfl: - loss_cls = self.vfl(pred_scores, gt_scores, one_hot) - else: - loss_cls = self.fl(pred_scores, one_hot.float()) - loss_cls /= max(num_gts, 1) / nq - else: - loss_cls = nn.BCEWithLogitsLoss(reduction='none')(pred_scores, gt_scores).mean(1).sum() # YOLO CLS loss - - return {name_class: loss_cls.squeeze() * self.loss_gain['class']} - - def _get_loss_bbox(self, pred_bboxes, gt_bboxes, postfix=''): - # boxes: [b, query, 4], gt_bbox: list[[n, 4]] - name_bbox = f'loss_bbox{postfix}' - name_giou = f'loss_giou{postfix}' - - loss = {} - if len(gt_bboxes) == 0: - loss[name_bbox] = torch.tensor(0., device=self.device) - loss[name_giou] = torch.tensor(0., device=self.device) - return loss - - loss[name_bbox] = self.loss_gain['bbox'] * F.l1_loss(pred_bboxes, gt_bboxes, reduction='sum') / len(gt_bboxes) - loss[name_giou] = 1.0 - bbox_iou(pred_bboxes, gt_bboxes, xywh=True, GIoU=True) - loss[name_giou] = loss[name_giou].sum() / len(gt_bboxes) - loss[name_giou] = self.loss_gain['giou'] * loss[name_giou] - loss = {k: v.squeeze() for k, v in loss.items()} - return loss - - def _get_loss_mask(self, masks, gt_mask, match_indices, postfix=''): - # masks: [b, query, h, w], gt_mask: list[[n, H, W]] - name_mask = f'loss_mask{postfix}' - name_dice = f'loss_dice{postfix}' - - loss = {} - if sum(len(a) for a in gt_mask) == 0: - loss[name_mask] = torch.tensor(0., device=self.device) - loss[name_dice] = torch.tensor(0., device=self.device) - return loss - - num_gts = len(gt_mask) - src_masks, target_masks = self._get_assigned_bboxes(masks, gt_mask, match_indices) - src_masks = F.interpolate(src_masks.unsqueeze(0), size=target_masks.shape[-2:], mode='bilinear')[0] - # TODO: torch does not have `sigmoid_focal_loss`, but it's not urgent since we don't use mask branch for now. 
- loss[name_mask] = self.loss_gain['mask'] * F.sigmoid_focal_loss(src_masks, target_masks, - torch.tensor([num_gts], dtype=torch.float32)) - loss[name_dice] = self.loss_gain['dice'] * self._dice_loss(src_masks, target_masks, num_gts) - return loss - - def _dice_loss(self, inputs, targets, num_gts): - inputs = F.sigmoid(inputs) - inputs = inputs.flatten(1) - targets = targets.flatten(1) - numerator = 2 * (inputs * targets).sum(1) - denominator = inputs.sum(-1) + targets.sum(-1) - loss = 1 - (numerator + 1) / (denominator + 1) - return loss.sum() / num_gts - - def _get_loss_aux(self, - pred_bboxes, - pred_scores, - gt_bboxes, - gt_cls, - gt_groups, - match_indices=None, - postfix='', - masks=None, - gt_mask=None): - """Get auxiliary losses""" - # NOTE: loss class, bbox, giou, mask, dice - loss = torch.zeros(5 if masks is not None else 3, device=pred_bboxes.device) - if match_indices is None and self.use_uni_match: - match_indices = self.matcher(pred_bboxes[self.uni_match_ind], - pred_scores[self.uni_match_ind], - gt_bboxes, - gt_cls, - gt_groups, - masks=masks[self.uni_match_ind] if masks is not None else None, - gt_mask=gt_mask) - for i, (aux_bboxes, aux_scores) in enumerate(zip(pred_bboxes, pred_scores)): - aux_masks = masks[i] if masks is not None else None - loss_ = self._get_loss(aux_bboxes, - aux_scores, - gt_bboxes, - gt_cls, - gt_groups, - masks=aux_masks, - gt_mask=gt_mask, - postfix=postfix, - match_indices=match_indices) - loss[0] += loss_[f'loss_class{postfix}'] - loss[1] += loss_[f'loss_bbox{postfix}'] - loss[2] += loss_[f'loss_giou{postfix}'] - # if masks is not None and gt_mask is not None: - # loss_ = self._get_loss_mask(aux_masks, gt_mask, match_indices, postfix) - # loss[3] += loss_[f'loss_mask{postfix}'] - # loss[4] += loss_[f'loss_dice{postfix}'] - - loss = { - f'loss_class_aux{postfix}': loss[0], - f'loss_bbox_aux{postfix}': loss[1], - f'loss_giou_aux{postfix}': loss[2]} - # if masks is not None and gt_mask is not None: - # loss[f'loss_mask_aux{postfix}'] = loss[3] - # loss[f'loss_dice_aux{postfix}'] = loss[4] - return loss - - def _get_index(self, match_indices): - batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(match_indices)]) - src_idx = torch.cat([src for (src, _) in match_indices]) - dst_idx = torch.cat([dst for (_, dst) in match_indices]) - return (batch_idx, src_idx), dst_idx - - def _get_assigned_bboxes(self, pred_bboxes, gt_bboxes, match_indices): - pred_assigned = torch.cat([ - t[I] if len(I) > 0 else torch.zeros(0, t.shape[-1], device=self.device) - for t, (I, _) in zip(pred_bboxes, match_indices)]) - gt_assigned = torch.cat([ - t[J] if len(J) > 0 else torch.zeros(0, t.shape[-1], device=self.device) - for t, (_, J) in zip(gt_bboxes, match_indices)]) - return pred_assigned, gt_assigned - - def _get_loss(self, - pred_bboxes, - pred_scores, - gt_bboxes, - gt_cls, - gt_groups, - masks=None, - gt_mask=None, - postfix='', - match_indices=None): - """Get losses""" - if match_indices is None: - match_indices = self.matcher(pred_bboxes, - pred_scores, - gt_bboxes, - gt_cls, - gt_groups, - masks=masks, - gt_mask=gt_mask) - - idx, gt_idx = self._get_index(match_indices) - pred_bboxes, gt_bboxes = pred_bboxes[idx], gt_bboxes[gt_idx] - - bs, nq = pred_scores.shape[:2] - targets = torch.full((bs, nq), self.nc, device=pred_scores.device, dtype=gt_cls.dtype) - targets[idx] = gt_cls[gt_idx] - - gt_scores = torch.zeros([bs, nq], device=pred_scores.device) - if len(gt_bboxes): - gt_scores[idx] = bbox_iou(pred_bboxes.detach(), gt_bboxes, 
xywh=True).squeeze(-1) - - loss = {} - loss.update(self._get_loss_class(pred_scores, targets, gt_scores, len(gt_bboxes), postfix)) - loss.update(self._get_loss_bbox(pred_bboxes, gt_bboxes, postfix)) - # if masks is not None and gt_mask is not None: - # loss.update(self._get_loss_mask(masks, gt_mask, match_indices, postfix)) - return loss - - def forward(self, pred_bboxes, pred_scores, batch, postfix='', **kwargs): - """ - Args: - pred_bboxes (torch.Tensor): [l, b, query, 4] - pred_scores (torch.Tensor): [l, b, query, num_classes] - batch (dict): A dict includes: - gt_cls (torch.Tensor) with shape [num_gts, ], - gt_bboxes (torch.Tensor): [num_gts, 4], - gt_groups (List(int)): a list of batch size length includes the number of gts of each image. - postfix (str): postfix of loss name. - """ - self.device = pred_bboxes.device - match_indices = kwargs.get('match_indices', None) - gt_cls, gt_bboxes, gt_groups = batch['cls'], batch['bboxes'], batch['gt_groups'] - - total_loss = self._get_loss(pred_bboxes[-1], - pred_scores[-1], - gt_bboxes, - gt_cls, - gt_groups, - postfix=postfix, - match_indices=match_indices) - - if self.aux_loss: - total_loss.update( - self._get_loss_aux(pred_bboxes[:-1], pred_scores[:-1], gt_bboxes, gt_cls, gt_groups, match_indices, - postfix)) - - return total_loss - - -class RTDETRDetectionLoss(DETRLoss): - - def forward(self, preds, batch, dn_bboxes=None, dn_scores=None, dn_meta=None): - pred_bboxes, pred_scores = preds - total_loss = super().forward(pred_bboxes, pred_scores, batch) - - if dn_meta is not None: - dn_pos_idx, dn_num_group = dn_meta['dn_pos_idx'], dn_meta['dn_num_group'] - assert len(batch['gt_groups']) == len(dn_pos_idx) - - # denoising match indices - match_indices = self.get_dn_match_indices(dn_pos_idx, dn_num_group, batch['gt_groups']) - - # compute denoising training loss - dn_loss = super().forward(dn_bboxes, dn_scores, batch, postfix='_dn', match_indices=match_indices) - total_loss.update(dn_loss) - else: - total_loss.update({f'{k}_dn': torch.tensor(0., device=self.device) for k in total_loss.keys()}) - - return total_loss - - @staticmethod - def get_dn_match_indices(dn_pos_idx, dn_num_group, gt_groups): - """Get the match indices for denoising. - - Args: - dn_pos_idx (List[torch.Tensor]): A list includes positive indices of denoising. - dn_num_group (int): The number of groups of denoising. - gt_groups (List(int)): a list of batch size length includes the number of gts of each image. - - Returns: - dn_match_indices (List(tuple)): Matched indices. - - """ - dn_match_indices = [] - idx_groups = torch.as_tensor([0, *gt_groups[:-1]]).cumsum_(0) - for i, num_gt in enumerate(gt_groups): - if num_gt > 0: - gt_idx = torch.arange(end=num_gt, dtype=torch.long) + idx_groups[i] - gt_idx = gt_idx.repeat(dn_num_group) - assert len(dn_pos_idx[i]) == len(gt_idx), 'Expected the same length, ' - f'but got {len(dn_pos_idx[i])} and {len(gt_idx)} respectively.' 
- dn_match_indices.append((dn_pos_idx[i], gt_idx)) - else: - dn_match_indices.append((torch.zeros([0], dtype=torch.long), torch.zeros([0], dtype=torch.long))) - return dn_match_indices diff --git a/spaces/videfikri/aicover/export_onnx_old.py b/spaces/videfikri/aicover/export_onnx_old.py deleted file mode 100644 index 048382f6631c4b3b092deb83355903161b62e64a..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/export_onnx_old.py +++ /dev/null @@ -1,47 +0,0 @@ -from infer_pack.models_onnx_moess import SynthesizerTrnMs256NSFsidM -import torch - -person = "Shiroha/shiroha.pth" -exported_path = "model.onnx" - - -cpt = torch.load(person, map_location="cpu") -cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk -print(*cpt["config"]) -net_g = SynthesizerTrnMs256NSFsidM(*cpt["config"], is_half=False) -net_g.load_state_dict(cpt["weight"], strict=False) - -test_phone = torch.rand(1, 200, 256) -test_phone_lengths = torch.tensor([200]).long() -test_pitch = torch.randint(size=(1, 200), low=5, high=255) -test_pitchf = torch.rand(1, 200) -test_ds = torch.LongTensor([0]) -test_rnd = torch.rand(1, 192, 200) -input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"] -output_names = [ - "audio", -] -device = "cpu" -torch.onnx.export( - net_g, - ( - test_phone.to(device), - test_phone_lengths.to(device), - test_pitch.to(device), - test_pitchf.to(device), - test_ds.to(device), - test_rnd.to(device), - ), - exported_path, - dynamic_axes={ - "phone": [1], - "pitch": [1], - "pitchf": [1], - "rnd": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names, -) diff --git a/spaces/vishnu0001/text2mesh/shap_e/rendering/torch_mesh.py b/spaces/vishnu0001/text2mesh/shap_e/rendering/torch_mesh.py deleted file mode 100644 index 49c6894c9046ac0e0884ceba450b65b2bb847534..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/rendering/torch_mesh.py +++ /dev/null @@ -1,42 +0,0 @@ -from dataclasses import dataclass, field -from typing import Dict, Optional - -import torch - -from .mesh import TriMesh - - -@dataclass -class TorchMesh: - """ - A 3D triangle mesh with optional data at the vertices and faces. - """ - - # [N x 3] array of vertex coordinates. - verts: torch.Tensor - - # [M x 3] array of triangles, pointing to indices in verts. - faces: torch.Tensor - - # Extra data per vertex and face. - vertex_channels: Optional[Dict[str, torch.Tensor]] = field(default_factory=dict) - face_channels: Optional[Dict[str, torch.Tensor]] = field(default_factory=dict) - - def tri_mesh(self) -> TriMesh: - """ - Create a CPU version of the mesh. 
- """ - return TriMesh( - verts=self.verts.detach().cpu().numpy(), - faces=self.faces.cpu().numpy(), - vertex_channels=( - {k: v.detach().cpu().numpy() for k, v in self.vertex_channels.items()} - if self.vertex_channels is not None - else None - ), - face_channels=( - {k: v.detach().cpu().numpy() for k, v in self.face_channels.items()} - if self.face_channels is not None - else None - ), - ) diff --git a/spaces/whgwd2023/bingo/src/components/settings.tsx b/spaces/whgwd2023/bingo/src/components/settings.tsx deleted file mode 100644 index 80b8a2d3b252b875f5b6f7dfc2f6e3ad9cdfb22a..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
                    - 图文示例: - 如何获取 BING_HEADER - - -
                    - -
                    - setCurlValue(e.target.value)} - /> -
                    - 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
                    - - - - - - - -
                    - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
                    - 启用语音回答 - setEnableTTS(checked)} - > - - -
                    - - - - -
                    -
                    - ) - } - return null -} diff --git a/spaces/wuhuqifeidekun/White-box-Cartoonization/wbc/network.py b/spaces/wuhuqifeidekun/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/wuhuqifeidekun/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/xiang2811/ChatGPT/modules/presets.py b/spaces/xiang2811/ChatGPT/modules/presets.py deleted file mode 100644 index 10355143fb36c84e2263dc0719aa43568134bb05..0000000000000000000000000000000000000000 --- a/spaces/xiang2811/ChatGPT/modules/presets.py +++ /dev/null @@ -1,226 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." 
-API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # 错误信息的标准前缀 -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # 连接超时 -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # 读取超时 -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # 代理错误 -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL 错误 -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key 长度不足 51 位 -NO_INPUT_MSG = i18n("请输入对话内容。") # 未输入对话内容 -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # 本地运行的模型返回的账单信息 - -TIMEOUT_STREAMING = 60 # 流式对话时的超时时间 -TIMEOUT_ALL = 200 # 非流式对话时的超时时间 -ENABLE_STREAMING_OPTION = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("川虎Chat 🚀") - -CHUANHU_DESCRIPTION = i18n("由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发
                    访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本") - -FOOTER = """
                    {versions}
                    """ - -APPEARANCE_SWITCHER = """ -
                    -"""+ i18n("切换亮暗色主题") + """ - -
                    -""" - -SUMMARIZE_PROMPT = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", - "xmchat", -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-qe", - "llama-7b-hf", - "llama-7b-hf-int4", - "llama-7b-hf-int8", - "llama-13b-hf", - "llama-13b-hf-int4", - "llama-30b-hf", - "llama-30b-hf-int4", - "llama-65b-hf" -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-0301": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/yaful/DeepfakeTextDetect/app.py b/spaces/yaful/DeepfakeTextDetect/app.py deleted file mode 100644 index 449b707fe3e4b5cbeb59aff571602c11471517b4..0000000000000000000000000000000000000000 --- a/spaces/yaful/DeepfakeTextDetect/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import torch -import os -import transformers -from transformers import ( - AutoModelForSequenceClassification, - AutoTokenizer, -) -from utils import preprocess - -# init -device = 'cpu' -model_dir = "nealcly/detection-longformer" - -# load the Longformer detector -tokenizer = AutoTokenizer.from_pretrained(model_dir) -model = AutoModelForSequenceClassification.from_pretrained(model_dir).to(device) - -def detect(input_text,th=-3.08583984375): - if len(input_text.split()) < 30: - return 'It is not reliable to detect text with less than 30 words.' - - label2decisions = { - 0: "machine-generated", - 1: "human-written", - } - tokenize_input = tokenizer(input_text) - tensor_input = torch.tensor([tokenize_input["input_ids"]]).to(device) - outputs = model(tensor_input) - is_machine = -outputs.logits[0][0].item() - if is_machine < th: - decision = 0 - else: - decision = 1 - - return label2decisions[decision] - -description_e = """ -This is a demo on Github project 🏃 [Deepfake Text Detection in the Wild](https://github.com/yafuly/DeepfakeTextDetect). - -🎯 Input the text to be detected, and click ''submit''' to get the detection result, either human-written or machine-generated. - -⌛️ It takes about 6~ seconds to generate detection results. 
- -🏠 Check out our [Data Card 🏃](https://huggingface.co/datasets/yaful/DeepfakeTextDetect) and [Model Card 🏃](https://huggingface.co/nealcly/detection-longformer) - -""" - - - -iface = gr.Interface(fn=detect, inputs="text", outputs="text", description=description_e) -iface.launch() \ No newline at end of file diff --git a/spaces/yeahpic/YeahPic/README.md b/spaces/yeahpic/YeahPic/README.md deleted file mode 100644 index e7c895176aa8bd13d982db066819a8072429bf9d..0000000000000000000000000000000000000000 --- a/spaces/yeahpic/YeahPic/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: YeahPyc -emoji: 🎆 🍾 🌠 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: true -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - ---- - -### About - -This demo is a test for generating a SDXL LORA. -It allows a user to upload selfies and train a LORA with the images and generate images with this that show the person trained in a specific situation that has been prompted (hardcoded atm). diff --git a/spaces/ygangang/VToonify/vtoonify/model/stylegan/readme.md b/spaces/ygangang/VToonify/vtoonify/model/stylegan/readme.md deleted file mode 100644 index c0f2bce780fe2d7a9239c944b165eee7bcdeb9cb..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/model/stylegan/readme.md +++ /dev/null @@ -1,7 +0,0 @@ -# StyleGAN 2 in PyTorch - -Implementation of Analyzing and Improving the Image Quality of StyleGAN (https://arxiv.org/abs/1912.04958) in PyTorch - -Fork from [https://github.com/rosinality/stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch) - -In VToonify, we modify it to accept z+ latent codes. diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/clip/modeling_tf_clip.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/clip/modeling_tf_clip.py deleted file mode 100644 index 335b1f7da8e4c6d395dba26c7cb535b95c34e650..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/clip/modeling_tf_clip.py +++ /dev/null @@ -1,1315 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The OpenAI Team Authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" TF 2.0 CLIP model.""" - - -from __future__ import annotations - -import math -from dataclasses import dataclass -from typing import Any, Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import TFBaseModelOutput, TFBaseModelOutputWithPooling - -# Public API -from ...modeling_tf_utils import ( - TFModelInputType, - TFPreTrainedModel, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_clip import CLIPConfig, CLIPTextConfig, CLIPVisionConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "openai/clip-vit-base-patch32" - -TF_CLIP_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "openai/clip-vit-base-patch32", - # See all CLIP models at https://huggingface.co/models?filter=clip -] - - -LARGE_NEGATIVE = -1e8 - - -# Copied from transformers.models.bart.modeling_tf_bart._expand_mask -def _expand_mask(mask: tf.Tensor, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. - """ - src_len = shape_list(mask)[1] - tgt_len = tgt_len if tgt_len is not None else src_len - one_cst = tf.constant(1.0) - mask = tf.cast(mask, dtype=one_cst.dtype) - expanded_mask = tf.tile(mask[:, None, None, :], (1, 1, tgt_len, 1)) - - return (one_cst - expanded_mask) * LARGE_NEGATIVE - - -# contrastive loss function, adapted from -# https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html -def contrastive_loss(logits: tf.Tensor) -> tf.Tensor: - return tf.math.reduce_mean( - tf.keras.metrics.sparse_categorical_crossentropy( - y_true=tf.range(shape_list(logits)[0]), y_pred=logits, from_logits=True - ) - ) - - -def clip_loss(similarity: tf.Tensor) -> tf.Tensor: - caption_loss = contrastive_loss(similarity) - image_loss = contrastive_loss(tf.transpose(similarity)) - return (caption_loss + image_loss) / 2.0 - - -@dataclass -class TFCLIPOutput(ModelOutput): - """ - Args: - loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`): - Contrastive loss for image-text similarity. - logits_per_image:(`tf.Tensor` of shape `(image_batch_size, text_batch_size)`): - The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text - similarity scores. - logits_per_text:(`tf.Tensor` of shape `(text_batch_size, image_batch_size)`): - The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image - similarity scores. - text_embeds(`tf.Tensor` of shape `(batch_size, output_dim`): - The text embeddings obtained by applying the projection layer to the pooled output of [`TFCLIPTextModel`]. - image_embeds(`tf.Tensor` of shape `(batch_size, output_dim`): - The image embeddings obtained by applying the projection layer to the pooled output of - [`TFCLIPVisionModel`]. - text_model_output([`~modeling_tf_utils.TFBaseModelOutputWithPooling`]): - The output of the [`TFCLIPTextModel`]. - vision_model_output([`~modeling_tf_utils.TFBaseModelOutputWithPooling`]): - The output of the [`TFCLIPVisionModel`]. 
- """ - - loss: tf.Tensor | None = None - logits_per_image: tf.Tensor = None - logits_per_text: tf.Tensor = None - text_embeds: tf.Tensor = None - image_embeds: tf.Tensor = None - text_model_output: TFBaseModelOutputWithPooling = None - vision_model_output: TFBaseModelOutputWithPooling = None - - def to_tuple(self) -> Tuple[Any]: - return tuple( - self[k] if k not in ["text_model_output", "vision_model_output"] else getattr(self, k).to_tuple() - for k in self.keys() - ) - - -class TFCLIPVisionEmbeddings(tf.keras.layers.Layer): - def __init__(self, config: CLIPVisionConfig, **kwargs): - super().__init__(**kwargs) - - self.embed_dim = config.hidden_size - self.image_size = config.image_size - self.patch_size = config.patch_size - - self.num_patches = (self.image_size // self.patch_size) ** 2 - self.num_positions = self.num_patches + 1 - - self.config = config - - self.patch_embedding = tf.keras.layers.Conv2D( - filters=self.embed_dim, - kernel_size=self.patch_size, - strides=self.patch_size, - padding="valid", - data_format="channels_last", - use_bias=False, - kernel_initializer=get_initializer(self.config.initializer_range * self.config.initializer_factor), - name="patch_embedding", - ) - - def build(self, input_shape: tf.TensorShape = None): - factor = self.config.initializer_factor - - self.class_embedding = self.add_weight( - shape=(self.embed_dim,), - initializer=get_initializer(self.embed_dim**-0.5 * factor), - trainable=True, - name="class_embedding", - ) - - with tf.name_scope("position_embedding"): - self.position_embedding = self.add_weight( - shape=(self.num_positions, self.embed_dim), - initializer=get_initializer(self.config.initializer_range * factor), - trainable=True, - name="embeddings", - ) - - super().build(input_shape) - - def call(self, pixel_values: tf.Tensor) -> tf.Tensor: - """`pixel_values` is expected to be of NCHW format.""" - - batch_size, num_channels, height, width = shape_list(pixel_values) - - # When running on CPU, `tf.nn.conv2d` doesn't support `NCHW` format. - # So change the input format from `NCHW` to `NHWC`. - # shape = (batch_size, in_height, in_width, in_channels=num_channels) - pixel_values = tf.transpose(pixel_values, perm=(0, 2, 3, 1)) - - patch_embeds = self.patch_embedding(pixel_values) - - # Change the 2D spatial dimensions to a single temporal dimension. 
- # shape = (batch_size, num_patches, out_channels=embed_dim) - patch_embeds = tf.reshape(tensor=patch_embeds, shape=(batch_size, self.num_patches, -1)) - - # add the [CLS] token to the embedded patch tokens - class_embeds = tf.broadcast_to(self.class_embedding, shape=(batch_size, 1, self.embed_dim)) - embeddings = tf.concat((class_embeds, patch_embeds), axis=1) - - embeddings = embeddings + self.position_embedding - - return embeddings - - -class TFCLIPTextEmbeddings(tf.keras.layers.Layer): - def __init__(self, config: CLIPTextConfig, **kwargs): - super().__init__(**kwargs) - - self.embed_dim = config.hidden_size - - self.config = config - - def build(self, input_shape: tf.TensorShape = None): - with tf.name_scope("token_embedding"): - self.weight = self.add_weight( - shape=(self.config.vocab_size, self.embed_dim), - initializer=get_initializer(self.config.initializer_factor * self.config.initializer_range), - trainable=True, - name="weight", - ) - - with tf.name_scope("position_embedding"): - self.position_embedding = self.add_weight( - shape=(self.config.max_position_embeddings, self.embed_dim), - initializer=get_initializer(self.config.initializer_factor * self.config.initializer_range), - trainable=True, - name="embeddings", - ) - - super().build(input_shape) - - def call( - self, - input_ids: tf.Tensor = None, - position_ids: tf.Tensor = None, - inputs_embeds: tf.Tensor = None, - ) -> tf.Tensor: - """ - Applies embedding based on inputs tensor. - - Returns: - final_embeddings (`tf.Tensor`): output embedding tensor. - """ - if input_ids is None and inputs_embeds is None: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if inputs_embeds is None: - check_embeddings_within_bounds(input_ids, self.config.vocab_size) - inputs_embeds = tf.gather(params=self.weight, indices=input_ids) - - input_shape = shape_list(inputs_embeds)[:-1] - - if position_ids is None: - position_ids = tf.expand_dims(tf.range(start=0, limit=input_shape[-1]), axis=0) - - position_embeds = tf.gather(params=self.position_embedding, indices=position_ids) - position_embeds = tf.tile(input=position_embeds, multiples=(input_shape[0], 1, 1)) - final_embeddings = inputs_embeds + position_embeds - - return final_embeddings - - -class TFCLIPAttention(tf.keras.layers.Layer): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config: CLIPConfig, **kwargs): - super().__init__(**kwargs) - - self.embed_dim = config.hidden_size - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = self.embed_dim // self.num_attention_heads - if self.attention_head_size * self.num_attention_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" - f" {self.num_attention_heads})." 
- ) - - factor = config.initializer_factor - in_proj_std = (self.embed_dim**-0.5) * ((2 * config.num_hidden_layers) ** -0.5) * factor - out_proj_std = (self.embed_dim**-0.5) * factor - - self.sqrt_att_head_size = math.sqrt(self.attention_head_size) - - self.q_proj = tf.keras.layers.Dense( - units=self.embed_dim, kernel_initializer=get_initializer(in_proj_std), name="q_proj" - ) - self.k_proj = tf.keras.layers.Dense( - units=self.embed_dim, kernel_initializer=get_initializer(in_proj_std), name="k_proj" - ) - self.v_proj = tf.keras.layers.Dense( - units=self.embed_dim, kernel_initializer=get_initializer(in_proj_std), name="v_proj" - ) - - self.dropout = tf.keras.layers.Dropout(rate=config.attention_dropout) - - self.out_proj = tf.keras.layers.Dense( - units=self.embed_dim, kernel_initializer=get_initializer(out_proj_std), name="out_proj" - ) - - # copied from transformers.models.bert.modeling_tf_bert.TFBertSelfAttention.transpose_for_scores - def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor: - # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] - tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size)) - - # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size] - return tf.transpose(tensor, perm=[0, 2, 1, 3]) - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - causal_attention_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - """Input shape: Batch x Time x Channel""" - - batch_size = shape_list(hidden_states)[0] - mixed_query_layer = self.q_proj(inputs=hidden_states) - mixed_key_layer = self.k_proj(inputs=hidden_states) - mixed_value_layer = self.v_proj(inputs=hidden_states) - query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) - key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) - value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) - - # Take the dot product between "query" and "key" to get the raw attention scores. - # (batch size, num_heads, seq_len_q, seq_len_k) - attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) - dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) - attention_scores = tf.divide(attention_scores, dk) - - # apply the causal_attention_mask first - if causal_attention_mask is not None: - # Apply the causal attention mask (precomputed for all layers in TFCLIPModel call() function) - attention_scores = tf.add(attention_scores, causal_attention_mask) - - if attention_mask is not None: - # Apply the attention mask (precomputed for all layers in TFCLIPModel call() function) - attention_scores = tf.add(attention_scores, attention_mask) - - # Normalize the attention scores to probabilities. - _attention_probs = stable_softmax(logits=attention_scores, axis=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(inputs=_attention_probs, training=training) - - attention_output = tf.matmul(attention_probs, value_layer) - attention_output = tf.transpose(attention_output, perm=[0, 2, 1, 3]) - - # (batch_size, seq_len_q, embed_dim) - attention_output = tf.reshape(tensor=attention_output, shape=(batch_size, -1, self.embed_dim)) - - attention_output = self.out_proj(attention_output, training=training) - # In TFBert, attention weights are returned after dropout. - # However, in CLIP, they are returned before dropout. - outputs = (attention_output, _attention_probs) if output_attentions else (attention_output,) - - return outputs - - -class TFCLIPMLP(tf.keras.layers.Layer): - def __init__(self, config: CLIPConfig, **kwargs): - super().__init__(**kwargs) - - self.activation_fn = get_tf_activation(config.hidden_act) - - factor = config.initializer_factor - in_proj_std = (config.hidden_size**-0.5) * ((2 * config.num_hidden_layers) ** -0.5) * factor - fc_std = (2 * config.hidden_size) ** -0.5 * factor - - self.fc1 = tf.keras.layers.Dense( - units=config.intermediate_size, kernel_initializer=get_initializer(fc_std), name="fc1" - ) - self.fc2 = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(in_proj_std), name="fc2" - ) - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - hidden_states = self.fc1(inputs=hidden_states) - hidden_states = self.activation_fn(hidden_states) - hidden_states = self.fc2(inputs=hidden_states) - return hidden_states - - -class TFCLIPEncoderLayer(tf.keras.layers.Layer): - def __init__(self, config: CLIPConfig, **kwargs): - super().__init__(**kwargs) - - self.embed_dim = config.hidden_size - self.self_attn = TFCLIPAttention(config, name="self_attn") - self.layer_norm1 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm1") - self.mlp = TFCLIPMLP(config, name="mlp") - self.layer_norm2 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm2") - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - causal_attention_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - """ - Args: - hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`tf.Tensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - causal_attention_mask (`tf.Tensor`): causal attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - output_attentions (`bool`): - Whether or not to return the attentions tensors of all attention layers. See `outputs` under returned - tensors for more detail. 
- """ - residual = hidden_states - - hidden_states = self.layer_norm1(inputs=hidden_states) - attention_outputs = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - training=training, - ) - hidden_states = attention_outputs[0] - hidden_states = residual + hidden_states - - residual = hidden_states - hidden_states = self.layer_norm2(inputs=hidden_states) - hidden_states = self.mlp(hidden_states=hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) + attention_outputs[1:] # add attentions if we output them - - return outputs - - -class TFCLIPEncoder(tf.keras.layers.Layer): - """ - Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a - [`TFCLIPEncoderLayer`]. - - Args: - config: CLIPConfig - """ - - def __init__(self, config: CLIPConfig, **kwargs): - super().__init__(**kwargs) - - self.layers = [TFCLIPEncoderLayer(config, name=f"layers_._{i}") for i in range(config.num_hidden_layers)] - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - causal_attention_mask: tf.Tensor, - output_attentions: bool, - output_hidden_states: bool, - return_dict: bool, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - for i, layer_module in enumerate(self.layers): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_outputs = layer_module( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - training=training, - ) - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - # Add last layer - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - - return TFBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class TFCLIPTextTransformer(tf.keras.layers.Layer): - def __init__(self, config: CLIPTextConfig, **kwargs): - super().__init__(**kwargs) - - self.embeddings = TFCLIPTextEmbeddings(config, name="embeddings") - self.encoder = TFCLIPEncoder(config, name="encoder") - self.final_layer_norm = tf.keras.layers.LayerNormalization( - epsilon=config.layer_norm_eps, name="final_layer_norm" - ) - - # For `pooled_output` computation - self.eos_token_id = config.eos_token_id - - def call( - self, - input_ids: TFModelInputType, - attention_mask: tf.Tensor, - position_ids: tf.Tensor, - output_attentions: bool, - output_hidden_states: bool, - return_dict: bool, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - input_shape = shape_list(input_ids) - - embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids) - - batch_size, seq_length = input_shape - # CLIP's text model uses causal mask, prepare it here. 
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324 - causal_attention_mask = self._build_causal_attention_mask(batch_size, seq_length, dtype=embedding_output.dtype) - - # check attention mask and invert - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - attention_mask = _expand_mask(attention_mask) - - encoder_outputs = self.encoder( - hidden_states=embedding_output, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = encoder_outputs[0] - sequence_output = self.final_layer_norm(inputs=sequence_output) - - if self.eos_token_id == 2: - # The `eos_token_id` was incorrect before PR #24773: Let's keep what have been done here. - # A CLIP model with such `eos_token_id` in the config can't work correctly with extra new tokens added - # ------------------------------------------------------------ - # text_embeds.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - pooled_output = tf.gather_nd( - params=sequence_output, - indices=tf.stack( - values=(tf.range(input_shape[0], dtype=tf.int64), tf.math.argmax(input_ids, axis=-1)), axis=1 - ), - ) - else: - # The config gets updated `eos_token_id` from PR #24773 (so the use of exta new tokens is possible) - pooled_output = tf.gather_nd( - params=sequence_output, - indices=tf.stack( - values=( - tf.range(input_shape[0], dtype=tf.int64), - tf.math.argmax(tf.cast(input_ids == self.eos_token_id, dtype=tf.int8), axis=-1), - ), - axis=1, - ), - ) - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - def _build_causal_attention_mask(self, batch_size, seq_length, dtype=tf.float32): - # It is possible with an unspecified sequence length for seq_length to be - # a runtime value, which is unsupported by tf.constant. Per the TensorFlow - # docs, tf.fill can handle runtime dynamic shapes: - # https://www.tensorflow.org/api_docs/python/tf/fill - diag = tf.cast(tf.fill((seq_length,), 0.0), dtype) - - # set an additive 2D attention mask with all places being masked - to_mask = tf.cast(tf.fill((seq_length, seq_length), -10000.0), dtype) - - # set diagonal & lower triangular parts to 0 (i.e. 
the places not to be masked) - # TIP: think the 2D matrix as the space of (query_seq, key_seq) - to_mask = tf.linalg.band_part(to_mask, 0, -1) - # to_mask = tf.linalg.band_part(to_mask, -1, 0) - to_mask = tf.linalg.set_diag(to_mask, diagonal=diag) - - return tf.broadcast_to(input=to_mask, shape=(batch_size, 1, seq_length, seq_length)) - - -@keras_serializable -class TFCLIPTextMainLayer(tf.keras.layers.Layer): - config_class = CLIPTextConfig - - def __init__(self, config: CLIPTextConfig, **kwargs): - super().__init__(**kwargs) - self.config = config - self.text_model = TFCLIPTextTransformer(config, name="text_model") - - def get_input_embeddings(self) -> tf.keras.layers.Layer: - return self.text_model.embeddings - - def set_input_embeddings(self, value: tf.Variable): - self.text_model.embeddings.weight = value - self.text_model.embeddings.vocab_size = shape_list(value)[0] - - @unpack_inputs - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - if input_ids is None: - raise ValueError("You have to specify input_ids") - - input_shape = shape_list(input_ids) - - if attention_mask is None: - attention_mask = tf.fill(dims=input_shape, value=1) - - text_model_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return text_model_outputs - - -class TFCLIPVisionTransformer(tf.keras.layers.Layer): - def __init__(self, config: CLIPVisionConfig, **kwargs): - super().__init__(**kwargs) - - self.embeddings = TFCLIPVisionEmbeddings(config, name="embeddings") - self.pre_layernorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="pre_layrnorm") - self.encoder = TFCLIPEncoder(config, name="encoder") - self.post_layernorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="post_layernorm") - - def call( - self, - pixel_values: TFModelInputType, - output_attentions: bool, - output_hidden_states: bool, - return_dict: bool, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - embedding_output = self.embeddings(pixel_values=pixel_values) - embedding_output = self.pre_layernorm(inputs=embedding_output) - - encoder_outputs = self.encoder( - hidden_states=embedding_output, - attention_mask=None, - causal_attention_mask=None, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = encoder_outputs[0] - pooled_output = sequence_output[:, 0, :] - pooled_output = self.post_layernorm(inputs=pooled_output) - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@keras_serializable -class TFCLIPVisionMainLayer(tf.keras.layers.Layer): - config_class = CLIPVisionConfig - - def __init__(self, config: CLIPVisionConfig, **kwargs): - super().__init__(**kwargs) - self.config = config - 
self.vision_model = TFCLIPVisionTransformer(config, name="vision_model") - - def get_input_embeddings(self) -> tf.keras.layers.Layer: - return self.vision_model.embeddings - - @unpack_inputs - def call( - self, - pixel_values: TFModelInputType | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - vision_model_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return vision_model_outputs - - -@keras_serializable -class TFCLIPMainLayer(tf.keras.layers.Layer): - config_class = CLIPConfig - - def __init__(self, config: CLIPConfig, **kwargs): - super().__init__(**kwargs) - - if not isinstance(config.text_config, CLIPTextConfig): - raise ValueError( - "config.text_config is expected to be of type CLIPTextConfig but is of type" - f" {type(config.text_config)}." - ) - - if not isinstance(config.vision_config, CLIPVisionConfig): - raise ValueError( - "config.vision_config is expected to be of type CLIPVisionConfig but is of type" - f" {type(config.vision_config)}." - ) - - self.config = config - - text_config = config.text_config - vision_config = config.vision_config - - self.projection_dim = config.projection_dim - - self.text_model = TFCLIPTextTransformer(text_config, name="text_model") - self.vision_model = TFCLIPVisionTransformer(vision_config, name="vision_model") - - self.visual_projection = tf.keras.layers.Dense( - units=self.projection_dim, - kernel_initializer=get_initializer(vision_config.hidden_size**-0.5 * self.config.initializer_factor), - use_bias=False, - name="visual_projection", - ) - - self.text_projection = tf.keras.layers.Dense( - units=self.projection_dim, - kernel_initializer=get_initializer(text_config.hidden_size**-0.5 * self.config.initializer_factor), - use_bias=False, - name="text_projection", - ) - - def build(self, input_shape: tf.TensorShape = None): - self.logit_scale = self.add_weight( - shape=(1,), - initializer=tf.keras.initializers.Constant(self.config.logit_scale_init_value), - trainable=True, - name="logit_scale", - ) - - super().build(input_shape) - - @unpack_inputs - def get_text_features( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> tf.Tensor: - if input_ids is None: - raise ValueError("You have to specify either input_ids") - - input_shape = shape_list(input_ids) - - if attention_mask is None: - attention_mask = tf.fill(dims=input_shape, value=1) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - pooled_output = text_outputs[1] - text_features = self.text_projection(inputs=pooled_output) - - return text_features - - @unpack_inputs - def get_image_features( - self, - pixel_values: TFModelInputType | None = None, - output_attentions: Optional[bool] = None, - 
output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> tf.Tensor: - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - pooled_output = vision_outputs[1] # pooled_output - image_features = self.visual_projection(inputs=pooled_output) - - return image_features - - @unpack_inputs - def call( - self, - input_ids: TFModelInputType | None = None, - pixel_values: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - return_loss: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFCLIPOutput, Tuple[tf.Tensor]]: - if input_ids is None: - raise ValueError("You have to specify either input_ids") - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - input_shape = shape_list(input_ids) - - if attention_mask is None: - attention_mask = tf.fill(dims=input_shape, value=1) - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(inputs=image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(inputs=text_embeds) - - # normalized features - image_embeds = image_embeds / tf.norm(tensor=image_embeds, ord="euclidean", axis=-1, keepdims=True) - text_embeds = text_embeds / tf.norm(tensor=text_embeds, ord="euclidean", axis=-1, keepdims=True) - - # cosine similarity as logits - logit_scale = tf.math.exp(self.logit_scale) - logits_per_text = tf.matmul(text_embeds, image_embeds, transpose_b=True) * logit_scale - logits_per_image = tf.transpose(logits_per_text) - - loss = None - if return_loss: - loss = clip_loss(logits_per_text) - loss = tf.reshape(loss, (1,)) - - if not return_dict: - output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - return (loss,) + output if loss is not None else output - - return TFCLIPOutput( - loss=loss, - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -class TFCLIPPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = CLIPConfig - base_model_prefix = "clip" - _keys_to_ignore_on_load_missing = [r"position_ids"] - _keys_to_ignore_on_load_unexpected = [r"position_ids"] - - -CLIP_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. 
Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Args: - config ([`CLIPConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~TFPreTrainedModel.from_pretrained`] method to load the model weights. -""" - -CLIP_TEXT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False``): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - -CLIP_VISION_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`CLIPImageProcessor.__call__`] for details. output_attentions (`bool`, *optional*): Whether or not to - return the attentions tensors of all attention layers. See `attentions` under returned tensors for more - detail. This argument can be used only in eager mode, in graph mode the value in the config will be used - instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False``): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - -CLIP_INPUTS_DOCSTRING = r""" - Args: - input_ids (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` ``Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - pixel_values (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]` `Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]` and each example must have the shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`CLIPImageProcessor.__call__`] for details. - attention_mask (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. 
- - [What are position IDs?](../glossary#position-ids) - return_loss (`bool`, *optional*): - Whether or not to return the contrastive loss. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False``): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -class TFCLIPTextModel(TFCLIPPreTrainedModel): - config_class = CLIPTextConfig - - def __init__(self, config: CLIPTextConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.clip = TFCLIPTextMainLayer(config, name="clip") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFBaseModelOutputWithPooling, config_class=CLIPTextConfig) - def call( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - r""" - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, TFCLIPTextModel - - >>> model = TFCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") - >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") - - >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled (EOS token) states - ```""" - - outputs = self.clip( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -class TFCLIPVisionModel(TFCLIPPreTrainedModel): - config_class = CLIPVisionConfig - main_input_name = "pixel_values" - - def __init__(self, config: CLIPVisionConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.clip = TFCLIPVisionMainLayer(config, name="clip") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFBaseModelOutputWithPooling, config_class=CLIPVisionConfig) - def call( - self, - pixel_values: TFModelInputType | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> 
Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, TFCLIPVisionModel - - >>> model = TFCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="tf") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled CLS states - ```""" - - outputs = self.clip( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -@add_start_docstrings(CLIP_START_DOCSTRING) -class TFCLIPModel(TFCLIPPreTrainedModel): - config_class = CLIPConfig - - def __init__(self, config: CLIPConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.clip = TFCLIPMainLayer(config, name="clip") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def get_text_features( - self, - input_ids: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> tf.Tensor: - r""" - Returns: - text_features (`tf.Tensor` of shape `(batch_size, output_dim`): The text embeddings obtained by applying - the projection layer to the pooled output of [`TFCLIPTextModel`]. - - Examples: - - ```python - >>> from transformers import AutoTokenizer, TFCLIPModel - - >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") - - >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") - >>> text_features = model.get_text_features(**inputs) - ```""" - - text_features = self.clip.get_text_features( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - return text_features - - @unpack_inputs - @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING) - def get_image_features( - self, - pixel_values: TFModelInputType | None = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> tf.Tensor: - r""" - Returns: - image_features (`tf.Tensor` of shape `(batch_size, output_dim`): The image embeddings obtained by applying - the projection layer to the pooled output of [`TFCLIPVisionModel`]. 
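# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# get_text_features / get_image_features return the projected but *un-normalized*
# pooled embeddings, so reproducing the model's similarity scores by hand needs the
# same L2 normalization used in TFCLIPMainLayer.call. Checkpoint name follows the
# doctests above; treat this as a sketch, not a reference implementation.
import tensorflow as tf
from transformers import AutoTokenizer, TFCLIPModel

model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
text_features = model.get_text_features(**inputs)
text_features = text_features / tf.norm(text_features, axis=-1, keepdims=True)
print(tf.matmul(text_features, text_features, transpose_b=True))  # pairwise cosine similarities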
- - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, TFCLIPModel - - >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="tf") - - >>> image_features = model.get_image_features(**inputs) - ```""" - - image_features = self.clip.get_image_features( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - return image_features - - @unpack_inputs - @add_start_docstrings_to_model_forward(CLIP_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFCLIPOutput, config_class=CLIPConfig) - def call( - self, - input_ids: TFModelInputType | None = None, - pixel_values: TFModelInputType | None = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - return_loss: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFCLIPOutput, Tuple[tf.Tensor]]: - r""" - Returns: - - Examples: - - ```python - >>> import tensorflow as tf - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, TFCLIPModel - - >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor( - ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True - ... ) - - >>> outputs = model(**inputs) - >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score - >>> probs = tf.nn.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities - ```""" - - outputs = self.clip( - input_ids=input_ids, - pixel_values=pixel_values, - attention_mask=attention_mask, - position_ids=position_ids, - return_loss=return_loss, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - return outputs - - def serving_output(self, output: TFCLIPOutput) -> TFCLIPOutput: - # TODO: As is this currently fails with saved_model=True, because - # TensorFlow cannot trace through nested dataclasses. Reference: - # https://github.com/huggingface/transformers/pull/16886 - return output diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dinat/configuration_dinat.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dinat/configuration_dinat.py deleted file mode 100644 index b70797b55c342dc543b0afeedbf1496745598950..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dinat/configuration_dinat.py +++ /dev/null @@ -1,151 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Dilated Neighborhood Attention Transformer model configuration""" - -from ...configuration_utils import PretrainedConfig -from ...utils import logging -from ...utils.backbone_utils import BackboneConfigMixin, get_aligned_output_features_output_indices - - -logger = logging.get_logger(__name__) - -DINAT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "shi-labs/dinat-mini-in1k-224": "https://huggingface.co/shi-labs/dinat-mini-in1k-224/resolve/main/config.json", - # See all Dinat models at https://huggingface.co/models?filter=dinat -} - - -class DinatConfig(BackboneConfigMixin, PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`DinatModel`]. It is used to instantiate a Dinat - model according to the specified arguments, defining the model architecture. Instantiating a configuration with the - defaults will yield a similar configuration to that of the Dinat - [shi-labs/dinat-mini-in1k-224](https://huggingface.co/shi-labs/dinat-mini-in1k-224) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - patch_size (`int`, *optional*, defaults to 4): - The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment. - num_channels (`int`, *optional*, defaults to 3): - The number of input channels. - embed_dim (`int`, *optional*, defaults to 64): - Dimensionality of patch embedding. - depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 5]`): - Number of layers in each level of the encoder. - num_heads (`List[int]`, *optional*, defaults to `[2, 4, 8, 16]`): - Number of attention heads in each layer of the Transformer encoder. - kernel_size (`int`, *optional*, defaults to 7): - Neighborhood Attention kernel size. - dilations (`List[List[int]]`, *optional*, defaults to `[[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]]`): - Dilation value of each NA layer in the Transformer encoder. - mlp_ratio (`float`, *optional*, defaults to 3.0): - Ratio of MLP hidden dimensionality to embedding dimensionality. - qkv_bias (`bool`, *optional*, defaults to `True`): - Whether or not a learnable bias should be added to the queries, keys and values. - hidden_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout probability for all fully connected layers in the embeddings and encoder. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - drop_path_rate (`float`, *optional*, defaults to 0.1): - Stochastic depth rate. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, - `"selu"` and `"gelu_new"` are supported. 
- initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-05): - The epsilon used by the layer normalization layers. - layer_scale_init_value (`float`, *optional*, defaults to 0.0): - The initial value for the layer scale. Disabled if <=0. - out_features (`List[str]`, *optional*): - If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. - (depending on how many stages the model has). If unset and `out_indices` is set, will default to the - corresponding stages. If unset and `out_indices` is unset, will default to the last stage. - out_indices (`List[int]`, *optional*): - If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how - many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. - If unset and `out_features` is unset, will default to the last stage. - - Example: - - ```python - >>> from transformers import DinatConfig, DinatModel - - >>> # Initializing a Dinat shi-labs/dinat-mini-in1k-224 style configuration - >>> configuration = DinatConfig() - - >>> # Initializing a model (with random weights) from the shi-labs/dinat-mini-in1k-224 style configuration - >>> model = DinatModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "dinat" - - attribute_map = { - "num_attention_heads": "num_heads", - "num_hidden_layers": "num_layers", - } - - def __init__( - self, - patch_size=4, - num_channels=3, - embed_dim=64, - depths=[3, 4, 6, 5], - num_heads=[2, 4, 8, 16], - kernel_size=7, - dilations=[[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]], - mlp_ratio=3.0, - qkv_bias=True, - hidden_dropout_prob=0.0, - attention_probs_dropout_prob=0.0, - drop_path_rate=0.1, - hidden_act="gelu", - initializer_range=0.02, - layer_norm_eps=1e-5, - layer_scale_init_value=0.0, - out_features=None, - out_indices=None, - **kwargs, - ): - super().__init__(**kwargs) - - self.patch_size = patch_size - self.num_channels = num_channels - self.embed_dim = embed_dim - self.depths = depths - self.num_layers = len(depths) - self.num_heads = num_heads - self.kernel_size = kernel_size - self.dilations = dilations - self.mlp_ratio = mlp_ratio - self.qkv_bias = qkv_bias - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.drop_path_rate = drop_path_rate - self.hidden_act = hidden_act - self.layer_norm_eps = layer_norm_eps - self.initializer_range = initializer_range - # we set the hidden_size attribute in order to make Dinat work with VisionEncoderDecoderModel - # this indicates the channel dimension after the last stage of the model - self.hidden_size = int(embed_dim * 2 ** (len(depths) - 1)) - self.layer_scale_init_value = layer_scale_init_value - self.stage_names = ["stem"] + [f"stage{idx}" for idx in range(1, len(depths) + 1)] - self._out_features, self._out_indices = get_aligned_output_features_output_indices( - out_features=out_features, out_indices=out_indices, stage_names=self.stage_names - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_tf_gpt2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_tf_gpt2.py deleted file mode 100644 index 
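# --- Illustrative sketch (standalone; not part of the deleted configuration file) ---
# How the derived DinatConfig attributes fall out of the defaults listed above:
# num_layers is the number of stages, and hidden_size is the channel dimension after
# the last stage, with embed_dim doubling once per stage.
embed_dim = 64
depths = [3, 4, 6, 5]

num_layers = len(depths)                               # 4
hidden_size = int(embed_dim * 2 ** (num_layers - 1))   # 64 * 2**3 = 512
stage_names = ["stem"] + [f"stage{idx}" for idx in range(1, num_layers + 1)]
print(hidden_size, stage_names)  # 512 ['stem', 'stage1', 'stage2', 'stage3', 'stage4']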
525207268e2279de07eab4b32f1deacdfa46de23..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_tf_gpt2.py +++ /dev/null @@ -1,1119 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" TF 2.0 OpenAI GPT-2 model.""" - -from __future__ import annotations - -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import ( - TFBaseModelOutputWithPastAndCrossAttentions, - TFCausalLMOutputWithCrossAttentions, - TFSequenceClassifierOutputWithPast, -) -from ...modeling_tf_utils import ( - TFCausalLanguageModelingLoss, - TFConv1D, - TFModelInputType, - TFPreTrainedModel, - TFSequenceClassificationLoss, - TFSequenceSummary, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_gpt2 import GPT2Config - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "gpt2" -_CONFIG_FOR_DOC = "GPT2Config" - -TF_GPT2_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "gpt2", - "gpt2-medium", - "gpt2-large", - "gpt2-xl", - "distilgpt2", - # See all GPT-2 models at https://huggingface.co/models?filter=gpt2 -] - - -class TFAttention(tf.keras.layers.Layer): - def __init__(self, nx, config, scale=False, is_cross_attention=False, **kwargs): - super().__init__(**kwargs) - - n_state = nx # in Attention: n_state=768 (nx=n_embd) - # [switch nx => n_state from Block to Attention to keep identical to TF implementation] - assert n_state % config.n_head == 0 - self.n_head = config.n_head - self.split_size = n_state - self.scale = scale - self.output_attentions = config.output_attentions - - self.is_cross_attention = is_cross_attention - - if self.is_cross_attention: - self.c_attn = TFConv1D(n_state * 2, nx, initializer_range=config.initializer_range, name="c_attn") - self.q_attn = TFConv1D(n_state, nx, initializer_range=config.initializer_range, name="q_attn") - else: - self.c_attn = TFConv1D(n_state * 3, nx, initializer_range=config.initializer_range, name="c_attn") - - self.c_proj = TFConv1D(n_state, nx, initializer_range=config.initializer_range, name="c_proj") - self.attn_dropout = tf.keras.layers.Dropout(config.attn_pdrop) - self.resid_dropout = tf.keras.layers.Dropout(config.resid_pdrop) - self.pruned_heads = set() - - def prune_heads(self, heads): - pass - - @staticmethod - def causal_attention_mask(nd, ns, dtype): - """ - 1's in the lower triangle, counting from the lower right corner. 
Same as tf.matrix_band_part(tf.ones([nd, ns]), - -1, ns-nd), but doesn't produce garbage on TPUs. - """ - i = tf.range(nd)[:, None] - j = tf.range(ns) - m = i >= j - ns + nd - return tf.cast(m, dtype) - - def _attn(self, q, k, v, attention_mask, head_mask, output_attentions, training=False): - # q, k, v have shape [batch, heads, sequence, features] - w = tf.matmul(q, k, transpose_b=True) - if self.scale: - dk = tf.cast(shape_list(k)[-1], dtype=w.dtype) # scale attention_scores - w = w / tf.math.sqrt(dk) - - if not self.is_cross_attention: - # if only "normal" attention layer implements causal mask - - # w has shape [batch, heads, dst_sequence, src_sequence], where information flows from src to dst. - _, _, nd, ns = shape_list(w) - b = self.causal_attention_mask(nd, ns, dtype=w.dtype) - b = tf.reshape(b, [1, 1, nd, ns]) - w = w * b - 1e4 * (1 - b) - - if attention_mask is not None: - # Apply the attention mask - attention_mask = tf.cast(attention_mask, dtype=w.dtype) - w = w + attention_mask - - w = stable_softmax(w, axis=-1) - w = self.attn_dropout(w, training=training) - - # Mask heads if we want to - if head_mask is not None: - w = w * head_mask - - outputs = [tf.matmul(w, v)] - if output_attentions: - outputs.append(w) - return outputs - - def merge_heads(self, x): - x = tf.transpose(x, [0, 2, 1, 3]) - x_shape = shape_list(x) - new_x_shape = x_shape[:-2] + [x_shape[-2] * x_shape[-1]] - return tf.reshape(x, new_x_shape) - - def split_heads(self, x): - x_shape = shape_list(x) - new_x_shape = x_shape[:-1] + [self.n_head, x_shape[-1] // self.n_head] - x = tf.reshape(x, new_x_shape) - return tf.transpose(x, (0, 2, 1, 3)) # (batch, head, seq_length, head_features) - - def call( - self, - x, - layer_past, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - use_cache, - output_attentions, - training=False, - ): - if encoder_hidden_states is not None: - if not hasattr(self, "q_attn"): - raise ValueError( - "If class is used as cross attention, the weights `q_attn` have to be defined. " - "Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`." 
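# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# What causal_attention_mask(nd, ns, dtype) produces: ones in the lower triangle,
# aligned to the lower-right corner, so previously cached key positions stay
# visible while future positions are masked. The sizes below are toy assumptions.
import tensorflow as tf

def causal_attention_mask(nd, ns, dtype):
    i = tf.range(nd)[:, None]
    j = tf.range(ns)
    m = i >= j - ns + nd
    return tf.cast(m, dtype)

# 2 new query positions attending over 4 key positions (2 cached + 2 new):
print(causal_attention_mask(2, 4, tf.float32))
# [[1. 1. 1. 0.]
#  [1. 1. 1. 1.]]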
- ) - - query = self.q_attn(x) - kv_out = self.c_attn(encoder_hidden_states) - key, value = tf.split(kv_out, 2, axis=2) - attention_mask = encoder_attention_mask - else: - x = self.c_attn(x) - query, key, value = tf.split(x, 3, axis=2) - - query = self.split_heads(query) - key = self.split_heads(key) - value = self.split_heads(value) - if layer_past is not None: - past_key, past_value = tf.unstack(layer_past, axis=0, num=2) - key = tf.concat([past_key, key], axis=-2) - value = tf.concat([past_value, value], axis=-2) - - # to cope with keras serialization - if use_cache: - present = tf.stack([key, value], axis=0) - else: - present = (None,) - - attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions, training=training) - a = attn_outputs[0] - - a = self.merge_heads(a) - a = self.c_proj(a) - a = self.resid_dropout(a, training=training) - - outputs = [a, present] + attn_outputs[1:] - return outputs # a, present, (attentions) - - -class TFMLP(tf.keras.layers.Layer): - def __init__(self, n_state, config, **kwargs): - super().__init__(**kwargs) - nx = config.n_embd - self.c_fc = TFConv1D(n_state, nx, initializer_range=config.initializer_range, name="c_fc") - self.c_proj = TFConv1D(nx, n_state, initializer_range=config.initializer_range, name="c_proj") - self.act = get_tf_activation(config.activation_function) - self.dropout = tf.keras.layers.Dropout(config.resid_pdrop) - - def call(self, x, training=False): - h = self.act(self.c_fc(x)) - h2 = self.c_proj(h) - h2 = self.dropout(h2, training=training) - return h2 - - -class TFBlock(tf.keras.layers.Layer): - def __init__(self, config, scale=False, **kwargs): - super().__init__(**kwargs) - nx = config.n_embd - inner_dim = config.n_inner if config.n_inner is not None else 4 * nx - self.ln_1 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="ln_1") - self.attn = TFAttention(nx, config, scale, name="attn") - self.ln_2 = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="ln_2") - - if config.add_cross_attention: - self.crossattention = TFAttention(nx, config, scale, name="crossattention", is_cross_attention=True) - self.ln_cross_attn = tf.keras.layers.LayerNormalization( - epsilon=config.layer_norm_epsilon, name="ln_cross_attn" - ) - - self.mlp = TFMLP(inner_dim, config, name="mlp") - - def call( - self, - x, - layer_past, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - use_cache, - output_attentions, - training=False, - ): - a = self.ln_1(x) - output_attn = self.attn( - a, - layer_past=layer_past, - attention_mask=attention_mask, - head_mask=head_mask, - encoder_hidden_states=None, - encoder_attention_mask=None, - use_cache=use_cache, - output_attentions=output_attentions, - training=training, - ) - a = output_attn[0] # output_attn: a, present, (attentions) - outputs = output_attn[1:] - x = x + a - - # Cross-Attention Block - if encoder_hidden_states is not None: - # add one self-attention block for cross-attention - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with " - "cross-attention layers by setting `config.add_cross_attention=True`" - ) - - ca = self.ln_cross_attn(x) - output_cross_attn = self.crossattention( - ca, - layer_past=None, - attention_mask=attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=False, - output_attentions=output_attentions, 
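# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# The key/value cache handling above: freshly computed keys/values are appended to
# the cached ones along the sequence axis (-2), and the pair is re-stacked as the
# layer's new `present` entry. Shapes are toy assumptions: (batch, heads, seq, head_dim).
import tensorflow as tf

past_key = tf.random.normal((1, 12, 5, 64))    # 5 cached positions
past_value = tf.random.normal((1, 12, 5, 64))
new_key = tf.random.normal((1, 12, 1, 64))     # 1 new position
new_value = tf.random.normal((1, 12, 1, 64))

key = tf.concat([past_key, new_key], axis=-2)
value = tf.concat([past_value, new_value], axis=-2)
present = tf.stack([key, value], axis=0)
print(key.shape, present.shape)  # (1, 12, 6, 64) (2, 1, 12, 6, 64)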
- training=training, - ) - ca = output_cross_attn[0] # output_attn: a, present, (cross_attentions) - x = x + ca - outputs = outputs + output_cross_attn[2:] # add cross attentions if we output attention weights - - m = self.ln_2(x) - m = self.mlp(m, training=training) - x = x + m - - outputs = [x] + outputs - return outputs # x, present, (attentions, cross_attentions) - - -@keras_serializable -class TFGPT2MainLayer(tf.keras.layers.Layer): - config_class = GPT2Config - - def __init__(self, config, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - self.config = config - self.output_attentions = config.output_attentions - self.output_hidden_states = config.output_hidden_states - self.use_cache = config.use_cache - self.return_dict = config.use_return_dict - - self.num_hidden_layers = config.n_layer - self.n_embd = config.n_embd - self.n_positions = config.n_positions - self.initializer_range = config.initializer_range - - self.wte = tf.keras.layers.Embedding( - input_dim=config.vocab_size, - output_dim=config.hidden_size, - embeddings_initializer=get_initializer(config.initializer_range), - name="wte", - ) - self.wpe = tf.keras.layers.Embedding( - input_dim=config.n_positions, - output_dim=config.n_embd, - embeddings_initializer=get_initializer(config.initializer_range), - name="wpe", - ) - self.drop = tf.keras.layers.Dropout(config.embd_pdrop) - self.h = [TFBlock(config, scale=True, name=f"h_._{i}") for i in range(config.n_layer)] - self.ln_f = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="ln_f") - - def get_input_embeddings(self): - return self.wte - - def set_input_embeddings(self, new_embeddings): - self.wte = new_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} - """ - raise NotImplementedError - - @unpack_inputs - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - encoder_hidden_states: np.ndarray | tf.Tensor | None = None, - encoder_attention_mask: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFBaseModelOutputWithPastAndCrossAttentions, Tuple[tf.Tensor]]: - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - input_ids = tf.reshape(input_ids, [-1, input_shape[-1]]) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if past_key_values is None: - past_length = 0 - past_key_values = [None] * len(self.h) - else: - past_length = shape_list(past_key_values[0][0])[-2] - - if position_ids is None: - position_ids = tf.expand_dims(tf.range(past_length, input_shape[-1] + past_length), axis=0) - - if attention_mask is not None: - # We create a 3D attention mask from a 2D tensor mask. 
- # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is simpler than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask_shape = shape_list(attention_mask) - attention_mask = tf.reshape(attention_mask, (attention_mask_shape[0], 1, 1, attention_mask_shape[1])) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - one_cst = tf.constant(1.0) - attention_mask = tf.cast(attention_mask, dtype=one_cst.dtype) - attention_mask = tf.multiply(tf.subtract(one_cst, attention_mask), tf.constant(-10000.0)) - - # Copied from `modeling_tf_t5.py` with -1e9 -> -10000 - if self.config.add_cross_attention and encoder_attention_mask is not None: - # If a 2D or 3D attention mask is provided for the cross-attention, - # we need to make it broadcastable to [batch_size, num_heads, seq_length, mask_seq_length] - encoder_attention_mask = tf.cast(encoder_attention_mask, dtype=encoder_hidden_states.dtype) - num_dims_encoder_attention_mask = len(shape_list(encoder_attention_mask)) - if num_dims_encoder_attention_mask == 3: - encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :] - if num_dims_encoder_attention_mask == 2: - encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :] - - # T5 has a mask that can compare sequence ids, we can simulate this here with this transposition - # Cf. 
https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270 - # encoder_extended_attention_mask = tf.math.equal(encoder_extended_attention_mask, - # tf.transpose(encoder_extended_attention_mask, perm=(-1, -2))) - - encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -10000.0 - else: - encoder_extended_attention_mask = None - - encoder_attention_mask = encoder_extended_attention_mask - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.num_hidden_layers - # head_mask = tf.constant([0] * self.num_hidden_layers) - - position_ids = tf.reshape(position_ids, [-1, shape_list(position_ids)[-1]]) - - if inputs_embeds is None: - check_embeddings_within_bounds(input_ids, self.config.vocab_size) - inputs_embeds = self.wte(input_ids) - - position_embeds = self.wpe(position_ids) - - if token_type_ids is not None: - token_type_ids = tf.reshape(token_type_ids, [-1, shape_list(token_type_ids)[-1]]) - token_type_embeds = self.wte(token_type_ids) - else: - token_type_embeds = tf.constant(0.0) - - position_embeds = tf.cast(position_embeds, dtype=inputs_embeds.dtype) - token_type_embeds = tf.cast(token_type_embeds, dtype=inputs_embeds.dtype) - hidden_states = inputs_embeds + position_embeds + token_type_embeds - hidden_states = self.drop(hidden_states, training=training) - - output_shape = input_shape + [shape_list(hidden_states)[-1]] - - presents = () if use_cache else None - all_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - all_hidden_states = () if output_hidden_states else None - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - if output_hidden_states: - all_hidden_states = all_hidden_states + (tf.reshape(hidden_states, output_shape),) - - outputs = block( - hidden_states, - layer_past, - attention_mask, - head_mask[i], - encoder_hidden_states, - encoder_attention_mask, - use_cache, - output_attentions, - training=training, - ) - - hidden_states, present = outputs[:2] - if use_cache: - presents = presents + (present,) - - if output_attentions: - all_attentions = all_attentions + (outputs[2],) - if self.config.add_cross_attention and encoder_hidden_states is not None: - all_cross_attentions = all_cross_attentions + (outputs[3],) - - hidden_states = self.ln_f(hidden_states) - - hidden_states = tf.reshape(hidden_states, output_shape) - # Add last hidden state - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if output_attentions: - # let the number of heads free (-1) so we can extract attention even after head pruning - attention_output_shape = input_shape[:-1] + [-1] + shape_list(all_attentions[0])[-2:] - all_attentions = tuple(tf.reshape(t, attention_output_shape) for t in all_attentions) - - if not return_dict: - return tuple( - v - for v in [hidden_states, presents, all_hidden_states, all_attentions, all_cross_attentions] - if v is not None - ) - - return TFBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - 
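# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# The additive attention mask built in TFGPT2MainLayer.call above: a 2D padding mask
# (1 = attend, 0 = pad) is reshaped to [batch, 1, 1, seq_len] and converted into
# 0.0 / -10000.0 biases that are added to the raw attention scores before softmax.
import tensorflow as tf

attention_mask = tf.constant([[1, 1, 1, 0, 0]])  # one sequence with two pad positions
mask = tf.reshape(tf.cast(attention_mask, tf.float32), (1, 1, 1, 5))
additive_bias = (1.0 - mask) * -10000.0
print(additive_bias)  # 0.0 for real tokens, -10000.0 for the padded positions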
attentions=all_attentions, - cross_attentions=all_cross_attentions, - ) - - -class TFGPT2PreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = GPT2Config - base_model_prefix = "transformer" - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"h.\d+.attn.bias", r"h.\d+.crossattention.bias"] - - -@dataclass -class TFGPT2DoubleHeadsModelOutput(ModelOutput): - """ - Base class for outputs of models predicting if two sentences are consecutive or not. - - Args: - logits (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - mc_logits (`tf.Tensor` of shape `(batch_size, num_choices)`): - Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). - past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - logits: tf.Tensor = None - mc_logits: tf.Tensor = None - past_key_values: List[tf.Tensor] | None = None - hidden_states: Tuple[tf.Tensor] | None = None - attentions: Tuple[tf.Tensor] | None = None - - -GPT2_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! 
If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Parameters: - config ([`GPT2Config`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -GPT2_INPUTS_DOCSTRING = r""" - Args: - input_ids (`Numpy array` or `tf.Tensor` of shape `(batch_size, input_ids_length)`): - `input_ids_length` = `sequence_length` if `past_key_values` is `None` else `past_key_values[0].shape[-2]` - (`sequence_length` of input past key value states). Indices of input sequence tokens in the vocabulary. - - If `past_key_values` is used, only input IDs that do not have their past calculated should be passed as - `input_ids`. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - past_key_values (`List[tf.Tensor]` of length `config.n_layers`): - Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see - `past_key_values` output below). Can be used to speed up sequential decoding. The token ids which have - their past given to this model should not be passed as input ids as they have already been computed. - attention_mask (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for - `past_key_values`. In other words, the `attention_mask` always has to have the length: - `len(past_key_values) + len(input_ids)` - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. 
- - [What are position IDs?](../glossary#position-ids) - head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -@add_start_docstrings( - "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.", - GPT2_START_DOCSTRING, -) -class TFGPT2Model(TFGPT2PreTrainedModel): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.transformer = TFGPT2MainLayer(config, name="transformer") - - @unpack_inputs - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPastAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - encoder_hidden_states: np.ndarray | tf.Tensor | None = None, - encoder_attention_mask: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFBaseModelOutputWithPastAndCrossAttentions, Tuple[tf.Tensor]]: - r""" - encoder_hidden_states (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. 
This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - past_key_values (`Tuple[Tuple[tf.Tensor]]` of length `config.n_layers`) - contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If `past` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have - their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*, defaults to `True`): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past`). Set to `False` during training, `True` during generation - """ - - outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - GPT2_START_DOCSTRING, -) -class TFGPT2LMHeadModel(TFGPT2PreTrainedModel, TFCausalLanguageModelingLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.transformer = TFGPT2MainLayer(config, name="transformer") - - def get_output_embeddings(self): - return self.get_input_embeddings() - - def set_output_embeddings(self, value): - self.set_input_embeddings(value) - - def prepare_inputs_for_generation(self, inputs, past_key_values=None, use_cache=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - inputs = tf.expand_dims(inputs[:, -1], -1) - if token_type_ids is not None: - token_type_ids = tf.expand_dims(token_type_ids[:, -1], -1) - - position_ids = kwargs.get("position_ids", None) - attention_mask = kwargs.get("attention_mask", None) - - if attention_mask is not None and position_ids is None: - position_ids = tf.math.cumsum(attention_mask, axis=-1, exclusive=True) - if past_key_values: - position_ids = tf.expand_dims(position_ids[:, -1], -1) - - return { - "input_ids": inputs, - "attention_mask": attention_mask, - "position_ids": position_ids, - "past_key_values": past_key_values, - "use_cache": use_cache, - "token_type_ids": token_type_ids, - } - - @unpack_inputs - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFCausalLMOutputWithCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - 
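# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# How prepare_inputs_for_generation derives position_ids from the attention mask:
# an exclusive cumulative sum counts the non-padding tokens before each position,
# and once a cache exists only the newest position is kept. Toy mask assumed.
import tensorflow as tf

attention_mask = tf.constant([[0, 0, 1, 1, 1]])  # left-padded prompt
position_ids = tf.math.cumsum(attention_mask, axis=-1, exclusive=True)
print(position_ids)  # [[0 0 0 1 2]]

# with past_key_values present, only the last position id is passed on
last_position = tf.expand_dims(position_ids[:, -1], -1)
print(last_position)  # [[2]]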
encoder_hidden_states: np.ndarray | tf.Tensor | None = None, - encoder_attention_mask: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFCausalLMOutputWithCrossAttentions, Tuple[tf.Tensor]]: - r""" - encoder_hidden_states (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - past_key_values (`Tuple[Tuple[tf.Tensor]]` of length `config.n_layers`) - contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If `past` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have - their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*, defaults to `True`): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past`). Set to `False` during training, `True` during generation - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the cross entropy classification loss. Indices should be in `[0, ..., - config.vocab_size - 1]`. - """ - - transformer_outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - hidden_states = transformer_outputs[0] - logits = tf.matmul(hidden_states, self.transformer.wte.weights, transpose_b=True) - - loss = None - if labels is not None: - # shift labels to the left and cut last logit token - shifted_logits = logits[:, :-1] - labels = labels[:, 1:] - loss = self.hf_compute_loss(labels, shifted_logits) - - if not return_dict: - output = (logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFCausalLMOutputWithCrossAttentions( - loss=loss, - logits=logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for - RocStories/SWAG tasks. The two heads are two linear layers. 
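# --- Illustrative sketch (standalone; not part of the deleted modeling file) ---
# The causal-LM loss setup in TFGPT2LMHeadModel.call above: logits come from the tied
# input-embedding matrix, then logits and labels are shifted by one position so each
# token is scored on predicting the *next* token. Sizes and data are toy assumptions.
import tensorflow as tf

batch, seq, hidden, vocab = 2, 6, 8, 20
hidden_states = tf.random.normal((batch, seq, hidden))
embedding_matrix = tf.random.normal((vocab, hidden))  # stand-in for transformer.wte.weights
labels = tf.random.uniform((batch, seq), maxval=vocab, dtype=tf.int32)

logits = tf.matmul(hidden_states, embedding_matrix, transpose_b=True)  # (batch, seq, vocab)
shifted_logits = logits[:, :-1]  # drop the prediction after the final token
shifted_labels = labels[:, 1:]   # each remaining position predicts the following token
loss = tf.keras.losses.sparse_categorical_crossentropy(shifted_labels, shifted_logits, from_logits=True)
print(loss.shape)  # (2, 5)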
The language modeling head has its weights tied to the - input embeddings, the classification head takes as input the input of a specified classification token index in the - input sequence). - """, - GPT2_START_DOCSTRING, -) -class TFGPT2DoubleHeadsModel(TFGPT2PreTrainedModel): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - config.num_labels = 1 - self.transformer = TFGPT2MainLayer(config, name="transformer") - self.multiple_choice_head = TFSequenceSummary( - config, initializer_range=config.initializer_range, name="multiple_choice_head" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=TFGPT2DoubleHeadsModelOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - mc_token_ids: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFGPT2DoubleHeadsModelOutput, Tuple[tf.Tensor]]: - r""" - mc_token_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, num_choices)`, *optional*, default to index of the last token of the input): - Index of the classification token in each input sequence. Selected in the range `[0, input_ids.size(-1) - - 1]`. - - Return: - - Examples: - - ```python - >>> import tensorflow as tf - >>> from transformers import AutoTokenizer, TFGPT2DoubleHeadsModel - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = TFGPT2DoubleHeadsModel.from_pretrained("gpt2") - - >>> # Add a [CLS] to the vocabulary (we should train it also!) - >>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"}) - - >>> embedding_layer = model.resize_token_embeddings( - ... len(tokenizer) - ... 
) # Update the model embeddings with the new vocabulary size - - >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] - >>> encoded_choices = [tokenizer.encode(s) for s in choices] - >>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] - - >>> input_ids = tf.constant(encoded_choices)[None, :] # Batch size: 1, number of choices: 2 - >>> mc_token_ids = tf.constant([cls_token_location]) # Batch size: 1 - - >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) - >>> lm_prediction_scores, mc_prediction_scores = outputs[:2] - ```""" - - if input_ids is not None: - input_shapes = shape_list(input_ids) - else: - input_shapes = shape_list(inputs_embeds)[:-1] - - seq_length = input_shapes[-1] - flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None - flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None - flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None - flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None - transformer_outputs = self.transformer( - input_ids=flat_input_ids, - past_key_values=past_key_values, - attention_mask=flat_attention_mask, - token_type_ids=flat_token_type_ids, - position_ids=flat_position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=None, - encoder_attention_mask=None, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - hidden_states = transformer_outputs[0] - hidden_states = tf.reshape(hidden_states, input_shapes + shape_list(hidden_states)[-1:]) - if return_dict and output_hidden_states: - # We do this to match the slightly odd PT behaviour - the final hidden state is reshaped to rank 4 when the - # input is rank 3, but all other hidden states remain at rank-3 (with the first 2 dims merged) - all_hidden_states = transformer_outputs.hidden_states[:-1] + (hidden_states,) - else: - all_hidden_states = None - lm_logits = tf.matmul(hidden_states, self.transformer.wte.weights, transpose_b=True) - mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids, training=training) - mc_logits = tf.squeeze(mc_logits, axis=-1) - - if not return_dict: - return (lm_logits, mc_logits) + transformer_outputs[1:] - - return TFGPT2DoubleHeadsModelOutput( - logits=lm_logits, - mc_logits=mc_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=all_hidden_states, - attentions=transformer_outputs.attentions, - ) - - @property - def input_signature(self): - return { - "input_ids": tf.TensorSpec((None, None, None), tf.int32, name="input_ids"), - "attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="attention_mask"), - "mc_token_ids": tf.TensorSpec((None, None), tf.int32, name="mc_token_ids"), - } - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a sequence classification head on top (linear layer). - - [`TFGPT2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-1) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. 
If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). - """, - GPT2_START_DOCSTRING, -) -class TFGPT2ForSequenceClassification(TFGPT2PreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - self.score = tf.keras.layers.Dense( - config.num_labels, - kernel_initializer=get_initializer(config.initializer_range), - name="score", - use_bias=False, - ) - self.transformer = TFGPT2MainLayer(config, name="transformer") - - @unpack_inputs - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint="microsoft/DialogRPT-updown", - output_type=TFSequenceClassifierOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[TFSequenceClassifierOutputWithPast, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the cross entropy classification loss. Indices should be in `[0, ..., - config.vocab_size - 1]`. - """ - transformer_outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - logits_shape = shape_list(logits) - in_logits = None - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = ( - tf.argmax(tf.cast(tf.math.equal(input_ids, self.config.pad_token_id), input_ids.dtype), axis=-1) - - 1 - ) - sequence_lengths = tf.where(sequence_lengths >= 0, sequence_lengths, input_ids.shape[-1] - 1) - in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - loss = None - - if labels is not None: - assert ( - self.config.pad_token_id is not None or logits_shape[0] == 1 - ), "Cannot handle batch sizes > 1 if no padding token is defined." 
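A standalone sketch (toy values, assuming right padding as in the classifier above) of how the last non-padding position is located with an argmax over pad-token matches and then gathered to pool one logit row per example; this is an illustration, not the library's method.

import tensorflow as tf

pad_token_id = 0
input_ids = tf.constant([[5, 6, 7, 0, 0],
                         [9, 8, 0, 0, 0]])
logits = tf.random.normal((2, 5, 3))  # (batch, seq, num_labels)

# index of the first pad token minus one == index of the last real token;
# rows without padding fall back to the final position
sequence_lengths = tf.argmax(
    tf.cast(tf.math.equal(input_ids, pad_token_id), input_ids.dtype), axis=-1) - 1
sequence_lengths = tf.where(sequence_lengths >= 0, sequence_lengths, input_ids.shape[-1] - 1)

pooled_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)  # (batch, num_labels)
print(sequence_lengths.numpy())  # [2 1]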
- - if not tf.is_tensor(sequence_lengths): - in_logits = logits[0 : logits_shape[0], sequence_lengths] - - loss = self.hf_compute_loss(tf.reshape(labels, [-1]), tf.reshape(in_logits, [-1, self.num_labels])) - pooled_logits = in_logits if in_logits is not None else logits - - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mega/modeling_mega.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mega/modeling_mega.py deleted file mode 100644 index 45ce5242428fbdac399bf7604aaaf9972f49c8ff..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mega/modeling_mega.py +++ /dev/null @@ -1,2277 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The Mega Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch MEGA model.""" - -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import ALL_LAYERNORM_LAYERS -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_mega import MegaConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "mnaylor/mega-base-wikitext" -_CONFIG_FOR_DOC = "MegaConfig" - -MEGA_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "mnaylor/mega-base-wikitext", - # See all Mega models at https://huggingface.co/models?filter=mega -] - - -class MegaEmbeddings(nn.Module): - """ - Mega's basic implementation does not incorporate token type embeddings, so this is a stripped-down version of - RoBERTa's embeddings which optionally includes token types - """ - - def __init__(self, config: MegaConfig): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.use_token_types = config.add_token_type_embeddings - if self.use_token_types: - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - # registering a buffer here allows model tracing when not passing optional token type IDs - # more info at 
transformers issue #5664 - self.register_buffer( - "token_type_ids", torch.zeros(config.max_positions, dtype=torch.long).expand((1, -1)), persistent=False - ) - - self.padding_idx = config.pad_token_id - - def forward(self, input_ids=None, token_type_ids=None, inputs_embeds=None): - if (input_ids is None) and (inputs_embeds is None): - raise ValueError("Must provide one of input_ids or inputs_embeds") - elif input_ids is not None: - input_shape = input_ids.size() - device = input_ids.device - - # get the word embeddings if only IDs are provided - inputs_embeds = self.word_embeddings(input_ids) - else: - input_shape = inputs_embeds.size()[:-1] - device = inputs_embeds.device - - # the original Mega implementation did not include token type embeddings, so we add - # an option to use them if desired; if embeddings are present and token type IDs are - # not provided, we will use a registered buffer (which helps with tracing) - if self.use_token_types: - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, : input_shape[1]] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], input_shape[1]) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # access token type embeddings - token_type_embeddings = self.token_type_embeddings(token_type_ids) - # add the token type embeddings to the word embeddings - embeddings = inputs_embeds + token_type_embeddings - else: - embeddings = inputs_embeds - return embeddings - - -class MegaSimpleRelativePositionalBias(nn.Module): - """ - Simple relative positional embeddings copied from the Mega repo; renamed variables for better readability - """ - - def __init__(self, config: MegaConfig): - super().__init__() - self.config = config - self.max_positions = self.config.max_positions if self.config.chunk_size < 0 else self.config.chunk_size - self.rel_pos_bias = nn.Parameter(torch.Tensor(2 * config.max_positions - 1)) - - def forward(self, seq_len): - if seq_len > self.max_positions: - raise ValueError("Sequence length {} going beyond max length {}".format(seq_len, self.max_positions)) - - # seq_len * 2 - 1 - bias = self.rel_pos_bias[(self.max_positions - seq_len) : (self.max_positions + seq_len - 1)] - # seq_len * 3 - 1 - tile = F.pad(bias, (0, seq_len)) - # (seq_len * 3 - 1) * seq_len - tile = torch.tile(tile, (seq_len,)) - tile = tile[:-seq_len] - # seq_len x (3 * seq_len - 2) - tile = tile.view(seq_len, 3 * seq_len - 2) - start = (2 * seq_len - 1) // 2 - end = tile.size(1) - start - tile = tile[:, start:end] - return tile - - -class MegaRotaryRelativePositionalBias(nn.Module): - """ - Rotary relative bias for positional information; similar in concept to RoPE (i.e. RoFormer) but taken from the Mega - repo due to differences in implementation. - - When initialized, produces a positional bias which ranges from position 0 to config.max_positions, but can - extrapolate to longer sequences. 
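A toy-sized sketch checking that the pad/tile/reshape trick in MegaSimpleRelativePositionalBias.forward above builds a matrix whose (i, j) entry is the learned bias for relative offset j - i, by comparing it against a direct gather.

import torch
import torch.nn.functional as F

max_positions, seq_len = 6, 4
rel_pos_bias = torch.randn(2 * max_positions - 1)  # learned parameter in the module

# the pad / tile / reshape "skew" trick from the forward above
bias = rel_pos_bias[(max_positions - seq_len):(max_positions + seq_len - 1)]
tile = torch.tile(F.pad(bias, (0, seq_len)), (seq_len,))[:-seq_len].view(seq_len, 3 * seq_len - 2)
start = (2 * seq_len - 1) // 2
skewed = tile[:, start:tile.size(1) - start]        # (seq_len, seq_len)

# direct construction: entry (i, j) is the bias at relative offset j - i
i = torch.arange(seq_len).unsqueeze(1)
j = torch.arange(seq_len).unsqueeze(0)
assert torch.equal(skewed, rel_pos_bias[(j - i) + (max_positions - 1)])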
Can be indexed according to input position IDs - """ - - def __init__(self, config: MegaConfig): - super().__init__() - if config.hidden_size % 2 != 0: - raise RuntimeError("Rotary positional bias requires `hidden_size` to be a multiple of 2") - self.config = config - self.embed_dim = config.shared_representation_size - self.max_positions = self.config.max_positions if self.config.chunk_size < 0 else self.config.chunk_size - self.sine, self.cosine = MegaRotaryRelativePositionalBias.get_sinusoid_embeddings( - config.max_positions, self.embed_dim - ) - # alpha and beta parameters for the rotary bias; beta renamed to b_param to avoid clashes with tf/flax weight handling - # in loading pretrained weights - self.alpha = nn.Parameter(torch.Tensor(1, self.embed_dim)) - self.b_param = nn.Parameter(torch.Tensor(1, self.embed_dim)) - self.register_buffer("_float_tensor", torch.FloatTensor([0.0])) - - @staticmethod - def get_sinusoid_embeddings(max_positions: int, embedding_dim: int): - half_dim = embedding_dim // 2 - emb = math.log(10000) / half_dim - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(max_positions, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0) - return torch.sin(emb), torch.cos(emb) - - def rotary(self, input): - seq_len, embed_dim = input.size() - chunk_1, chunk_2 = torch.chunk(input, 2, dim=-1) - if self.sine is None or seq_len > self.sine.size(0): - self.sine, self.cosine = MegaRotaryRelativePositionalBias.get_sinusoid_embeddings(seq_len, embed_dim) - self.max_positions = seq_len - self.sine = self.sine.to(self._float_tensor) - self.cosine = self.cosine.to(self._float_tensor) - - sin = self.sine[:seq_len] - cos = self.cosine[:seq_len] - return torch.cat([chunk_1 * cos - chunk_2 * sin, chunk_2 * cos + chunk_1 * sin], dim=1) - - def forward(self, seq_len): - rotary_alpha = self.rotary(self.alpha.expand(seq_len, self.embed_dim)) - rotary_beta = self.rotary(self.b_param.expand(seq_len, self.embed_dim)) - bias = torch.einsum("mk,nk->mn", rotary_alpha, rotary_beta) - return bias - - -class MegaDropout(nn.Module): - """ - A unified class for standard dropout functionality and featurewise dropout. - - The original fairseq Mega repo used 2 classes for these, which included some unnecessary handling of training logic - and an unused `inplace` option. The original implementation used torch.nn.functional instead of submodules, which - is retained here as well. 
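A standalone sketch (toy sizes, not the module's code) of the rotary construction above: two learned vectors are expanded across positions, rotated by position-dependent sine/cosine pairs, and their pairwise dot products form the (seq_len, seq_len) bias.

import math
import torch

def sinusoids(seq_len, embed_dim):
    half = embed_dim // 2
    freq = torch.exp(torch.arange(half, dtype=torch.float) * -(math.log(10000) / half))
    angles = torch.arange(seq_len, dtype=torch.float).unsqueeze(1) * freq.unsqueeze(0)
    return torch.sin(angles), torch.cos(angles)      # each (seq_len, embed_dim // 2)

def rotary(x):
    seq_len, embed_dim = x.shape
    sin, cos = sinusoids(seq_len, embed_dim)
    x1, x2 = torch.chunk(x, 2, dim=-1)
    return torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)

embed_dim, seq_len = 8, 6
alpha, beta = torch.randn(1, embed_dim), torch.randn(1, embed_dim)  # learned in the module
bias = torch.einsum("mk,nk->mn",
                    rotary(alpha.expand(seq_len, embed_dim)),
                    rotary(beta.expand(seq_len, embed_dim)))        # (seq_len, seq_len)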
- """ - - def __init__(self, dropout_probability, is_featurewise=False): - super().__init__() - self.dropout_probability = dropout_probability - self.is_featurewise = is_featurewise - - def forward(self, input, batch_first: bool = False): - if self.is_featurewise: - if batch_first: - # (batch_size X sequence_length X feature_dimension) - # -> (batch_size X feature_dimension X sequence_length) - # -> (batch_size X sequence_length X feature_dimension) - return F.dropout2d( - input.transpose(-1, -2), p=self.dropout_probability, training=self.training - ).transpose(-1, -2) - else: - if input.dim() != 3: - raise ValueError( - "Feature dropout inputs must be exactly 3-dimensional if inputs are ordered [sequence length, batch size, hidden dimension]" - ) - # (sequence_length X batch_size X feature_dimension) - # -> (batch_size X feature_dimension X sequence_length) - # -> (sequence_length X batch_size X feature_dimension) - return F.dropout2d(input.permute(1, 2, 0), p=self.dropout_probability, training=self.training).permute( - 2, 0, 1 - ) - else: - return F.dropout(input, p=self.dropout_probability, training=self.training) - - -class MegaRMSNorm(nn.Module): - """ - RMSNorm used in Mega implementation. Differs from T5's RMSNorm by applying the weight prior to taking the square - root (as opposed to after in T5) - """ - - def __init__(self, number_features, eps=1e-6, affine=True): - super().__init__() - self.num_features = number_features - self.eps = eps - self.affine = affine - if affine: - self.weight = nn.Parameter(torch.Tensor(self.num_features)) - else: - self.register_parameter("weight", None) - - def forward(self, input): - mean_square = torch.mean(torch.square(input), dim=-1, keepdim=True) - if self.weight is not None: - input = input * self.weight - - input * torch.rsqrt(mean_square + self.eps) - return input - - -class MegaScaleNorm(nn.Module): - """ - Scale normalization introduced in MEGA which is similar to RMSNorm, but uses a single parameter for scalar - multiplication instead of a vector, and applies over a specified dimension - """ - - def __init__(self, dim, eps=1e-6, affine=True): - super().__init__() - self.dim = dim - self.eps = eps - self.affine = affine - if affine: - self.scalar = nn.Parameter(torch.Tensor(1)) - else: - self.register_parameter("scalar", None) - - def forward(self, input): - mean_square = torch.mean(torch.square(input), dim=self.dim, keepdim=True) - if self.scalar is not None: - input = self.scalar * input - - output = input * torch.rsqrt(mean_square + self.eps) - return output - - -class MegaSequenceNorm(nn.Module): - """ - A wrapper class for various layer normalization options used in Mega. Used to handle differences in expectations on - input axis locations for different normalization methods. 
- """ - - def __init__(self, norm_type, embedding_dim, eps=1e-5, affine=True, export=False): - super().__init__() - if norm_type == "layernorm": - self.norm = nn.LayerNorm(embedding_dim, eps, elementwise_affine=affine) - elif norm_type == "scalenorm": - self.norm = MegaScaleNorm(dim=-1, eps=eps, affine=affine) - elif norm_type == "rmsnorm": - self.norm = MegaRMSNorm(embedding_dim, eps=eps, affine=affine) - elif norm_type == "batchnorm": - self.norm = nn.BatchNorm1d(embedding_dim, eps=eps, affine=affine) - elif norm_type == "syncbatchnorm": - self.norm = nn.SyncBatchNorm(embedding_dim, eps=eps, affine=affine) - else: - raise ValueError("Unknown norm type: {}".format(norm_type)) - - def forward(self, input): - if isinstance(self.norm, nn.modules.batchnorm._BatchNorm): - if input.dim() != 3: - raise ValueError("BatchNorm inputs must be exactly 3-dimensional") - input = input.permute(1, 2, 0) - input = self.norm(input) - return input.permute(2, 0, 1) - else: - return self.norm(input) - - -# add this layernorm class to ALL_LAYERNORM_LAYERS -ALL_LAYERNORM_LAYERS.append(MegaSequenceNorm) - - -class MegaMultiDimensionDampedEma(nn.Module): - """ - Mega's Exponential Moving Average layer, largely left unmodified from the original repo with the exception of - variable names and moving away from the stateful representation of incremental decoding state. See - "https://arxiv.org/abs/2209.10655" for more details. - """ - - def __init__(self, config: MegaConfig): - super().__init__() - - self.config = config - - self.embed_dim = config.hidden_size - self.ndim = config.ema_projection_size - self.bidirectional = config.bidirectional - self.truncation = config.truncation - self.scale = math.sqrt(1.0 / self.ndim) - - kernel_dim = 2 * config.hidden_size if self.bidirectional else config.hidden_size - # renamed delta (damping_factor) and alpha (decay_factor) to be more descriptive of what the parameters are doing - self.damping_factor = nn.Parameter(torch.Tensor(kernel_dim, self.ndim, 1)) - self.decay_factor = nn.Parameter(torch.Tensor(kernel_dim, self.ndim, 1)) - # renamed gamma (kernel_projection_matrix) and beta (ema_expansion_matrix) respectively to avoid HF renaming - # things and align with the paper's description of these params' behavior - self.ema_expansion_matrix = nn.Parameter(torch.Tensor(kernel_dim, self.ndim, 1)) - self.kernel_projection_matrix = nn.Parameter(torch.Tensor(kernel_dim, self.ndim)) - # renamed omega to residual_weight to describe what it's doing - self.residual_weight = nn.Parameter(torch.Tensor(config.hidden_size)) - self._kernel = None - self._coeffs = None - - def _compute_ema_coefficients(self): - self._coeffs = None - # convert the alpha and delta parameters (kernel_dim x EMA projection size x 1) to [0, 1] with sigmoid - damping_factor = torch.sigmoid(self.damping_factor) - decay_factor = torch.sigmoid(self.decay_factor) - previous_timestep_weight = 1.0 - damping_factor * decay_factor - return damping_factor, previous_timestep_weight - - def _compute_efficient_ema_kernel(self, length: int): - # computes the kernel used for efficient damped EMA applied via FFT convolution - self._kernel = None - # p and q have shape (kernel_dim x ema_projection_size x 1) - damping_factor, previous_timestep_weight = self._compute_ema_coefficients() - # extend the kernel to (kernel_dim X ema_projection_size X sequence_length) and - # multiply q by sequential ints up to the sequence length - vander = torch.arange(length).to(damping_factor).view(1, 1, length) * torch.log(previous_timestep_weight) - 
kernel = (damping_factor * self.ema_expansion_matrix) * torch.exp(vander) - # (kernel_dim X ema_projection_size X sequence_length) -> (kernel_dim, sequence_length) - return torch.einsum("dnl,dn->dl", kernel, self.kernel_projection_matrix * self.scale) - - def get_ema_coefficients(self): - if self.training: - return self._compute_ema_coefficients() - else: - if self._coeffs is None: - self._coeffs = self._compute_ema_coefficients() - return self._coeffs - - def get_ema_kernel(self, length: int): - kernel_size = length if self.truncation is None else min(self.truncation, length) - if self.training: - return self._compute_efficient_ema_kernel(kernel_size) - else: - if self._kernel is None or self._kernel.size(-1) < kernel_size: - self._kernel = self._compute_efficient_ema_kernel(kernel_size) - return self._kernel[..., :kernel_size] - - def fft_convolution(self, inputs, kernel, length): - # this is a wrapper for repeated use of EMA calculation via FFT (fast Fourier transform) convolution - inputs_fft = torch.fft.rfft(inputs.float(), n=2 * length) - kernel_fft = torch.fft.rfft(kernel.float(), n=2 * length) - convolved_sequence = torch.fft.irfft(inputs_fft * kernel_fft, n=2 * length) - return convolved_sequence - - def ema_step(self, inputs, length, past_state=None): - if length == 1: - return self.one_ema_step(inputs, past_state=past_state) - - # (kernel_dim X ema_projection_size X 1) - damping_factor, previous_timestep_weight = self.get_ema_coefficients() - # (kernel_dim X ema_projection_size X 1+sequence_length) - vander = torch.arange(length + 1).to(damping_factor).view(1, 1, length + 1) * torch.log( - previous_timestep_weight - ) - vander = torch.exp(vander) - if past_state is not None: - # (kernel_dim X ema_projection_size X sequence_length) * (kernel_dim X ema_projection_size X 1) - # -> (kernel_dim X ema_projection_size X sequence_length) - past_ema_proj = vander[:, :, 1:] * (self.kernel_projection_matrix * self.scale).unsqueeze(-1) - # past_state will be (batch_size, kernel_dim, ema_projection_size) - past_ema_state = torch.einsum("bdn,dnl->bdl", past_state, past_ema_proj) - # (kernel_dim X ema_projection_size) * (batch_size X kernel_dim X ema_projection_size) - # -> (batch_size X kernel_dim X ema_projection_size) - past_vandermonde = vander[:, :, -1] * past_state - else: - past_ema_state = None - past_vandermonde = None - - # (kernel_dim X ema_projection_size X sequence_length) - vander = vander[:, :, :-1] - kernel = (damping_factor * self.ema_expansion_matrix) * vander - kernel_proj = torch.einsum("dnl,dn->dl", kernel, self.kernel_projection_matrix * self.scale) - - ema_output = self.fft_convolution(inputs, kernel_proj, length=length)[..., 0:length] - ema_output = ema_output.type_as(inputs) - if past_ema_state is not None: - ema_output = ema_output + past_ema_state - - updated_hidden_state = torch.einsum("bdl,dnl->bdn", inputs, torch.flip(kernel, dims=[2])) - if past_vandermonde is not None: - updated_hidden_state = updated_hidden_state + past_vandermonde - # return a tuple: - # (sequence_length, batch_size, kernel_dim) - # (batch_size, kernel_dim, ema_projection_size) - return ema_output.permute(2, 0, 1), updated_hidden_state - - def one_ema_step(self, inputs, past_state=None): - damping_factor, previous_timestep_weight = self.get_ema_coefficients() - # (kernel_dim X ema_projection_size) x (batch_size X kernel_dim X 1) - # -> (batch_size X kernel_dim X ema_projection_size) - updated_state = (damping_factor * self.ema_expansion_matrix).squeeze(-1) * inputs - if past_state is not None: 
- updated_state = updated_state + previous_timestep_weight.squeeze(-1) * past_state - # (batch_size X kernel_dim) - out = torch.einsum("bdn,dn->bd", updated_state, self.kernel_projection_matrix * self.scale) - # (1 X batch_size X kernel_dim), (batch_size X kernel_dim X ema_projection_size) - return out.unsqueeze(0), updated_state - - def forward( - self, - inputs, - attention_mask: Optional[torch.Tensor] = None, - prev_state: Optional[torch.Tensor] = None, - use_cache: bool = False, - ) -> torch.Tensor: - """ - Mega's exponential moving average (EMA) sub-layer applied prior to single-headed (traditional) self-attention - - Args: - inputs (`torch.Tensor` of shape `(sequence_length, batch_size, hidden_size)`): - Hidden state / embedding input to update via EMA based on FFT convolution - attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indicates which inputs are to be ignored (mostly due to padding), where elements are either 1 for *not - masked* or 0 for *masked* - prev_state (`torch.Tensor` of shape `(batch_size, config.ndim)`, *optional*): - The hidden state returned from the previous timestep during incremental decoding. - use_cache (`bool`, default `False`): - Whether to perfom incremental decoding; uses `prev_state` as the prior timestep, and returns the - updated EMA hidden state for use in the next step - - Returns: - `tuple(torch.FloatTensor)` containing various elements depending on configuration ([`MegaConfig`]) and - inputs: - - **hidden_states** (`torch.FloatTensor` of shape `(sequence_length, batch_size, hidden_size)`) -- Hidden - states updated by EMA, with same shapes as inputs - - **updated_state** (*optional*, returned when `use_cache=True`) `torch.FloatTensor of shape `(batch_size, - config.ndim)` -- The incremental EMA state for use in the next step of incremental decoding - """ - - seq_len, bsz, embed_dim = inputs.size() - if embed_dim != self.embed_dim: - raise ValueError( - f"Unexpected embedding dimension received: input is {embed_dim}, model expects {self.embed_dim}" - ) - - # sequence_length X batch_size X hidden_size - residual = inputs * self.residual_weight - - # (sequence_length x batch_size x hidden_size) -> (batch_size x hidden_size x sequence_length) - inputs = inputs.permute(1, 2, 0) - # mask the input: output is a tensor with 0 in the masked positions - if attention_mask is not None: - inputs = inputs * (attention_mask.unsqueeze(1).type_as(inputs)) - - if self.bidirectional and use_cache: - raise RuntimeError("Bidirectional EMA does not support incremental state") - - if use_cache: - out, updated_state = self.ema_step(inputs, seq_len, past_state=prev_state) - - # (batch_size X hidden_size) -> (1 x batch_size x hidden_size) - out = F.silu(out + residual) - - # if incremental decoding, return the new state along with the output - return out, updated_state - else: - # (hidden_size x sequence_length) - kernel = self.get_ema_kernel(seq_len) - fft_len = seq_len - s_index = 0 - kernel_size = kernel.size(1) - if self.bidirectional: - # split the kernel for each direction of EMA - k1, k2 = torch.split(kernel, [self.embed_dim, self.embed_dim], dim=0) - # (hidden_size X 2*sequence_length - 1) - kernel = F.pad(k1, (kernel_size - 1, 0)) + F.pad(k2.flip(-1), (0, kernel_size - 1)) - inputs = F.pad(inputs, (kernel_size - 1, 0)) - fft_len = fft_len + kernel_size - 1 - s_index = 2 * kernel_size - 2 - - ema_output = self.fft_convolution(inputs, kernel, length=fft_len)[..., s_index : s_index + seq_len] - ema_output = 
ema_output.type_as(inputs) - # (batch_size X hidden_size X sequence_length) -> (sequence_length X batch_size X hidden_size) - gated_ema_output = F.silu(ema_output.permute(2, 0, 1) + residual) - - return gated_ema_output, None - - -class MegaGatedCrossAttention(nn.Module): - """ - Gated Structured State Attention for use in encoder-decoder model. See Mega paper for more details. Only - modifications from original implementation are variable names, removing the unnecessary `before_attn_fn` and - `static_kv` arguments, and the stateful representation of incremental decoder state. - """ - - def __init__(self, config: MegaConfig): - super().__init__() - - self.config = config - self.activation = ACT2FN[self.config.activation] - self.attention_activation = self.config.attention_activation - self.scaling = ( - self.config.shared_representation_size**-0.5 if self.attention_activation == "softmax" else None - ) - - self.dropout = MegaDropout(self.config.dropout_prob, is_featurewise=self.config.use_feature_dropout) - self.hidden_dropout = MegaDropout( - self.config.hidden_dropout_prob, is_featurewise=self.config.use_feature_dropout - ) - # Attention dropout is standard dropout - self.attention_dropout = MegaDropout(self.config.attention_probs_dropout_prob, is_featurewise=False) - - self.prenorm = self.config.normalize_before_mega - self.norm = MegaSequenceNorm( - self.config.normalization_type, self.config.hidden_size, affine=self.config.norm_affine - ) - - self.k_proj = nn.Linear(self.config.hidden_size, self.config.shared_representation_size) - self.v_proj = nn.Linear(self.config.hidden_size, self.config.hidden_size) - self.q_proj = nn.Linear( - self.config.hidden_size, 2 * self.config.hidden_size + self.config.shared_representation_size - ) - self.h_proj = nn.Linear(self.config.hidden_size, self.config.hidden_size) - - if self.config.relative_positional_bias == "simple": - self.rel_pos_bias = MegaSimpleRelativePositionalBias(config) - elif self.config.relative_positional_bias == "rotary": - self.rel_pos_bias = MegaRotaryRelativePositionalBias(config) - else: - raise ValueError("unknown relative position bias: {}".format(self.config.relative_positional_bias)) - - self.softmax = nn.Softmax(dim=-1) - - def element_attention(self, query, key, key_padding_mask, pidx): - bsz, src_len, _ = key.size() - tgt_len = query.size(1) if pidx is None else pidx + 1 - if key_padding_mask is not None: - # (batch_size X source_sequence_length) --> (batch_size X 1 X 1) - lengths = key_padding_mask.sum(dim=-1).view(bsz, 1, 1) - else: - lengths = src_len - - # (target_sequence_length X source_sequence_length) - bias = self.rel_pos_bias(max(tgt_len, src_len))[:, :src_len] - if pidx is not None: - if query.size(1) != 1: - raise ValueError("Position offset provided with queries longer than 1 token") - # source_sequence_length - bias = bias[pidx] - else: - # (target_sequence_length X source_sequence_length) - bias = bias[:tgt_len] - - # (batch_size X target_sequence_length X source_sequence_length) - qk = torch.bmm(query, key.transpose(1, 2)) / lengths + bias - - attn_weights = ACT2FN[self.attention_activation](qk).type_as(qk) - - if key_padding_mask is not None: - attn_weights = attn_weights * key_padding_mask.unsqueeze(1) - - return attn_weights - - def softmax_attention(self, query, key, key_padding_mask, pidx): - bsz, src_len, _ = key.size() - tgt_len = query.size(1) if pidx is None else pidx + 1 - - # (target_sequence_length X source_sequence_length) - bias = self.rel_pos_bias(max(tgt_len, src_len))[:, :src_len] - if 
pidx is not None: - if query.size(1) != 1: - raise ValueError("Position offset provided with queries longer than 1 token") - # source_sequence_length - bias = bias[pidx] - else: - # (target_sequence_length X source_sequence_length) - bias = bias[:tgt_len] - - # scaled attention - query = query * self.scaling - # (batch_size X target_sequence_length X source_sequence_length) - qk = torch.bmm(query, key.transpose(1, 2)) + bias - - if key_padding_mask is not None: - qk = qk.masked_fill((1 - key_padding_mask).unsqueeze(1).to(torch.bool), float("-inf")) - - attn_weights = self.softmax(qk).type_as(qk) - return attn_weights - - def forward( - self, - query, - key: Optional[torch.Tensor], - value: Optional[torch.Tensor], - key_padding_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: - """ - Gated cross-attention used in Mega - - Args: - query (`torch.Tensor` of shape `(target_sequence_length, batch_size, hidden_size)`): - The self (or target) sequence input used as query inputs for cross-attention - key (`torch.Tensor` of shape `(source_sequence_length, batch_size, hidden_size)`): - The cross (or source) sequence input with shape used as keys in cross-attention - value (`torch.Tensor` of shape `(source_sequence_length, batch_size, hidden_size)`): - The cross (or source) sequence input with shape used as values in cross-attention - key_padding_mask (`torch.LongTensor` of shape `(batch_size, source_sequence_length)`, *optional*): - Padding mask corresponding to the source sequence, where entries are 1 for *not masked* and 0 for - *masked* tokens - past_key_values (`tuple(torch.FloatTensor)`, *optional*): - If provided, the hidden state returned from the previous timestep during incremental decoding; expects - that prior cross-attention keys and values will be the last two items in the tuple - output_attentions (`bool`, defaults to `False`): - Whether or not to return the cross-attention weights. 
- use_cache (`bool`, defaults to `False`): - Whether to perfom incremental decoding; uses `prev_state` as the prior timestep, and returns the - updated EMA hidden state for use in the next step - - Returns: - `tuple(torch.FloatTensor)` containing various elements depending on configuration ([`MegaConfig`]) and - inputs: - - **hidden_states** (`torch.FloatTensor` of shape `(target_sequence_length, batch_size, hidden_size)`) -- - Hidden states from target sequence updated by gated cross-attention - - **attn_weights** (*optional*, returned when `output_attentions=True`) `torch.FloatTensor` of shape - `(batch_size, source_sequence_length, target_sequence_length)` -- The pairwise cross-attention weights - corresponding to each token in the source and target sequences - - **cross_key** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - source_sequence_length, config.shared_representation_size)` -- The cross-attention key state for use in - the next step of incremental decoding - - **cross_value** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - source_sequence_length, config.hidden_size)` -- The cross-attention value state for use in the next step - of incremental decoding - """ - - seq_len, bsz, embed_dim = query.size() - if embed_dim != self.config.hidden_size: - raise ValueError( - f"Unexpected embedding dimension received: input is {embed_dim} but expected {self.config.hidden_size}" - ) - - if past_key_values is not None: - # make sure the inputs only have a sequence length of 1 if we're doing incremental decoding - if seq_len != 1: - raise ValueError(f"Incremental decoding requested with self-sequence length > 1: {seq_len}") - # expect past_key_values to have (self_key, self_value, self_ema, cross_key, cross_value) - prev_cross_key, prev_cross_value = past_key_values[-2:] - key = value = None - - # use the self-attention cache to get the position id of the current step - prev_self_key = past_key_values[0] - num_incremental_steps = prev_self_key.size(1) + 1 - else: - prev_cross_key = prev_cross_value = None - # we still need the position id if we're doing incremental decoding (past_key_values will be None for the first step) - num_incremental_steps = 0 if use_cache and (seq_len == 1) else None - - full_query = query - if self.prenorm: - full_query = self.norm(full_query) - - # (target_sequence_length X batch_size X 2*hidden_size + shared_representation_size) - query_projected = self.q_proj(full_query) - # split the query projections into separate components - # - residual_weight is passed through sigmoid and sent through elementwise multiplication to the gated/weighted targets prior to being added to the query directly - # - target_gate is a silu-gated tensor that is multiplied by the attention-weighted target below prior to residual connection - # - attention_query is the part that is passed to the attention function - residual_weight, target_gate, attention_query = torch.split( - query_projected, - [self.config.hidden_size, self.config.hidden_size, self.config.shared_representation_size], - dim=-1, - ) - - # (target_sequence_length X batch_size X hidden_size) - residual_weight = torch.sigmoid(residual_weight) - target_gate = F.silu(target_gate) - - if key is None: - if value is not None: - raise ValueError("Key and value must be `None` simultaneously") - projected_key = projected_value = None - else: - # (source_sequence_length X batch_size X shared_representation_size) - projected_key = self.k_proj(key) - # 
(source_sequence_length X batch_size X hidden_size) - projected_value = self.activation(self.v_proj(key)) - - # (target_sequence_length X batch_size X shared_representation_size) - # -> (batch_size X target_sequence_length X shared_representation_size) - attention_query = attention_query.transpose(0, 1) - if projected_key is not None: - projected_key = projected_key.transpose(0, 1) - if projected_value is not None: - projected_value = projected_value.transpose(0, 1) - - # if we're doing incremental decoding, k and v are None and need to be overwritten with past values - if past_key_values is not None: - projected_key = prev_cross_key - projected_value = prev_cross_value - - # if we're returning the cache for later use, store these now for later return (can be done without having past_key_values provided) - if use_cache: - updated_cross_key = projected_key - updated_cross_value = projected_value - - ctx_len = projected_key.size(1) - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - if key_padding_mask.size(0) != bsz: - raise ValueError("Key padding mask does not align on the batch dimension") - if key_padding_mask.size(1) != ctx_len: - raise ValueError("Key padding mask does not align on the sequence length dimension") - - if self.attention_activation == "softmax": - attn_weights = self.softmax_attention( - attention_query, projected_key, key_padding_mask, num_incremental_steps - ) - else: - attn_weights = self.element_attention( - attention_query, projected_key, key_padding_mask, num_incremental_steps - ) - - projected_value = self.hidden_dropout(projected_value, batch_first=True) - kernel = self.attention_dropout(attn_weights) - # (batch_size X target_sequence_length X hidden_size) - # -> (target_sequence_length X batch_size X hidden_size) - weighted_targets = torch.bmm(kernel, projected_value).transpose(0, 1) - # (target_sequence_length X batch_size X hidden_size) - weighted_targets = self.activation(self.h_proj(weighted_targets * target_gate)) - weighted_targets = self.dropout(weighted_targets) - out = torch.addcmul(query, residual_weight, weighted_targets - query) - - if not self.prenorm: - out = self.norm(out) - - outputs = (out, attn_weights) if output_attentions else (out,) - if use_cache: - outputs = outputs + (updated_cross_key, updated_cross_value) - - return outputs - - -class MegaMovingAverageGatedAttention(nn.Module): - """ - Pure PyTorch implementation of Mega block; see https://arxiv.org/abs/2209.10655 and original fairseq implementation - at https://github.com/facebookresearch/mega (copyright Meta Research, licensed under MIT License) - - Differences from original implementation include hidden state refactor and fixed inconsistency with additive / - multiplicative attention masks - """ - - def __init__(self, config: MegaConfig): - super().__init__() - self.config = config - self.activation = ACT2FN[self.config.activation] - self.scaling = ( - self.config.shared_representation_size**-0.5 if self.config.attention_activation == "softmax" else None - ) - self.dropout = MegaDropout(self.config.dropout_prob, is_featurewise=self.config.use_feature_dropout) - self.hidden_dropout = MegaDropout( - self.config.hidden_dropout_prob, is_featurewise=self.config.use_feature_dropout - ) - # attention dropout is standard dropout - self.attention_dropout = MegaDropout(self.config.attention_probs_dropout_prob, 
is_featurewise=False) - - self.norm = MegaSequenceNorm( - self.config.normalization_type, self.config.hidden_size, affine=self.config.norm_affine - ) - self.ema_gate = MegaMultiDimensionDampedEma(config) - - self.v_proj = nn.Linear(self.config.hidden_size, self.config.intermediate_size) - self.mx_proj = nn.Linear( - self.config.hidden_size, - self.config.shared_representation_size + self.config.intermediate_size + 2 * self.config.hidden_size, - ) - self.h_proj = nn.Linear(self.config.intermediate_size, self.config.hidden_size) - - self.qk_weight = nn.Parameter(torch.Tensor(2, self.config.shared_representation_size)) - self.qk_bias = nn.Parameter(torch.Tensor(2, self.config.shared_representation_size)) - - if self.config.relative_positional_bias == "simple": - self.rel_pos_bias = MegaSimpleRelativePositionalBias(config) - elif self.config.relative_positional_bias == "rotary": - self.rel_pos_bias = MegaRotaryRelativePositionalBias(config) - else: - raise ValueError(f"Unknown relative positional bias: {self.config.relative_positional_bias}") - - self.softmax = nn.Softmax(dim=-1) - self.attention_function = ( - self.softmax_attention if self.config.attention_activation == "softmax" else self.element_attention - ) - - def element_attention(self, query, key, padding_mask, causal_mask): - """ - Apply element-wise attention via relu^2 or laplace. Same as original implementation but with standardized - causal attention mask. Expects the Hugging Face standard attention mask paradigm: 1 for not masked, and 0 for - masked. - """ - seq_len = key.size(2) - if padding_mask is not None: - # (batch_size X number of chunks X 1) - lengths = padding_mask.sum(-1, keepdim=True) - # (batch_size X number of chunks X 1 X 1) - lengths = lengths.clamp(min=1.0).unsqueeze(-1) - else: - lengths = seq_len - - if causal_mask is not None: - lengths = causal_mask.sum(dim=-1, keepdim=True) - - # (sequence_length X sequence_length) - bias = self.rel_pos_bias(seq_len) - if seq_len != query.size(2): - if query.size(2) != 1: - raise ValueError("Size mismatch between Q and K in element attention") - # (1 X sequence_length) - bias = bias[-1:] - - # (batch_size X number of chunks X sequence_length X sequence_length) - qk = torch.matmul(query, key.transpose(2, 3)) / lengths + bias - - attn_weights = ACT2FN[self.config.attention_activation](qk).type_as(qk) - - if padding_mask is not None: - attn_weights = attn_weights * padding_mask.unsqueeze(2) - - if causal_mask is not None: - attn_weights = attn_weights * causal_mask - - return attn_weights - - def softmax_attention(self, query, key, padding_mask, causal_mask): - "Standard softmax self-attention, as in the original Transformer paper" - seq_len = key.size(2) - # (sequence_length X sequence_length) - bias = self.rel_pos_bias(seq_len) - if seq_len != query.size(2): - if query.size(2) != 1: - raise ValueError("Size mismatch between Q and K in softmax attention") - # (1 X sequence_length) - bias = bias[-1:] - - # scaled attention - query = query * self.scaling - - # (batch_size x number of chunks x chunk_size x chunk_size) if chunking - # (batch_size x 1 x sequence_length x sequence_length) otherwise - qk = torch.matmul(query, key.transpose(2, 3)) + bias - - # apply causal mask (presumed to be 1/0 for not masked / masked) - # additive, but convert to 0/-inf (which is not explicitly in the Mega source code) - if causal_mask is not None: - additive_causal_mask = torch.zeros_like(causal_mask, dtype=qk.dtype) - additive_causal_mask = additive_causal_mask.masked_fill((1 - 
causal_mask).bool(), float("-inf")) - qk = qk + additive_causal_mask - - if padding_mask is not None: - # 1 for tokens which are *not masked* - # 0 for tokens which are *masked* - # replace masked tokens with -inf to make softmax ignore them - # need to invert the padding mask to match what mega original did - padding_mask = 1 - padding_mask - padding_mask_all = padding_mask.all(dim=-1, keepdim=True) - padding_mask = torch.logical_and(padding_mask, ~padding_mask_all) - qk = qk.masked_fill(padding_mask.unsqueeze(2).to(torch.bool), float("-inf")) - - attn_weights = self.softmax(qk).type_as(qk) - return attn_weights - - def forward( - self, - input, - padding_mask: Optional[torch.Tensor] = None, - causal_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[Tuple[torch.Tensor]] = None, - output_attentions=False, - use_cache=False, - ): - """ - Mega's self-attention block, which combines multi-headed EMA with traditional self-attention - - Args: - input (`torch.Tensor` of shape `(sequence_length, batch_size, hidden_size)`): - Hidden states to be updated by Mega's self-attention - padding_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indicates which inputs are to be ignored due to padding, where elements are either 1 for *not masked* - or 0 for *masked* - causal_mask (`torch.LongTensor` of shape `(sequence_length, sequence_length)`, *optional*): - Indicates which inputs are to be ignored due to causal attention, where elements are either 1 for *not - masked* or 0 for *masked* - past_key_values (`tuple(torch.Tensor)`, *optional*): - The hidden states returned from the previous timestep during incremental decoding; expects that - self-attention key, value, and EMA states are the first 3 entries in the tuple - output_attentions (`bool`, default `False`): - Whether to return self-attention weights - use_cache (`bool`, default `False`): - Whether to perfom incremental decoding; uses `past_key_values` as prior state, and returns the updated - states for use in the next step - - Returns: - `tuple(torch.FloatTensor)` containing various elements depending on configuration ([`MegaConfig`]) and - inputs: - - **hidden_states** (`torch.FloatTensor` of shape `(sequence_length, batch_size, hidden_size)`) -- Hidden - states from target sequence updated by Mega's self-attention - - **attn_weights** (*optional*, returned when `output_attentions=True`) `torch.FloatTensor` of shape - `(batch_size, 1, sequence_length, sequence_length)` -- The self-attention weights corresponding to how - each token in the input sequence attends to every other token - - **self_key** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - sequence_length, config.shared_representation_size)` -- The self-attention key state for use in the next - step of incremental decoding - - **self_value** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - sequence_length, config.hidden_size)` -- The self-attention value state for use in the next step of - incremental decoding - - **self_ema_state** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape - `(batch_size, config.ndim)` The incremental EMA state for use in the next step of incremental decoding. 
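A toy sketch of the gated update performed at the end of the block described above: `torch.addcmul(residual, gate, value - residual)` is a per-element interpolation `(1 - gate) * residual + gate * value`, where the gate comes from the sigmoid projection (`residual_weight` in the code). Names and shapes here are illustrative only.

import torch

residual = torch.randn(5, 2, 16)              # (seq_len, batch, hidden)
candidate = torch.randn(5, 2, 16)             # e.g. the attention-weighted output
gate = torch.sigmoid(torch.randn(5, 2, 16))   # residual_weight in the code above

out = torch.addcmul(residual, gate, candidate - residual)
assert torch.allclose(out, (1 - gate) * residual + gate * candidate, atol=1e-6)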
- """ - - seq_len, bsz, embed_dim = input.size() - if embed_dim != self.config.hidden_size: - raise ValueError(f"Input embedding dimension should be {self.config.hidden_size}; received {embed_dim}") - - # store inputs for residual connection and handle pre-norm if requested - residual = input - if self.config.normalize_before_mega: - input = self.norm(input) - - # (sequence_length X batch_size X hidden_size) -> (sequence_length X batch_size X intermediate_size) - value = self.activation(self.v_proj(input)) - - # unpack the incremental state if provided - # assumed to be (self K, self V, self EMA state, cross K, cross V) - # also assumes that incremental decoding is working one token at a time, so input sequence length must be 1 - if self.config.is_decoder and (past_key_values is not None): - if seq_len > 1: - raise ValueError(f"Incremental decoding only supports self sequence length of 1; received {seq_len}") - # the first 3 items in the saved states will be these regardless of whether cross-attention is present - prev_self_key, prev_self_value, prev_ema_state = past_key_values[0:3] - else: - prev_self_key = prev_self_value = prev_ema_state = None - - # ema output is (sequence_length x batch_size x hidden_size) - # updated_ema_state will be None if use_cache=False; otherwise (batch_size, config.ndim) - ema_out, updated_ema_state = self.ema_gate( - input, attention_mask=padding_mask, prev_state=prev_ema_state, use_cache=use_cache - ) - ema_out = self.dropout(ema_out) - - # (sequence_length X batch_size X hidden_size) - # -> (sequence_length X batch_size X 2*hidden_size + config.shared_representation_size + config.intermediate_size) - # - residual_weight -> sigmoid -> applied to residual connection in torch.addcmul - # - query_key_gates -> split into two components: query_key becomes query and key for attention input, gates becomes gating for self-attention output - # - intermediate_state -> added to weighted attention output, sent through activation, and has inputs subtracted during - # torch.addcmul to create the final layer output - base = self.mx_proj(ema_out) - residual_weight, query_key_gates, intermediate_state = torch.split( - base, - [ - self.config.hidden_size, - self.config.shared_representation_size + self.config.intermediate_size, - self.config.hidden_size, - ], - dim=-1, - ) - - # (sequence_length X batch_size X hidden_size) - residual_weight = torch.sigmoid(residual_weight) - - # (sequence_length X batch_size X shared_representation_size + intermediate_size) - query_key_gates = F.silu(query_key_gates) - - # split into two different tensors: one for Q/K usage and the other for gating self-attention - query_key, attention_gate = torch.split( - query_key_gates, [self.config.shared_representation_size, self.config.intermediate_size], dim=-1 - ) - - # (sequence_length X batch_size X shared_representation_size) - # -> (sequence_length X batch_size X 1 X shared_representation_size) - # -> (sequence_length X batch_size X 2 X shared_representation_size) - query_key = query_key.unsqueeze(2) * self.qk_weight + self.qk_bias - - # (sequence_length X batch_size X 2 X shared_representation_size) - # -> 2 tensors of (sequence_length X batch_size X shared_representation_size) - query, key = torch.unbind(query_key, dim=2) - - # (sequence_length X batch_size X dimension) - # -> (batch_size X sequence_length X dimension) - # where `dimension` is either shared_representation_size (queries and keys) or intermediate_size (values) - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = 
value.transpose(0, 1) - - if self.config.is_decoder: - # combine history and current to save updated state (if history is provided) - # when chunking is applied, the past states will be None at the end of the chunk, in - # which case, proceed as if no K/V history had been provided - # saved states are stored with shape (batch_size X sequence_length X dimension) - if prev_self_key is not None: - key = torch.cat([prev_self_key, key], dim=1) - if prev_self_value is not None: - value = torch.cat([prev_self_value, value], dim=1) - - # if not chunking, store as-is - if not self.config.use_chunking: - updated_self_key = key - updated_self_value = value - else: - curr_len = key.size(1) % self.config.chunk_size - if curr_len == 0: - # if we're chunking and have reached the end of a chunk, wipe out the saved state - updated_self_key = None - updated_self_value = None - else: - updated_self_key = key - updated_self_value = value - - ctx_len = key.size(1) # potentially differs from seq_len because of incremental decoding - if not self.config.use_chunking: - # if we're not chunking, treat the entire sequence as one long chunk - # (batch_size X sequence_length X dimension) -> (batch_size X 1 X sequence_length X dimension) - query = query.unsqueeze(1) - key = key.unsqueeze(1) - value = value.unsqueeze(1) - if padding_mask is not None: - # (batch_size X sequence_length) -> (batch_size X 1 X sequence_length) - padding_mask = padding_mask.unsqueeze(1) - else: - # otherwise, split the sequences in the batch into `n_chunks` chunks of size `chunk_size` - if seq_len < self.config.chunk_size: - query = query.unsqueeze(1) - else: - # (batch_size X sequence_length X dimension) -> (batch_size X n_chunks X chunk_size X dimension) - n_chunks = seq_len // self.config.chunk_size - query = query.reshape(bsz, n_chunks, self.config.chunk_size, self.config.shared_representation_size) - - if ctx_len < self.config.chunk_size: - key = key.unsqueeze(1) - value = value.unsqueeze(1) - if padding_mask is not None: - padding_mask = padding_mask.unsqueeze(1) - else: - # (batch_size X sequence_length X dimension) -> (batch_size X n_chunks X chunk_size X dimension) - n_chunks = ctx_len // self.config.chunk_size - key = key.reshape(bsz, n_chunks, self.config.chunk_size, self.config.shared_representation_size) - value = value.reshape(bsz, n_chunks, self.config.chunk_size, self.config.intermediate_size) - if padding_mask is not None: - padding_mask = padding_mask.view(bsz, n_chunks, self.config.chunk_size) - - # this is in the original Mega implementation to work around fork/join parallelism not supporting optional types - if padding_mask is not None and padding_mask.dim() == 0: - padding_mask = None - - attn_weights = self.attention_function(query, key, padding_mask=padding_mask, causal_mask=causal_mask) - - value = self.hidden_dropout(value, batch_first=True) - kernel = self.attention_dropout(attn_weights) - - # (batch_size x n_chunks x chunk_size x intermediate_size) -> (sequence_length X batch_size X intermediate_size) - weighted_self_output = ( - torch.matmul(kernel, value).view(bsz, seq_len, self.config.intermediate_size).transpose(0, 1) - ) - - # (sequence_length X batch_size X intermediate_size) -> (sequence_length X batch_size X hidden_size) - weighted_self_output = self.activation(intermediate_state + self.h_proj(weighted_self_output * attention_gate)) - weighted_self_output = self.dropout(weighted_self_output) - # (sequence_length X batch_size X hidden_size) - out = torch.addcmul(residual, residual_weight, 
weighted_self_output - residual) - - if not self.config.normalize_before_mega: - out = self.norm(out) - - return_values = (out, attn_weights) if output_attentions else (out,) - - if self.config.is_decoder: - return_values = return_values + (updated_self_key, updated_self_value, updated_ema_state) - - return return_values - - -class MegaNormalizedFeedForwardNetwork(nn.Module): - """ - Normalized feed-forward network used in Mega blocks. Left as-is from original Mega repo aside from retrieving args - from Hugging Face config - """ - - def __init__(self, config: MegaConfig): - super().__init__() - - self.config = config - self.hidden_dim = config.nffn_hidden_size - self.act_fn = config.activation - self.activation = ACT2FN[config.activation] - - self.dropout = MegaDropout(self.config.dropout_prob, is_featurewise=self.config.use_feature_dropout) - self.hidden_dropout = MegaDropout( - self.config.nffn_activation_dropout_prob, is_featurewise=self.config.use_feature_dropout - ) - - self.prenorm = self.config.normalize_before_ffn - self.norm = MegaSequenceNorm( - self.config.normalization_type, self.config.hidden_size, affine=self.config.norm_affine - ) - - self.fc1 = nn.Linear(self.config.hidden_size, self.config.nffn_hidden_size) - self.fc2 = nn.Linear(self.config.nffn_hidden_size, self.config.hidden_size) - - def forward(self, inputs): - residual = inputs - - if self.prenorm: - inputs = self.norm(inputs) - - hidden = self.activation(self.fc1(inputs)) - hidden = self.hidden_dropout(hidden) - output = self.fc2(hidden) - output = self.dropout(output) - output = output + residual - - if not self.prenorm: - output = self.norm(output) - - return output - - -class MegaBlock(nn.Module): - def __init__(self, config: MegaConfig): - super().__init__() - self.seq_len_dim = 1 - self.mega_layer = MegaMovingAverageGatedAttention(config) - self.nffn = MegaNormalizedFeedForwardNetwork(config) if config.use_normalized_ffn else None - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.cross_attn = MegaGatedCrossAttention(config) - else: - self.cross_attn = None - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.LongTensor] = None, - causal_mask: Optional[torch.LongTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[torch.FloatTensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: bool = False, - ) -> Tuple[torch.Tensor]: - """ - A single Mega layer: either encoder or decoder, with optional cross-attention and optional normalized - feed-forward layer - - Args: - hidden_states (`torch.Tensor` of shape `(target_sequence_length, batch_size, hidden_size)`): - Hidden states to be updated by the Mega block - attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Indicates which entries in the self/target sequence are to be ignored (mostly due to padding), where - elements are either 1 for *not masked* or 0 for *masked*. Causal attention is enforced internally. 
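A shape-only sketch of the chunked layout used above (assuming the sequence length is a multiple of `chunk_size`): queries, keys, and values are reshaped so attention is computed only within each chunk, keeping the cost linear in sequence length. The scaling and bias terms of the real module are omitted.

import torch

batch, seq_len, chunk_size, dim = 2, 12, 4, 16
n_chunks = seq_len // chunk_size

q = torch.randn(batch, seq_len, dim).reshape(batch, n_chunks, chunk_size, dim)
k = torch.randn(batch, seq_len, dim).reshape(batch, n_chunks, chunk_size, dim)
v = torch.randn(batch, seq_len, dim).reshape(batch, n_chunks, chunk_size, dim)

attn = torch.softmax(torch.matmul(q, k.transpose(2, 3)) / dim ** 0.5, dim=-1)
out = torch.matmul(attn, v)                    # (batch, n_chunks, chunk_size, dim)
out = out.reshape(batch, seq_len, dim)         # back to the full sequence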
- causal_mask (`torch.LongTensor` of shape `(sequence_length, sequence_length)`, *optional*): - Indicates which inputs are to be ignored due to causal attention, where elements are either 1 for *not - masked* or 0 for *masked* - encoder_hidden_states (`torch.Tensor`, of shape `(source_sequence_length, batch_size, hidden_size)`, *optional*): - Encoder hidden states to be used for cross-attention (and required for encoder-decoder model setup) - encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, source_sequence_length)`, *optional*): - Indicates which entries in the cross/source sequence are to be ignored (mostly due to padding), where - elements are either 1 for *not masked* or 0 for *masked*. - past_key_value (`tuple(torch.Tensor)`, *optional*): - The hidden states returned from the previous timestep during incremental decoding; expects that - self-attention key, value, and EMA states are the first 3 entries in the tuple, and (if doing - cross-attention) cross-attention key and value are the last 2 entries in the tuple - output_attentions (`bool`, default `False`): - Whether to return self-attention weights - use_cache (`bool`, default `False`): - Whether to perfom incremental decoding; uses `past_key_value` as prior state, and returns the updated - states for use in the next step - - Returns: - `tuple(torch.FloatTensor)` containing various elements depending on configuration ([`MegaConfig`]) and - inputs: - - **hidden_states** (`torch.FloatTensor` of shape `(target_sequence_length, batch_size, hidden_size)`) -- - Hidden states from target sequence updated by Mega - - **self_attn_weights** (*optional*, returned when `output_attentions=True`) `torch.FloatTensor` of shape - `(batch_size, 1, target_sequence_length, target_sequence_length)` -- The self-attention weights - corresponding to how each token in the input sequence attends to every other token - - **cross_attn_weights** (*optional*, returned when `output_attentions=True` and - `config.add_cross_attention=True`) `torch.FloatTensor` of shape `(batch_size, source_sequence_length, - target_sequence_length)` -- Pairwise cross-attention weights between every entry in the source sequence - and target sequence - - **self_key** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - sequence_length, config.shared_representation_size)` -- The self-attention key state for use in the next - step of incremental decoding - - **self_value** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape `(batch_size, - sequence_length, config.hidden_size)` -- The self-attention value state for use in the next step of - incremental decoding - - **self_ema_state** (*optional*, returned when `use_cache=True`) `torch.FloatTensor` of shape - `(batch_size, config.ndim)` The incremental EMA state for use in the next step of incremental decoding. 
- - **cross_key** (*optional*, returned when `use_cache=True` and `config.is_decoder=True`) - `torch.FloatTensor` of shape `(batch_size, source_sequence_length, config.shared_representation_size)` -- - The cross-attention key state for use in the next step of incremental decoding - - **cross_value** (*optional*, returned when `use_cache=True` and `config.is_decoder=True`) - `torch.FloatTensor` of shape `(batch_size, source_sequence_length, config.hidden_size)` -- The - cross-attention value state for use in the next step of incremental decoding - """ - - # incremental decoding in the MegaMultiDimensionDampedEma module requires that the attention mask has the same - # sequence length as the input tensor; if we're caching incremental states, we assume the input - # sequence length is 1 (Mega will break otherwise), so we take the padding mask for the final - # token in the input (mask is received as [batch X sequence length]) - if use_cache and (past_key_value is not None) and (attention_mask is not None): - mega_padding_mask = attention_mask[:, -1].unsqueeze(-1) - else: - mega_padding_mask = attention_mask - - mega_outputs = self.mega_layer( - input=hidden_states, - padding_mask=mega_padding_mask, - causal_mask=causal_mask, - past_key_values=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - new_hidden_states = mega_outputs[0] - self_key, self_value, self_ema_state = mega_outputs[-3:] if use_cache else (None, None, None) - self_attention_weights = mega_outputs[1] if output_attentions else None - - # optional cross attention - if self.cross_attn is not None: - if encoder_hidden_states is None: - raise ValueError("Requested cross-attention without providing encoder hidden states") - - cross_attn_outputs = self.cross_attn( - query=new_hidden_states, - key=encoder_hidden_states, - value=encoder_hidden_states, - key_padding_mask=encoder_attention_mask, - past_key_values=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - # update the hidden state from cross attention - new_hidden_states = cross_attn_outputs[0] - # store cross-attention k/v if caching - cross_key, cross_value = cross_attn_outputs[-2:] if use_cache else (None, None) - cross_attention_weights = cross_attn_outputs[1] if output_attentions else None - - # optional NFFN follows cross attention - if self.nffn is not None: - new_hidden_states = self.nffn(new_hidden_states) - - outs = (new_hidden_states,) - if output_attentions: - outs = outs + (self_attention_weights,) - if self.cross_attn is not None: - outs = outs + (cross_attention_weights,) - - if use_cache: - new_key_values = ( - self_key, - self_value, - self_ema_state, - ) - if self.cross_attn is not None: - new_key_values = new_key_values + (cross_key, cross_value) - - outs = outs + (new_key_values,) - - return outs - - -# copied from transformers.models.roberta.modeling_roberta.RobertaPooler with Roberta->Mega -class MegaPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
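- # (Copied from the RoBERTa pooler: dense + tanh over the first token's hidden state; the pooled
- #  output is what sequence-level heads such as MegaForMultipleChoice below consume.)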
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class MegaPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = MegaConfig - base_model_prefix = "mega" - supports_gradient_checkpointing = False - _no_split_modules = ["MegaMovingAverageGatedAttention"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, MegaMultiDimensionDampedEma): - with torch.no_grad(): - # delta & alpha - nn.init.normal_(module.damping_factor, mean=0.0, std=self.config.ema_delta_alpha_range) - nn.init.normal_(module.decay_factor, mean=0.0, std=self.config.ema_delta_alpha_range) - # beta [1, -1, 1, -1, ...] seems more stable. - val = torch.ones(self.config.ema_projection_size, 1) - if self.config.ema_projection_size > 1: - idx = torch.tensor(list(range(1, self.config.ema_projection_size, 2))) - val.index_fill_(0, idx, -1.0) - module.ema_expansion_matrix.normal_(mean=0.0, std=self.config.ema_beta_range).add_(val) - # gamma & omega - nn.init.normal_(module.kernel_projection_matrix, mean=0.0, std=self.config.ema_gamma_omega_range) - nn.init.normal_(module.residual_weight, mean=0.0, std=self.config.ema_gamma_omega_range) - elif isinstance(module, MegaSimpleRelativePositionalBias): - nn.init.normal_(module.rel_pos_bias, mean=0.0, std=self.config.initializer_range) - elif isinstance(module, MegaRotaryRelativePositionalBias): - nn.init.normal_(module.alpha, mean=0.0, std=self.config.initializer_range) - nn.init.normal_(module.b_param, mean=0.0, std=self.config.initializer_range) - elif isinstance(module, MegaScaleNorm): - if self.config.norm_affine: - nn.init.constant_(module.scalar, 1.0) - elif isinstance(module, MegaRMSNorm): - if self.config.norm_affine: - nn.init.constant_(module.weight, 1.0) - elif isinstance(module, MegaMovingAverageGatedAttention): - # linear layers covered separately by the generic nn.Linear init below - nn.init.normal_(module.qk_weight, mean=0.0, std=self.config.initializer_range) - nn.init.constant_(module.qk_bias, 0.0) - elif isinstance(module, nn.Linear): - # initializes all linear layers in the entire network - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -MEGA_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`MegaConfig`]): Model configuration class with all the parameters of the - model. Initializing with a config file does not load the weights associated with the model, only the - configuration. 
Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MEGA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - This parameter can only be used when the model is initialized with `add_token_type_embeddings` parameter - set to `True`. All the value in this tensor should be always < config.type_vocab_size. - - [What are token type IDs?](../glossary#token-type-ids) - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare MEGA Model transformer outputting raw hidden-states without any specific head on top.", - MEGA_START_DOCSTRING, -) -class MegaModel(MegaPreTrainedModel): - """ - - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added after self-attention, following the architecture described in *Mega: Moving Average - Equipped Gated Attention*_ by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, - Jonathan May, and Luke Zettlemoyer - - To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to - `True` and `bidirectional` set to `False`. To be used in a Seq2Seq model, the model needs to initialized with both - `is_decoder=True` and `bidirectional=False` argument as well as `add_cross_attention` set to `True`; an - `encoder_hidden_states` is then expected as an input to the forward pass. - - .. 
_*Mega: Moving Average Equipped Gated Attention*: https://arxiv.org/abs/2209.10655 - - """ - - def __init__(self, config: MegaConfig, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embedding_layer = MegaEmbeddings(config) - self.layers = nn.ModuleList([MegaBlock(config) for _ in range(config.num_hidden_layers)]) - - self.pooler = MegaPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing (retained from RoBERTa code) - self.post_init() - - def get_input_embeddings(self): - return self.embedding_layer.word_embeddings - - def set_input_embeddings(self, value): - self.embedding_layer.word_embeddings = value - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). 
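- Example (a minimal sketch; it reuses the `"mnaylor/mega-base-wikitext"` checkpoint referenced
- elsewhere in this module, and output shapes assume the default configuration):
-
- ```python
- >>> from transformers import AutoTokenizer, MegaModel
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
- >>> model = MegaModel.from_pretrained("mnaylor/mega-base-wikitext")
-
- >>> # encode a sentence and run a forward pass
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
- >>> with torch.no_grad():
- ...     outputs = model(**inputs)
-
- >>> # (batch_size, sequence_length, hidden_size)
- >>> last_hidden_state = outputs.last_hidden_state
- ```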
- """ - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - device = input_ids.device - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - device = inputs_embeds.device - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if self.config.use_chunking: - input_shape = torch.tensor([input_shape[0], self.config.chunk_size]) - - batch_size, sequence_length = input_shape - - if self.config.use_chunking and (sequence_length > self.config.chunk_size): - if sequence_length % self.config.chunk_size != 0: - raise ValueError( - f"config.use_chunking is activated; input sequence length must be shorter than or a multiple of config.chunk_size\nreceived sequence length of {sequence_length} with chunk size {self.config.chunk_size}" - ) - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - - # Mega expects the causal mask to be a 2D square matrix of (from) x (to) over the input sequence length - # the HF utility function generates a 3D causal mask which includes batch size, so we'll create a dummy - # mask with the correct device and all ones - temp_mask_for_extension = torch.ones((1, sequence_length), dtype=torch.long, device=device) - causal_mask = self.create_extended_attention_mask_for_decoder(input_shape, temp_mask_for_extension) - - # get rid of batch dimension in the generated mask; result is (sequence_length X sequence_length) - causal_mask = causal_mask.squeeze(0) - else: - use_cache = False - causal_mask = None - - # if using cache, make sure we have a tuple of tuples which matches the length of our hidden layers - if (past_key_values is not None) and (len(past_key_values) != self.config.num_hidden_layers): - raise ValueError( - f"Received past key/value cache with size mismatch; expected {self.config.num_hidden_layers}, received {len(past_key_values)}" - ) - - # get embeddings (batch X sequence length X embed dim) - embedding_output = self.embedding_layer( - input_ids=input_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds - ) - - # transpose for Mega --> (seq len X batch X embed dim) - hidden_states = embedding_output.transpose(0, 1) - - # we expect encoder hidden states to also have batch first in line - # with typical Hugging Face behavior (which is also how we return them) - # Mega expects sequence length first, so do the same transpose here - if encoder_hidden_states is not None: - encoder_hidden_states = encoder_hidden_states.transpose(0, 1) - - # pass through mega layers - all_hidden_states = (embedding_output,) if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - next_decoder_cache = () if use_cache else None - for i, mega_layer in enumerate(self.layers): - current_decoder_cache = past_key_values[i] if past_key_values is not None else None - mega_outputs = mega_layer( - 
hidden_states=hidden_states, - attention_mask=attention_mask, - causal_mask=causal_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_value=current_decoder_cache, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - hidden_states = mega_outputs[0] - if output_hidden_states: - # store layer-wise hidden states in the way that the user expects - # (seq len X batch X embed dim) --> (batch X seq len X embed dim) - all_hidden_states += (hidden_states.transpose(0, 1),) - if output_attentions: - self_attn_weights = mega_outputs[1] - all_self_attentions += (self_attn_weights,) - if self.config.add_cross_attention: - cross_attn_weights = mega_outputs[2] - all_cross_attentions += (cross_attn_weights,) - if use_cache: - updated_cache = mega_outputs[-1] - next_decoder_cache += (updated_cache,) - - # transpose final hidden states - hidden_states = hidden_states.transpose(0, 1) - - # optional pooling layer - pooled_output = self.pooler(hidden_states) if self.pooler is not None else None - - if not return_dict: - return (hidden_states, pooled_output) + ( - all_hidden_states, - next_decoder_cache, - all_self_attentions, - all_cross_attentions, - ) - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=hidden_states, - pooler_output=pooled_output, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -@add_start_docstrings( - """MEGA Model with a `language modeling` head on top for CLM fine-tuning.""", MEGA_START_DOCSTRING -) -class MegaForCausalLM(MegaPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config: MegaConfig): - super().__init__(config) - - if not config.is_decoder: - logger.warning("If you want to use `MegaForCausalLM` as a standalone, add `is_decoder=True.`") - - self.mega = MegaModel(config, add_pooling_layer=False) - - if config.add_lm_hidden_dense_layer: - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.hidden_activation = nn.Tanh() - else: - self.dense = None - self.hidden_activation = None - - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - past_key_values: Tuple[Tuple[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], CausalLMOutputWithCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the 
encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are - ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext") - >>> config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext") - >>> config.is_decoder = True - >>> config.bidirectional = False - >>> model = MegaForCausalLM.from_pretrained( - ... "mnaylor/mega-base-wikitext", config=config, ignore_mismatched_sizes=True - ... 
) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - outputs = self.mega( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - if self.dense is not None: - sequence_output = self.dense(sequence_output) - sequence_output = self.hidden_activation(sequence_output) - - prediction_scores = self.lm_head(sequence_output) - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_shape) - - # cut decoder_input_ids if past is used - if past_key_values is not None: - input_ids = input_ids[:, -1:] - - return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values} - - def _reorder_cache(self, past_key_values, beam_idx): - reordered_past = () - for layer_past in past_key_values: - reordered_past += ( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past), - ) - return reordered_past - - -@add_start_docstrings("""MEGA Model with a `language modeling` head on top.""", MEGA_START_DOCSTRING) -class MegaForMaskedLM(MegaPreTrainedModel): - _tied_weights_keys = ["mlm_head.weight"] - - def __init__(self, config: MegaConfig): - super().__init__(config) - - if config.is_decoder: - logger.warning( - "If you want to use `MegaForMaskedLM`, set `config.is_decoder=False` for " - "bi-directional self-attention." 
- ) - - self.mega = MegaModel(config, add_pooling_layer=False) - if config.add_lm_hidden_dense_layer: - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.hidden_activation = nn.Tanh() - else: - self.dense = None - self.hidden_activation = None - self.mlm_head = nn.Linear(config.hidden_size, config.vocab_size) - self.dropout = nn.Dropout(config.dropout_prob) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.mlm_head - - def set_output_embeddings(self, new_embeddings): - self.mlm_head = new_embeddings - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - mask="", - expected_output="' Paris'", - expected_loss=0.1, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MaskedLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - kwargs (`Dict[str, any]`, optional, defaults to *{}*): - Used to hide legacy arguments that have been deprecated. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.mega( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - if self.dense is not None: - sequence_output = self.dense(sequence_output) - sequence_output = self.hidden_activation(sequence_output) - prediction_scores = self.mlm_head(sequence_output) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MEGA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. 
- """, - MEGA_START_DOCSTRING, -) -class MegaForSequenceClassification(MegaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.mega = MegaModel(config, add_pooling_layer=False) - self.classifier = MegaClassificationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.mega( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MEGA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. 
- """, - MEGA_START_DOCSTRING, -) -class MegaForMultipleChoice(MegaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.mega = MegaModel(config) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MultipleChoiceModelOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See - `input_ids` above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - flat_inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - outputs = self.mega( - flat_input_ids, - token_type_ids=flat_token_type_ids, - attention_mask=flat_attention_mask, - inputs_embeds=flat_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - MEGA Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - MEGA_START_DOCSTRING, -) -class MegaForTokenClassification(MegaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.mega = MegaModel(config, add_pooling_layer=False) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.mega( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -# copied from transformers.models.roberta.modeling_roberta.RobertaClassificationHead with Roberta->Mega -class MegaClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.out_proj = nn.Linear(config.hidden_size, config.num_labels) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = torch.tanh(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@add_start_docstrings( - """ - MEGA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
- """, - MEGA_START_DOCSTRING, -) -class MegaForQuestionAnswering(MegaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.mega = MegaModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(MEGA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.mega( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py deleted file mode 100644 index 88d54f10e2605bd90131b57ab82c3174477717ad..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py +++ /dev/null @@ -1,358 +0,0 @@ -#################################################################################################### - -# Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -#################################################################################################### - -# -# Note: If when running this conversion script you're getting an exception: -# ModuleNotFoundError: No module named 'megatron.model.enums' -# you need to tell python where to find the clone of Megatron-LM, e.g.: -# -# cd /tmp -# git clone https://github.com/NVIDIA/Megatron-LM -# PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py ... 
-# -# if you already have it cloned elsewhere, simply adjust the path to the existing path -# -# If the training was done using a Megatron-LM fork, e.g., -# https://github.com/microsoft/Megatron-DeepSpeed/ then chances are that you need to have that one -# in your path, i.e., /path/to/Megatron-DeepSpeed/ -# - -import argparse -import os -import re -import zipfile - -import torch - -from transformers import AutoTokenizer, GPT2Config - - -#################################################################################################### - - -def recursive_print(name, val, spaces=0): - # Format the message. - if name is None: - msg = None - else: - fmt = "." * max(0, spaces - 2) + "# {:" + str(50 - spaces) + "s}" - msg = fmt.format(name) - - # Print and recurse (if needed). - if isinstance(val, dict): - if msg is not None: - print(msg) - for k in val.keys(): - recursive_print(k, val[k], spaces + 2) - elif isinstance(val, torch.Tensor): - print(msg, ":", val.size()) - else: - print(msg, ":", val) - - -def fix_query_key_value_ordering(param, checkpoint_version, num_splits, num_heads, hidden_size): - # Permutes layout of param tensor to [num_splits * num_heads * hidden_size, :] - # for compatibility with later versions of NVIDIA Megatron-LM. - # The inverse operation is performed inside Megatron-LM to read checkpoints: - # https://github.com/NVIDIA/Megatron-LM/blob/v2.4/megatron/checkpointing.py#L209 - # If param is the weight tensor of the self-attention block, the returned tensor - # will have to be transposed one more time to be read by HuggingFace GPT2. - input_shape = param.size() - if checkpoint_version == 1.0: - # version 1.0 stores [num_heads * hidden_size * num_splits, :] - saved_shape = (num_heads, hidden_size, num_splits) + input_shape[1:] - param = param.view(*saved_shape) - param = param.transpose(0, 2) - param = param.transpose(1, 2).contiguous() - elif checkpoint_version >= 2.0: - # other versions store [num_heads * num_splits * hidden_size, :] - saved_shape = (num_heads, num_splits, hidden_size) + input_shape[1:] - param = param.view(*saved_shape) - param = param.transpose(0, 1).contiguous() - param = param.view(*input_shape) - return param - - -#################################################################################################### - - -def convert_megatron_checkpoint(args, input_state_dict, config): - # The converted output model. - output_state_dict = {} - - # old versions did not store training args - ds_args = input_state_dict.get("args", None) - if ds_args is not None: - # do not make the user write a config file when the exact dimensions/sizes are already in the checkpoint - # from pprint import pprint - # pprint(vars(ds_args)) - - config.vocab_size = ds_args.padded_vocab_size - config.n_positions = ds_args.max_position_embeddings - config.n_embd = ds_args.hidden_size - config.n_layer = ds_args.num_layers - config.n_head = ds_args.num_attention_heads - config.n_inner = ds_args.ffn_hidden_size - # pprint(config) - - # The number of heads. - heads = config.n_head - # The hidden_size per head. - hidden_size_per_head = config.n_embd // config.n_head - # Megatron-LM checkpoint version - if "checkpoint_version" in input_state_dict.keys(): - checkpoint_version = input_state_dict["checkpoint_version"] - else: - checkpoint_version = 0.0 - - # The model. - model = input_state_dict["model"] - # The language model. - lm = model["language_model"] - # The embeddings. - embeddings = lm["embedding"] - - # The word embeddings. 
- word_embeddings = embeddings["word_embeddings"]["weight"] - # Truncate the embedding table to vocab_size rows. - word_embeddings = word_embeddings[: config.vocab_size, :] - output_state_dict["transformer.wte.weight"] = word_embeddings - - # The position embeddings. - pos_embeddings = embeddings["position_embeddings"]["weight"] - # Read the causal mask dimension (seqlen). [max_sequence_length, hidden_size] - n_positions = pos_embeddings.size(0) - if n_positions != config.n_positions: - raise ValueError( - f"pos_embeddings.max_sequence_length={n_positions} and config.n_positions={config.n_positions} don't match" - ) - # Store the position embeddings. - output_state_dict["transformer.wpe.weight"] = pos_embeddings - - # The transformer. - transformer = lm["transformer"] if "transformer" in lm.keys() else lm["encoder"] - - # The regex to extract layer names. - layer_re = re.compile(r"layers\.(\d+)\.([a-z0-9_.]+)\.([a-z]+)") - - # The simple map of names for "automated" rules. - megatron_to_transformers = { - "attention.dense": ".attn.c_proj.", - "self_attention.dense": ".attn.c_proj.", - "mlp.dense_h_to_4h": ".mlp.c_fc.", - "mlp.dense_4h_to_h": ".mlp.c_proj.", - } - - # Extract the layers. - for key, val in transformer.items(): - # Match the name. - m = layer_re.match(key) - - # Stop if that's not a layer - if m is None: - break - - # The index of the layer. - layer_idx = int(m.group(1)) - # The name of the operation. - op_name = m.group(2) - # Is it a weight or a bias? - weight_or_bias = m.group(3) - - # The name of the layer. - layer_name = f"transformer.h.{layer_idx}" - - # For layernorm(s), simply store the layer norm. - if op_name.endswith("layernorm"): - ln_name = "ln_1" if op_name.startswith("input") else "ln_2" - output_state_dict[layer_name + "." + ln_name + "." + weight_or_bias] = val - - # Transpose the QKV matrix. - elif ( - op_name == "attention.query_key_value" or op_name == "self_attention.query_key_value" - ) and weight_or_bias == "weight": - # Insert a tensor of 1x1xDxD bias. - causal_mask = torch.tril(torch.ones((n_positions, n_positions), dtype=torch.float16)).view( - 1, 1, n_positions, n_positions - ) - output_state_dict[layer_name + ".attn.bias"] = causal_mask - - # Insert a "dummy" tensor for masked_bias. - masked_bias = torch.tensor(-1e4, dtype=torch.float16) - output_state_dict[layer_name + ".attn.masked_bias"] = masked_bias - - out_val = fix_query_key_value_ordering(val, checkpoint_version, 3, heads, hidden_size_per_head) - # Megatron stores (3*D) x D but transformers-GPT2 expects D x 3*D. - out_val = out_val.transpose(0, 1).contiguous() - # Store. - output_state_dict[layer_name + ".attn.c_attn.weight"] = out_val - - # Transpose the bias. - elif ( - op_name == "attention.query_key_value" or op_name == "self_attention.query_key_value" - ) and weight_or_bias == "bias": - out_val = fix_query_key_value_ordering(val, checkpoint_version, 3, heads, hidden_size_per_head) - # Store. No change of shape. - output_state_dict[layer_name + ".attn.c_attn.bias"] = out_val - - # Transpose the weights. - elif weight_or_bias == "weight": - out_name = megatron_to_transformers[op_name] - output_state_dict[layer_name + out_name + "weight"] = val.transpose(0, 1) - - # Copy the bias. - elif weight_or_bias == "bias": - out_name = megatron_to_transformers[op_name] - output_state_dict[layer_name + out_name + "bias"] = val - - # DEBUG. - assert config.n_layer == layer_idx + 1 - - # The final layernorm. 
- output_state_dict["transformer.ln_f.weight"] = transformer["final_layernorm.weight"] - output_state_dict["transformer.ln_f.bias"] = transformer["final_layernorm.bias"] - - # For LM head, transformers' wants the matrix to weight embeddings. - output_state_dict["lm_head.weight"] = word_embeddings - - # It should be done! - return output_state_dict - - -#################################################################################################### - - -def main(): - # Create the argument parser. - parser = argparse.ArgumentParser() - parser.add_argument("--print-checkpoint-structure", action="store_true") - parser.add_argument( - "path_to_checkpoint", - type=str, - help="Path to the checkpoint file (.zip archive or direct .pt file)", - ) - parser.add_argument( - "--config_file", - default="", - type=str, - help="An optional config json file describing the pre-trained model.", - ) - args = parser.parse_args() - - # Extract the basename. - basename = os.path.dirname(args.path_to_checkpoint) - - # Load the model. - # the .zip is very optional, let's keep it for backward compatibility - print(f"Extracting PyTorch state dictionary from {args.path_to_checkpoint}") - if args.path_to_checkpoint.endswith(".zip"): - with zipfile.ZipFile(args.path_to_checkpoint, "r") as checkpoint: - with checkpoint.open("release/mp_rank_00/model_optim_rng.pt") as pytorch_dict: - input_state_dict = torch.load(pytorch_dict, map_location="cpu") - else: - input_state_dict = torch.load(args.path_to_checkpoint, map_location="cpu") - - ds_args = input_state_dict.get("args", None) - - # Read the config, or default to the model released by NVIDIA. - if args.config_file == "": - if ds_args is not None: - if ds_args.bias_gelu_fusion: - activation_function = "gelu_fast" - elif ds_args.openai_gelu: - activation_function = "gelu_new" - else: - activation_function = "gelu" - else: - # in the very early days this used to be "gelu_new" - activation_function = "gelu_new" - - # Spell out all parameters in case the defaults change. - config = GPT2Config( - vocab_size=50257, - n_positions=1024, - n_embd=1024, - n_layer=24, - n_head=16, - n_inner=4096, - activation_function=activation_function, - resid_pdrop=0.1, - embd_pdrop=0.1, - attn_pdrop=0.1, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - summary_type="cls_index", - summary_use_proj=True, - summary_activation=None, - summary_proj_to_labels=True, - summary_first_dropout=0.1, - scale_attn_weights=True, - use_cache=True, - bos_token_id=50256, - eos_token_id=50256, - ) - else: - config = GPT2Config.from_json_file(args.config_file) - - config.architectures = ["GPT2LMHeadModel"] - - # Convert. - print("Converting") - output_state_dict = convert_megatron_checkpoint(args, input_state_dict, config) - - # Print the structure of converted state dict. - if args.print_checkpoint_structure: - recursive_print(None, output_state_dict) - - # Add tokenizer class info to config - # see https://github.com/huggingface/transformers/issues/13906) - if ds_args is not None: - tokenizer_type = ds_args.tokenizer_type - if tokenizer_type == "GPT2BPETokenizer": - tokenizer_model_name = "gpt2" - elif tokenizer_type == "PretrainedFromHF": - tokenizer_model_name = ds_args.tokenizer_name_or_path - else: - raise ValueError(f"Unrecognized tokenizer_type {tokenizer_type}") - else: - tokenizer_model_name = "gpt2" - - tokenizer = AutoTokenizer.from_pretrained(tokenizer_model_name) - tokenizer_class = type(tokenizer).__name__ - config.tokenizer_class = tokenizer_class - - # Store the config to file. 
- print("Saving config") - config.save_pretrained(basename) - - # Save tokenizer based on args - print(f"Adding {tokenizer_class} tokenizer files") - tokenizer.save_pretrained(basename) - - # Store the state_dict to file. - output_checkpoint_file = os.path.join(basename, "pytorch_model.bin") - print(f'Saving checkpoint to "{output_checkpoint_file}"') - torch.save(output_state_dict, output_checkpoint_file) - - -#################################################################################################### - -if __name__ == "__main__": - main() - -#################################################################################################### diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/__init__.py deleted file mode 100644 index cab5af9af7c99775651e2f4a322265670676b8da..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/__init__.py +++ /dev/null @@ -1,80 +0,0 @@ -# coding=utf-8 -# Copyright 2023 Authors: Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, -# Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and The HuggingFace Inc. team. -# All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - is_torch_available, - is_vision_available, -) - - -_import_structure = { - "configuration_pvt": ["PVT_PRETRAINED_CONFIG_ARCHIVE_MAP", "PvtConfig", "PvtOnnxConfig"], -} - -try: - if not is_vision_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["image_processing_pvt"] = ["PvtImageProcessor"] - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_pvt"] = [ - "PVT_PRETRAINED_MODEL_ARCHIVE_LIST", - "PvtForImageClassification", - "PvtModel", - "PvtPreTrainedModel", - ] - - -if TYPE_CHECKING: - from .configuration_pvt import PVT_PRETRAINED_CONFIG_ARCHIVE_MAP, PvtConfig, PvtOnnxConfig - - try: - if not is_vision_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .image_processing_pvt import PvtImageProcessor - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_pvt import ( - PVT_PRETRAINED_MODEL_ARCHIVE_LIST, - PvtForImageClassification, - PvtModel, - PvtPreTrainedModel, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_anchor_generator.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_anchor_generator.py deleted file mode 100644 index 13a808e587382216da6fe7ee957603f448172657..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_anchor_generator.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import torch - -from detectron2.config import get_cfg -from detectron2.layers import ShapeSpec -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator, RotatedAnchorGenerator - -logger = logging.getLogger(__name__) - - -class TestAnchorGenerator(unittest.TestCase): - def test_default_anchor_generator(self): - cfg = get_cfg() - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]] - - anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)]) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - anchors = anchor_generator([features["stage3"]]) - expected_anchor_tensor = torch.tensor( - [ - [-32.0, -8.0, 32.0, 8.0], - [-16.0, -16.0, 16.0, 16.0], - [-8.0, -32.0, 8.0, 32.0], - [-64.0, -16.0, 64.0, 16.0], - [-32.0, -32.0, 32.0, 32.0], - [-16.0, -64.0, 16.0, 64.0], - [-28.0, -8.0, 36.0, 8.0], # -28.0 == -32.0 + STRIDE (4) - [-12.0, -16.0, 20.0, 16.0], - [-4.0, -32.0, 12.0, 32.0], - [-60.0, -16.0, 68.0, 16.0], - [-28.0, -32.0, 36.0, 32.0], - [-12.0, -64.0, 20.0, 64.0], - ] - ) - - self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor)) - - def test_default_anchor_generator_centered(self): - # test explicit args - anchor_generator = DefaultAnchorGenerator( - sizes=[32, 64], aspect_ratios=[0.25, 1, 4], strides=[4] - ) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - expected_anchor_tensor = torch.tensor( - [ - [-30.0, -6.0, 34.0, 10.0], - [-14.0, -14.0, 18.0, 18.0], - [-6.0, -30.0, 10.0, 34.0], - [-62.0, -14.0, 66.0, 18.0], - [-30.0, -30.0, 34.0, 34.0], - [-14.0, -62.0, 18.0, 66.0], - [-26.0, -6.0, 38.0, 10.0], - [-10.0, -14.0, 22.0, 18.0], - [-2.0, -30.0, 14.0, 34.0], - [-58.0, -14.0, 70.0, 18.0], - [-26.0, -30.0, 38.0, 34.0], - [-10.0, -62.0, 22.0, 66.0], - ] - ) - - anchors = anchor_generator([features["stage3"]]) - self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor)) - - anchors = torch.jit.script(anchor_generator)([features["stage3"]]) - self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor)) - - def test_rrpn_anchor_generator(self): - cfg = get_cfg() - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]] - cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [0, 45] # test single list[float] - anchor_generator = RotatedAnchorGenerator(cfg, [ShapeSpec(stride=4)]) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - anchors = anchor_generator([features["stage3"]]) - expected_anchor_tensor = torch.tensor( - [ - [0.0, 0.0, 64.0, 16.0, 0.0], - [0.0, 0.0, 64.0, 16.0, 45.0], - [0.0, 0.0, 32.0, 32.0, 0.0], - [0.0, 0.0, 32.0, 32.0, 45.0], - [0.0, 0.0, 16.0, 64.0, 0.0], - [0.0, 0.0, 16.0, 64.0, 45.0], - [0.0, 0.0, 128.0, 32.0, 0.0], - [0.0, 0.0, 128.0, 32.0, 45.0], - [0.0, 0.0, 64.0, 64.0, 0.0], - [0.0, 0.0, 64.0, 64.0, 45.0], - [0.0, 0.0, 32.0, 128.0, 0.0], - [0.0, 0.0, 32.0, 128.0, 45.0], - [4.0, 0.0, 64.0, 16.0, 0.0], # 4.0 == 0.0 + STRIDE (4) - [4.0, 0.0, 64.0, 16.0, 45.0], - [4.0, 0.0, 32.0, 32.0, 0.0], - [4.0, 0.0, 32.0, 32.0, 45.0], - [4.0, 0.0, 16.0, 64.0, 0.0], - [4.0, 0.0, 16.0, 64.0, 45.0], - [4.0, 0.0, 128.0, 32.0, 0.0], - [4.0, 0.0, 128.0, 32.0, 45.0], - [4.0, 0.0, 64.0, 64.0, 0.0], - [4.0, 0.0, 64.0, 64.0, 45.0], - [4.0, 0.0, 32.0, 
128.0, 0.0], - [4.0, 0.0, 32.0, 128.0, 45.0], - ] - ) - - self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ysharma/ControlNet_Image_Comparison/gradio_hed2image.py b/spaces/ysharma/ControlNet_Image_Comparison/gradio_hed2image.py deleted file mode 100644 index 9be9fff53bcebeb436084d99674bd37427e250bc..0000000000000000000000000000000000000000 --- a/spaces/ysharma/ControlNet_Image_Comparison/gradio_hed2image.py +++ /dev/null @@ -1,69 +0,0 @@ -# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hed2image.py -# The original license file is LICENSE.ControlNet in this repo. -import gradio as gr - - -def create_demo(process, max_images=12): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Control Stable Diffusion with HED Maps') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type='numpy') - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=1, - step=1) - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=768, - value=512, - step=256) - detect_resolution = gr.Slider(label='HED Resolution', - minimum=128, - maximum=1024, - value=512, - step=1) - ddim_steps = gr.Slider(label='Steps', - minimum=1, - maximum=100, - value=20, - step=1) - scale = gr.Slider(label='Guidance Scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=-1, - maximum=2147483647, - step=1, - randomize=True, - queue=False) - eta = gr.Number(label='eta (DDIM)', value=0.0) - a_prompt = gr.Textbox( - label='Added Prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative Prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result_gallery = gr.Gallery(label='Output', - show_label=False, - elem_id='gallery').style( - grid=2, height='auto') - ips = [ - input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed, eta - ] - run_button.click(fn=process, - inputs=ips, - outputs=[result_gallery], - api_name='hed') - return demo diff --git a/spaces/yuan1615/EmpathyTTS/synthesize_fastapi.py b/spaces/yuan1615/EmpathyTTS/synthesize_fastapi.py deleted file mode 100644 index 13e6da7fdb0d04091eece27acae182023ecc92d0..0000000000000000000000000000000000000000 --- a/spaces/yuan1615/EmpathyTTS/synthesize_fastapi.py +++ /dev/null @@ -1,101 +0,0 @@ -import struct -import re -import os -import torch -import commons -from text import text_to_sequence, prosody_to_sequence - - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -def get_prosody(text, hps): - text_norm = prosody_to_sequence(text) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -def read_lexicon(lex_path): - lexicon = {} - with open(lex_path) as f: - for line in f: - temp = re.split(r"\s+", line.strip("\n")) - word = temp[0] - phones = temp[1:] - if word.lower() not in lexicon: - 
lexicon[word.lower()] = phones - return lexicon - - -def g2p_mandarin(frontend, text): - result = frontend.gen_tacotron_symbols(text) - texts = [s for s in result.splitlines() if s != ''] - pinyin_list = [] - prosody_list = [] - for line in texts: - pinyin = [] - line = line.strip().split('\t')[1] - lfeat_symbol = line.strip().split(' ') - for this_lfeat_symbol in lfeat_symbol: - this_lfeat_symbol = this_lfeat_symbol.strip('{').strip('}').split( - '$') - if this_lfeat_symbol[2] == 's_begin': - if this_lfeat_symbol[0].split('_')[0] == 'xx': - pinyin.append('x') - else: - pinyin.append(this_lfeat_symbol[0].split('_')[0]) - else: - if this_lfeat_symbol[0].split('_')[0] == 'ih': - pinyin.append('iii' + this_lfeat_symbol[1][-1]) - elif this_lfeat_symbol[0].split('_')[0] in ['e', 'an', 'a', 'ang', 'ao', 'ou', 'ong'] and pinyin[-1] == 'y': - pinyin.append('i' + this_lfeat_symbol[0].split('_')[0] + this_lfeat_symbol[1][-1]) - elif this_lfeat_symbol[0].split('_')[0] == 'er': - pinyin.append('e' + this_lfeat_symbol[1][-1]) - pinyin.append('rr') - else: - if this_lfeat_symbol[1] == 'tone_none': - pinyin.append(this_lfeat_symbol[0]) - else: - pinyin.append(this_lfeat_symbol[0].split('_')[0] + this_lfeat_symbol[1][-1]) - prosody = ['I' for _ in pinyin] - for ii, p in enumerate(pinyin): - if p == '#1': - prosody[ii - 2:ii] = ['B1', 'B1'] - if p == "#2": - prosody[ii - 2:ii] = ['B2', 'B2'] - if p == "#3" or p == '#4': - prosody[ii - 2:ii] = ['B3', 'B3'] - ind = [] - for ii, p in enumerate(pinyin): - if p in ['ge', 'ga', 'go', '#1', '#2', '#3', '#4']: - ind.append(ii) - for a in ind[::-1]: - pinyin.pop(a) - prosody.pop(a) - pinyin = ' '.join(pinyin) - prosody = ' '.join(prosody) - pinyin_list.append(pinyin) - prosody_list.append(prosody) - return pinyin_list, prosody_list - - -def create_wav_header(audio_size: int, sampleRate:int, bits:int, channel:int): - header = b'' - header += b"RIFF" - header += struct.pack('i', int(audio_size + 44 - 8)) - header += b"WAVEfmt " - header += b'\x10\x00\x00\x00' - header += b'\x01\x00' - header += struct.pack('H', channel) - header += struct.pack('i', sampleRate) - header += struct.pack('i', int(sampleRate * bits / 8)) - header += struct.pack('H', int(channel * bits / 8)) - header += struct.pack('H', bits) - header += b'data' - header += struct.pack('i', audio_size) - return header diff --git a/spaces/yuan2023/img-to-music/README.md b/spaces/yuan2023/img-to-music/README.md deleted file mode 100644 index ff1948d1b95ee1f8d7a3396aefb285c729d18687..0000000000000000000000000000000000000000 --- a/spaces/yuan2023/img-to-music/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Img To Music -emoji: 🌅🎶 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: true -duplicated_from: fffiloni/img-to-music ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/yunfei0710/gpt-academic/request_llm/bridge_tgui.py b/spaces/yunfei0710/gpt-academic/request_llm/bridge_tgui.py deleted file mode 100644 index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/request_llm/bridge_tgui.py +++ /dev/null @@ -1,171 +0,0 @@ -''' -Contributed by SagsMug. 
Modified by binary-husky -https://github.com/oobabooga/text-generation-webui/pull/175 -''' - -import asyncio -import json -import random -import string -import websockets -import logging -import time -import threading -import importlib -from toolbox import get_conf, update_ui - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context, max_token, temperature, top_p, addr, port): - params = { - 'max_new_tokens': max_token, - 'do_sample': True, - 'temperature': temperature, - 'top_p': top_p, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'encoder_repetition_penalty': 1.0, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': True, - 'seed': -1, - } - session = random_hash() - - async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - if content["msg"] == "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12 - })) - elif content["msg"] == "estimation": - pass - elif content["msg"] == "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['encoder_repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - params['seed'], - ] - })) - elif content["msg"] == "process_starts": - pass - elif content["msg"] in ["process_generating", "process_completed"]: - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - - - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = "What I would like to say is the following: " + inputs - history.extend([inputs, ""]) - chatbot.append([inputs, ""]) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - prompt = raw_input - tgui_say = "" - - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" 
+ llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - mutable = ["", time.time()] - def run_coorotine(mutable): - async def get_result(mutable): - # "tgui:galactica-1.3b@localhost:7860" - - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(mutable[0]):]) - mutable[0] = response - if (time.time() - mutable[1]) > 3: - print('exit when no listener') - break - asyncio.run(get_result(mutable)) - - thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True) - thread_listen.start() - - while thread_listen.is_alive(): - time.sleep(1) - mutable[1] = time.time() - # Print intermediate steps - if tgui_say != mutable[0]: - tgui_say = mutable[0] - history[-1] = tgui_say - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - raw_input = "What I would like to say is the following: " + inputs - prompt = raw_input - tgui_say = "" - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - def run_coorotine(observe_window): - async def get_result(observe_window): - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(observe_window[0]):]) - observe_window[0] = response - if (time.time() - observe_window[1]) > 5: - print('exit when no listener') - break - asyncio.run(get_result(observe_window)) - thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,)) - thread_listen.start() - return observe_window[0] diff --git a/spaces/z-uo/HTS-Audio-Transformer/README.md b/spaces/z-uo/HTS-Audio-Transformer/README.md deleted file mode 100644 index 43fdd9996c1972a03c60a5dd6d59b3bc4ee618fc..0000000000000000000000000000000000000000 --- a/spaces/z-uo/HTS-Audio-Transformer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HTS Audio Transformer -emoji: 🔥 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhanpj/ChatGPT/modules/openai_func.py b/spaces/zhanpj/ChatGPT/modules/openai_func.py deleted file mode 100644 index fb07b16235476360ccc48849f5f9e761630efec3..0000000000000000000000000000000000000000 --- a/spaces/zhanpj/ChatGPT/modules/openai_func.py +++ /dev/null @@ -1,82 +0,0 @@ -import requests -import logging -from modules.presets import ( - timeout_all, - USAGE_API_URL, - BALANCE_API_URL, - standard_error_msg, - connection_timeout_prompt, - error_retrieve_prompt, - read_timeout_prompt -) - -from modules import shared -from modules.utils import get_proxies -import os, datetime - -def get_billing_data(openai_api_key, billing_url): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - timeout = timeout_all - proxies = get_proxies() - response = requests.get( - billing_url, - headers=headers, - timeout=timeout, - proxies=proxies, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception(f"API request failed with status code 
{response.status_code}: {response.text}") - - -def get_usage(openai_api_key): - try: - balance_data=get_billing_data(openai_api_key, BALANCE_API_URL) - logging.debug(balance_data) - try: - balance = balance_data["total_available"] if balance_data["total_available"] else 0 - total_used = balance_data["total_used"] if balance_data["total_used"] else 0 - usage_percent = round(total_used / (total_used+balance) * 100, 2) - except Exception as e: - logging.error(f"API使用情况解析失败:"+str(e)) - balance = 0 - total_used=0 - return f"**API使用情况解析失败**" - if balance == 0: - last_day_of_month = datetime.datetime.now().strftime("%Y-%m-%d") - first_day_of_month = datetime.datetime.now().replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{USAGE_API_URL}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = get_billing_data(openai_api_key, usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:"+str(e)) - return f"**获取API使用情况失败**" - return f"**本月使用金额** \u3000 ${usage_data['total_usage'] / 100}" - - # return f"**免费额度**(已用/余额)\u3000${total_used} / ${balance}" - return f"""\ - 免费额度使用情况 -
-        <div class="progress-bar">
-            <div class="progress" style="width: {usage_percent}%;">
-                <span class="progress-text">{usage_percent}%</span>
-            </div>
-        </div>
-        <div style="display: flex; justify-content: space-between;"><span>已用 ${total_used}</span><span>可用 ${balance}</span></div>
                    - """ - - except requests.exceptions.ConnectTimeout: - status_text = standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - return status_text - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - return status_text - except Exception as e: - logging.error(f"获取API使用情况失败:"+str(e)) - return standard_error_msg + error_retrieve_prompt diff --git a/spaces/zhicheng127/White-box-Cartoonization/README.md b/spaces/zhicheng127/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/zhicheng127/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/logger.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/logger.py deleted file mode 100644 index 0f2e4dc66099c7e4784e37ab924e8594ffa03e27..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/logger.py +++ /dev/null @@ -1,49 +0,0 @@ -""" -@Date: 2021/07/17 -@description: -""" -import os -import sys -import logging -import functools -from termcolor import colored - - -def build_logger(config): - output_dir = config.LOGGER.DIR - local_rank = config.LOCAL_RANK - name = config.MODEL.NAME - logger = get_logger(output_dir, local_rank, name) - return logger - - -@functools.lru_cache() -def get_logger(output_dir=None, local_rank=None, name="PLTNet"): - if output_dir and not os.path.exists(output_dir): - os.makedirs(output_dir) - - # create logger - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - logger.propagate = False - - # create formatter - fmt = f'[%(asctime)s %(name)s][%(levelname)1.1s](%(filename)s %(lineno)d): %(message)s' - color_fmt = colored(f'[%(asctime)s %(name)s][%(levelname)1.1s][{local_rank}]', 'green') + colored( - f'(%(filename)s %(lineno)d)', - 'yellow') + ': %(message)s' - if local_rank in [0] or local_rank is None: - console_handler = logging.StreamHandler(sys.stdout) - console_handler.setLevel(logging.DEBUG) - console_handler.setFormatter( - logging.Formatter(fmt=color_fmt, datefmt='%Y-%m-%d %H:%M:%S')) - logger.addHandler(console_handler) - - if output_dir is not None: - # create file handlers - file_handler = logging.FileHandler(os.path.join(output_dir, f'log_rank{local_rank}.log'), mode='a') - file_handler.setLevel(logging.DEBUG) - file_handler.setFormatter(logging.Formatter(fmt=fmt, datefmt='%Y-%m-%d %H:%M:%S')) - logger.addHandler(file_handler) - - return logger diff --git a/spaces/zhoupin30/zhoupin30/src/components/tailwind-indicator.tsx b/spaces/zhoupin30/zhoupin30/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
-    <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
                    - ) -} diff --git a/spaces/zhuce/vits/text/symbols.py b/spaces/zhuce/vits/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/zhuce/vits/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/zhuolisam/resume-ranker/embedding.py b/spaces/zhuolisam/resume-ranker/embedding.py deleted file mode 100644 index 633e2fb903423a33e3823b84dc40de7c3bfb86cc..0000000000000000000000000000000000000000 --- a/spaces/zhuolisam/resume-ranker/embedding.py +++ /dev/null @@ -1,24 +0,0 @@ -from sklearn.feature_extraction.text import TfidfVectorizer -from sentence_transformers import SentenceTransformer -import os - -def embedding(documents, embedding='bert'): - if embedding == 'bert': - sbert_model = SentenceTransformer('bert-base-nli-mean-tokens', cache_folder=os.path.join(os.getcwd(), 'embedding')) - - document_embeddings = sbert_model.encode(documents) - return document_embeddings - - if embedding == 'minilm': - sbert_model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2', cache_folder=os.path.join(os.getcwd(), 'embedding')) - - document_embeddings = sbert_model.encode(documents) - return document_embeddings - - if embedding == 'tfidf': - word_vectorizer = TfidfVectorizer( - sublinear_tf=True, stop_words='english') - word_vectorizer.fit(documents) - word_features = word_vectorizer.transform(documents) - - return word_features diff --git a/spaces/zlc99/M4Singer/vocoders/hifigan.py b/spaces/zlc99/M4Singer/vocoders/hifigan.py deleted file mode 100644 index 810d3c931b556387f8a2e85537f4964add1e76b0..0000000000000000000000000000000000000000 --- a/spaces/zlc99/M4Singer/vocoders/hifigan.py +++ /dev/null @@ -1,76 +0,0 @@ -import glob -import json -import os -import re - -import librosa -import torch - -import utils -from modules.hifigan.hifigan import HifiGanGenerator -from utils.hparams import hparams, set_hparams -from vocoders.base_vocoder import register_vocoder -from vocoders.pwg import PWG -from vocoders.vocoder_utils import denoise - - -def load_model(config_path, checkpoint_path): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - ckpt_dict = torch.load(checkpoint_path, map_location="cpu") - if '.yaml' in config_path: - config = set_hparams(config_path, global_hparams=False) - state = ckpt_dict["state_dict"]["model_gen"] - elif '.json' in config_path: - config = json.load(open(config_path, 'r')) - state = ckpt_dict["generator"] - - model = HifiGanGenerator(config) - model.load_state_dict(state, strict=True) - model.remove_weight_norm() - model = model.eval().to(device) - print(f"| Loaded model parameters from {checkpoint_path}.") - 
print(f"| HifiGAN device: {device}.") - return model, config, device - - -total_time = 0 - - -@register_vocoder -class HifiGAN(PWG): - def __init__(self): - base_dir = hparams['vocoder_ckpt'] - config_path = f'{base_dir}/config.yaml' - if os.path.exists(config_path): - ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key= - lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1] - print('| load HifiGAN: ', ckpt) - self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt) - else: - config_path = f'{base_dir}/config.json' - ckpt = f'{base_dir}/generator_v1' - if os.path.exists(config_path): - self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt) - - def spec2wav(self, mel, **kwargs): - device = self.device - with torch.no_grad(): - c = torch.FloatTensor(mel).unsqueeze(0).transpose(2, 1).to(device) - with utils.Timer('hifigan', print_time=hparams['profile_infer']): - f0 = kwargs.get('f0') - if f0 is not None and hparams.get('use_nsf'): - f0 = torch.FloatTensor(f0[None, :]).to(device) - y = self.model(c, f0).view(-1) - else: - y = self.model(c).view(-1) - wav_out = y.cpu().numpy() - if hparams.get('vocoder_denoise_c', 0.0) > 0: - wav_out = denoise(wav_out, v=hparams['vocoder_denoise_c']) - return wav_out - - # @staticmethod - # def wav2spec(wav_fn, **kwargs): - # wav, _ = librosa.core.load(wav_fn, sr=hparams['audio_sample_rate']) - # wav_torch = torch.FloatTensor(wav)[None, :] - # mel = mel_spectrogram(wav_torch, hparams).numpy()[0] - # return wav, mel.T diff --git a/spaces/zmengaf/comp652_final_demo/README.md b/spaces/zmengaf/comp652_final_demo/README.md deleted file mode 100644 index fafe388834f7d9fedb5f0872c5f6e4cc1ccd58a5..0000000000000000000000000000000000000000 --- a/spaces/zmengaf/comp652_final_demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Comp652 Final Demo -emoji: 🐠 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zomehwh/sovits-goldship/README.md b/spaces/zomehwh/sovits-goldship/README.md deleted file mode 100644 index 4d7e0def3a0f009fae1a536d0dfbfb9ca0fc9d13..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-goldship/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sovits Goldship -emoji: 🎙️ -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: sayashi/sovits-models ---- diff --git a/spaces/zzzzred/extras/tts_edge.py b/spaces/zzzzred/extras/tts_edge.py deleted file mode 100644 index 7031e18e0b836ec64254e5637f7e10b775c871a0..0000000000000000000000000000000000000000 --- a/spaces/zzzzred/extras/tts_edge.py +++ /dev/null @@ -1,34 +0,0 @@ -import io -import edge_tts -import asyncio - - -def get_voices(): - voices = asyncio.run(edge_tts.list_voices()) - return voices - - -async def _iterate_chunks(audio): - async for chunk in audio.stream(): - if chunk["type"] == "audio": - yield chunk["data"] - - -async def _async_generator_to_list(async_gen): - result = [] - async for item in async_gen: - result.append(item) - return result - - -def generate_audio(text: str, voice: str, rate: int) -> bytes: - sign = '+' if rate > 0 else '-' - rate = f'{sign}{abs(rate)}%' - audio = edge_tts.Communicate(text=text, voice=voice, rate=rate) - chunks = 
asyncio.run(_async_generator_to_list(_iterate_chunks(audio))) - buffer = io.BytesIO() - - for chunk in chunks: - buffer.write(chunk) - - return buffer.getvalue()
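
For reference, a minimal usage sketch of the generate_audio helper from the deleted tts_edge.py above. This is an illustration only, not part of the original repo: it assumes the file is importable as a tts_edge module, that the edge-tts package is installed, and that the Edge TTS service is reachable; the voice name is taken from whatever get_voices returns rather than hard-coded.

    # Hypothetical driver script for the deleted tts_edge.py helpers (assumption: run next to tts_edge.py).
    from tts_edge import get_voices, generate_audio

    voices = get_voices()                 # list of voice dicts from edge-tts; each has a "ShortName"
    voice_name = voices[0]["ShortName"]   # pick any available voice

    # rate is an integer percent offset; generate_audio formats it as "+10%" / "-10%"
    mp3_bytes = generate_audio("Hello from Edge TTS.", voice=voice_name, rate=10)

    with open("output.mp3", "wb") as f:   # Edge TTS streams MP3 audio by default
        f.write(mp3_bytes)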