diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Internet Download Manager with Crack.rar How to Install and Use IDM with Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Internet Download Manager with Crack.rar How to Install and Use IDM with Crack.md deleted file mode 100644 index 43a784c3e0e64c690f44e00d93d4e896bc0397bc..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Internet Download Manager with Crack.rar How to Install and Use IDM with Crack.md +++ /dev/null @@ -1,151 +0,0 @@ -
-

Sri Lalitha Sahasranamam Lyrics In Tamil Pdf Downloadl

-

If you are a devotee of Goddess Lalitha, the Divine Mother, you might be interested in downloading Sri Lalitha Sahasranamam lyrics in Tamil pdf. Sri Lalitha Sahasranamam is a sacred Hindu text that contains the thousand names of Goddess Lalitha, who is also known as Lalita Devi, Tripura Sundari, Shodashi, Rajarajeshwari, and many other names. In this article, we will tell you what Sri Lalitha Sahasranamam is, how to download it in Tamil pdf format, and how to chant it for maximum benefits.

-

What is Sri Lalitha Sahasranamam?

-

Sri Lalitha Sahasranamam is a part of the Brahmanda Purana, one of the 18 major Puranas in Hinduism. It is a hymn that praises Goddess Lalitha as the supreme power and creator of the universe. It describes her various attributes, qualities, forms, manifestations, and deeds. It also reveals her secret names that can grant various boons and blessings to her devotees.

-

Sri Lalitha Sahasranamam Lyrics In Tamil Pdf Downloadl


Download Ziphttps://byltly.com/2uKyK6



-

The origin and meaning of Sri Lalitha Sahasranamam

-

According to the legend, Sri Lalitha Sahasranamam was revealed by Lord Hayagriva, an incarnation of Lord Vishnu, to Sage Agastya, one of the seven great sages in Hinduism. Lord Hayagriva told Sage Agastya the story of how Goddess Lalitha incarnated as the daughter of Himalaya, the king of mountains, and married Lord Shiva, the destroyer of evil. He also narrated how she fought and killed a powerful demon named Bhandasura, who was created from the ashes of Kamadeva, the god of love. He then taught him the thousand names of Goddess Lalitha that can please her and invoke her grace.

-

The meaning of Sri Lalitha Sahasranamam is "the thousand names of Sri Lalitha". The word "Sri" means auspiciousness, wealth, beauty, grace, and respect. The word "Lalitha" means playful, charming, delightful, graceful, and lovely. The word "Sahasranama" means thousand names. Each name of Goddess Lalitha has a deep meaning and significance that reflects her various aspects and powers. Some of her names are:

- -

And so on...

-

The benefits and significance of Sri Lalitha Sahasranamam

-

Sri Lalitha Sahasranamam is not just a hymn but a powerful mantra that can bestow various benefits on those who recite it with devotion and faith. Some of the benefits are:

-

Sri Lalitha Sahasranamam Tamil Script Pdf Free Download
-Lalitha Sahasranamam Lyrics in Tamil with Meaning Pdf
-Sri Lalitha Sahasranama Stotram Tamil Pdf Austin Hindu Temple
-Lalitha Sahasranamam in Tamil Pdf Download New Scientist
-Sri Lalitha Sahasranamam Stotram in Tamil Bhaktinidhi
-Lalitha Sahasranamam Tamil Pdf Free Download Aanmeegam
-Sri Lalitha Sahasranama Stotram Tamil Lyrics Pdf
-Lalitha Sahasranamam in Tamil with Audio Pdf Download
-Sri Lalitha Sahasranamam Tamil Script Austin Hindu Temple Pdf
-Lalitha Sahasranamam Lyrics in Tamil Wikipedia Pdf
-Sri Lalitha Sahasranama Stotram Tamil Translation Pdf
-Lalitha Sahasranamam in Tamil by Bombay Sisters Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Meaning Pdf
-Lalitha Sahasranamam Lyrics in Tamil Font Pdf
-Sri Lalitha Sahasranama Stotram Tamil Mp3 Free Download Pdf
-Lalitha Sahasranamam in Tamil by MS Subbulakshmi Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Audio Pdf
-Lalitha Sahasranamam Lyrics in Tamil and English Pdf
-Sri Lalitha Sahasranama Stotram Tamil Book Pdf
-Lalitha Sahasranamam in Tamil by Sivananda Vijayalakshmi Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Commentary Pdf
-Lalitha Sahasranamam Lyrics in Tamil Youtube Pdf
-Sri Lalitha Sahasranama Stotram Tamil Video Download Pdf
-Lalitha Sahasranamam in Tamil by Nitya Santhoshini Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Explanation Pdf
-Lalitha Sahasranamam Lyrics in Tamil Printable Pdf
-Sri Lalitha Sahasranama Stotram Tamil Online Read Pdf
-Lalitha Sahasranamam in Tamil by Priya Sisters Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Benefits Pdf
-Lalitha Sahasranamam Lyrics in Tamil for Beginners Pdf
-Sri Lalitha Sahasranama Stotram Tamil Karaoke Download Pdf
-Lalitha Sahasranamam in Tamil by Anuradha Paudwal Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Phala Sruthi Pdf
-Lalitha Sahasranamam Lyrics in Tamil for Recitation Pdf
-Sri Lalitha Sahasranama Stotram Tamil Notes Download Pdf
-Lalitha Sahasranamam in Tamil by Uma Mohan Pdf Download
-Sri Lalitha Sahasranamam Tamil Script with Namavali Pdf
-Lalitha Sahasranamam Lyrics in Tamil for Meditation Pdf
-Sri Lalitha Sahasranama Stotram Tamil Parayanam Download Pdf
-Lalitha Sahasranamam in Tamil by Sowmya Narayanan Pdf Download

- -

The significance of Sri Lalitha Sahasranamam is that it reveals the true nature and glory of Goddess Lalitha as the supreme reality and source of everything. It teaches us how to worship her with love and devotion, helps us understand ourselves better as reflections of her divine attributes, and guides us to attain liberation from the cycle of birth and death by merging with her supreme self.

-

How to download Sri Lalitha Sahasranamam lyrics in Tamil pdf?

-

If you want to download Sri Lalitha Sahasranamam lyrics in Tamil pdf format for your convenience and ease of reading, you can follow these steps:

-

The sources and steps to download Sri Lalitha Sahasranama lyrics in Tamil pdf

- -

The tips and precautions to download Sri Lalitha Sahasranama lyrics in Tamil pdf

- -

How to chant Sri Lalitha Sahasranamam?

-

Chanting Sri Lalitha Sahasranamam is a simple and effective way to worship Goddess Lalitha and receive her grace and blessings. However, there are some guidelines and rules that one should follow to chant it properly and correctly. Here are some of them:

-

The best time and place to chant Sri Lalitha Sahasranamam

- -

The procedure and rules to chant Sri Lalitha Sahasranamam

- -

The effects and experiences of chanting Sri Lalitha Sahasranamam

- -

Conclusion

- If you want to download Sri Lalitha Sahasranamam lyrics in Tamil pdf, you can follow the steps and tips given in this article. You can also chant Sri Lalitha Sahasranama with devotion and faith to receive her grace and blessings. We hope you enjoyed reading this article and learned something new and useful. Thank you for your time and attention.

-

FAQs

-

Here are some frequently asked questions about Sri Lalitha Sahasranama and their answers.

-
    -
  1. What is the meaning of Lalitha?
  2. -

    Lalitha means playful, charming, delightful, graceful, and lovely. It is one of the names of Goddess Lalitha, who is also known as Lalita Devi, Tripura Sundari, Shodashi, Rajarajeshwari, and many other names.

    -
  3. Who wrote Sri Lalitha Sahasranama?
  4. -

    Sri Lalitha Sahasranama was revealed by Lord Hayagriva, an incarnation of Lord Vishnu, to Sage Agastya, one of the seven great sages in Hinduism. It is a part of the Brahmanda Purana, one of the 18 major Puranas in Hinduism.

    -
  5. How many times should one chant Sri Lalitha Sahasranama?
  6. -

    One should chant Sri Lalitha Sahasranama 108 times or any multiple of 9 times. One can also chant it as many times as one wishes or as per one's convenience and availability of time.

    -
  7. What are the benefits of chanting Sri Lalitha Sahasranama?
  8. -

    Chanting Sri Lalitha Sahasranama can bestow various benefits on the chanter, such as peace, happiness, prosperity, health, wealth, fame, success, protection, fulfillment of desires, spiritual awakening, and liberation.

    -
  9. What are the rules to chant Sri Lalitha Sahasranama?
  10. -

    Some of the rules to chant Sri Lalitha Sahasranama are to chant it with devotion, concentration, and understanding; to chant it with a clear and loud voice; to chant it without any interruptions or mistakes; to chant it with a rosary or a mala; to chant it after invoking Goddess Lalitha with her dhyana, panchapuja, and mula mantra; to chant it by following the order of the names; and to chant it by offering a flower or a leaf to Goddess Lalitha after each name.

    -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Carmen Serban Cu El Numai Cu El Zippy Flight Floyd Aerea Unlimited.md b/spaces/1gistliPinn/ChatGPT4/Examples/Carmen Serban Cu El Numai Cu El Zippy Flight Floyd Aerea Unlimited.md deleted file mode 100644 index f76e74108b849e593132f58f3f33925a2046d937..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Carmen Serban Cu El Numai Cu El Zippy Flight Floyd Aerea Unlimited.md +++ /dev/null @@ -1,6 +0,0 @@ -
-

Take the latest episodes, select the episode you want to download, and click on the download button. You have to seek tips from the airport staff to make your travel safe and your holiday memorable. Carmen Serban Cu El Numai Cu El Zippy flight floyd aerea unlimited ->>> download 100 movies with complete script.

-

Carmen Serban Cu El Numai Cu El Zippy flight floyd aerea unlimited


DOWNLOAD - https://imgfil.com/2uxYhM



-

Carmen Serban Cu El Numai Cu El Zippy was very popular in its last period, because it was a functional airport, but it needs staff and people to solve problems. If you are in a difficult area, a workplace, you can go into a difficult area, behind the wheel. I have been behind the wheel many times, so it would be good for me to go to this area and know what is happening from one moment to the next. But it was a great pleasure, because there were many people you could remember, and they talked about who knew what to do, what they needed, and whether they needed it. But very good, in general. Carmen Serban was at that moment in the Iasi airport, and it would be good for me to meet her here and meet with her before walking on. Because that is how the community was, it was a kind of friendship, and she wanted to find out what you expect when you talk to her. If you talk to people about the flight that did not take her away, while it is a plane that might rain or fly, or from who knows who. Some fly unspeakably like that, because such a thing has not happened in so many years. Whether it happens or not, it is a kind of adventure. Some of these people did not know how to take over a plane if they did not have the technology, if they did not have the technology to take over the plane.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - The Ultimate Adventure Game for Crime Lovers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - The Ultimate Adventure Game for Crime Lovers.md deleted file mode 100644 index b44225f9da8d3ead8a537003090a67c2c7e5d73c..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - The Ultimate Adventure Game for Crime Lovers.md +++ /dev/null @@ -1,115 +0,0 @@ - -

Criminal Case World Mod Apk: A Guide for Crime Solvers

-

If you are a fan of detective stories and hidden object games, you might have heard of Criminal Case, one of the most popular and addictive games on Facebook. But did you know that there is a mod apk version of the game that gives you unlimited energy and hints, as well as access to all the cases and features? In this article, we will tell you everything you need to know about Criminal Case World Mod Apk, including what it is, how to download and install it, why you should play it, and some tips and tricks to help you solve crimes faster and easier.

-

What is Criminal Case World Mod Apk?

-

A brief introduction to the game and its features

-

Criminal Case is a hidden object game that puts you in the role of a detective who investigates murder cases in different locations around the world. You have to search for clues in crime scenes, examine evidence in the lab, interrogate suspects and witnesses, and bring the killer to justice. Along the way, you will also meet various characters, such as your partner, your boss, your forensic team, and other police officers.

-

criminal case world mod apk


Download File ··· https://urlin.us/2uT0ND



-

Criminal Case World Mod Apk is a modified version of the game that gives you some advantages over the original one. For example, you will have unlimited energy and hints, which means you can play as long as you want without waiting for them to refill. You will also be able to unlock all the cases and features in the game, such as new locations, new outfits, new pets, new trophies, and more. You will also be able to play with your friends who are also using the mod apk version.

-

How to download and install the mod apk

-

To download and install Criminal Case World Mod Apk, you will need an Android device that meets the minimum requirements of the game. You will also need to enable unknown sources in your device settings, so that you can install apps from outside the Google Play Store. Here are the steps to follow:

-
    -
  1. Go to [this link] and download the mod apk file.
  2. -
  3. Locate the file in your device storage and tap on it to start the installation process.
  4. -
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. -
  7. Launch the game from your app drawer or home screen.
  8. -
  9. Enjoy playing Criminal Case World Mod Apk with unlimited energy and hints.
  10. -
-

Why Play Criminal Case World Mod Apk?

-

The benefits of playing with unlimited energy and hints

-

One of the main reasons why you should play Criminal Case World Mod Apk is that you will never run out of energy or hints while playing. Energy is used to enter crime scenes and mini-games, while hints are used to highlight objects or areas that are relevant to the investigation. In the original game, both energy and hints are limited and take time to regenerate. This can be frustrating if you want to play more or if you are stuck on a difficult scene. With the mod apk version, you can play without any interruptions or limitations. You can also use hints more freely to help you find clues faster and easier.

-

The challenges and rewards of solving murder cases

-

Another reason why you should play Criminal Case World Mod Apk is that you will experience the thrill and satisfaction of solving murder cases. Each case has a unique story, a different setting, and a diverse cast of characters. You will have to use your observation skills, your logic, and your intuition to find the evidence, analyze it, and deduce the killer. You will also have to face some twists and turns along the way, such as false leads, red herrings, and unexpected revelations. Solving cases will not only test your intelligence, but also your morality and your empathy.

-

criminal case save the world mod apk unlimited money
-criminal case world edition mod apk latest version
-criminal case world edition mod apk android 1
-criminal case world edition mod apk happymod
-criminal case world edition mod apk revdl
-criminal case world edition mod apk rexdl
-criminal case world edition mod apk download for pc
-criminal case world edition mod apk offline
-criminal case world edition mod apk unlimited energy
-criminal case world edition mod apk unlimited stars
-criminal case world edition mod apk unlimited hints
-criminal case world edition mod apk free shopping
-criminal case world edition mod apk no ads
-criminal case world edition mod apk 2023
-criminal case world edition mod apk 2022
-criminal case save the world hack apk download
-criminal case save the world cheat apk
-criminal case save the world premium apk
-criminal case save the world pro apk
-criminal case save the world cracked apk
-criminal case save the world full apk
-criminal case save the world unlocked apk
-criminal case save the world mega mod apk
-criminal case save the world god mode apk
-criminal case save the world vip mod apk
-how to install criminal case world mod apk
-how to play criminal case world mod apk
-how to update criminal case world mod apk
-how to get criminal case world mod apk
-how to download criminal case world mod apk on ios
-best site to download criminal case world mod apk
-best way to download criminal case world mod apk
-best source for criminal case world mod apk
-best alternative for criminal case world mod apk
-best features of criminal case world mod apk
-benefits of using criminal case world mod apk
-advantages of using criminal case world mod apk
-disadvantages of using criminal case world mod apk
-risks of using criminal case world mod apk
-reviews of criminal case world mod apk

-

As you solve cases, you will also earn rewards, such as stars, coins, cash, and experience points. Stars are used to unlock new scenes and mini-games, as well as to perform certain actions, such as examining evidence or interrogating suspects. Coins are used to buy items in the shop, such as clothes, accessories, pets, and boosters. Cash is used to buy premium items, such as energy refills, hints, or special outfits. Experience points are used to level up and unlock new features and cases.

-

The fun and excitement of playing with friends

-

A third reason why you should play Criminal Case World Mod Apk is that you will have more fun and excitement by playing with your friends. You can connect your game account to your Facebook account and invite your friends who are also using the mod apk version to join you. You can then team up with them to solve cases together, or compete with them to see who can score higher or rank higher in the leaderboards. You can also chat with them, send them gifts, ask them for help, or help them in return.

-

Playing with friends will not only make the game more enjoyable, but also more social and interactive. You can share your opinions, your theories, your strategies, and your emotions with your friends. You can also learn from them, challenge them, support them, and congratulate them. Playing with friends will also motivate you to play more and improve your skills.

-

Tips and Tricks for Criminal Case World Mod Apk

-

How to rank up and earn stars faster

-

If you want to rank up and earn stars faster in Criminal Case World Mod Apk, here are some tips and tricks that you can follow:

- -

How to use boosters and power-ups effectively

-

Boosters and power-ups are very useful items that can help you solve cases faster and easier in Criminal Case World Mod Apk. However, they are also limited and costly, so you need to use them wisely. Here are some tips on how to use boosters and power-ups effectively:

- -

How to find clues and evidence easily

-

Clues and evidence are essential items that can help you solve cases and identify the killer in Criminal Case World Mod Apk. However, they are not always easy to find or recognize in the scenes or the mini-games. Here are some tips on how to find clues and evidence easily:

- -

Conclusion

-

Criminal Case World Mod Apk is a great game for anyone who loves detective stories and hidden object games. It offers unlimited energy and hints, as well as access to all the cases and features in the game. It also allows you to play with your friends who are also using the mod apk version. It is a fun and exciting way to test your intelligence, your morality, and your empathy as you solve murder cases around the world. If you want to download and install Criminal Case World Mod Apk, just follow the steps that we have provided in this article. And if you want to rank up and earn stars faster, use boosters and power-ups effectively, and find clues and evidence easily, just follow the tips and tricks that we have shared with you. We hope that this article has been helpful and informative for you. Now, what are you waiting for? Grab your magnifying glass and your badge, and start solving crimes with Criminal Case World Mod Apk!

-

FAQs

-

Q1: Is Criminal Case World Mod Apk safe to use?

-

A1: Yes, Criminal Case World Mod Apk is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should be aware that using mod apk versions of games may violate their terms of service and may result in your account being banned or suspended by the developers. Therefore, use it at your own risk and discretion.

-

Q2: How can I update Criminal Case World Mod Apk?

-

A2: To update Criminal Case World Mod Apk, you will need to download the latest version of the mod apk file from [this link] and install it over the existing one. You do not need to uninstall the previous version first. However, you should back up your game data before updating, in case something goes wrong during the process.

-

Q3: How can I get more friends to play with?

-

A3: To get more friends to play with in Criminal Case World Mod Apk, you can invite your existing Facebook friends who are also using the mod apk version to join you. You can also join online communities and groups of Criminal Case players who are looking for new friends and partners. You can also add random players who appear in your game as potential friends.

-

Q4: How can I report a bug or a problem with the game?

-

A4: To report a bug or a problem with Criminal Case World Mod Apk, you can contact the developers or the support team through their official website [here]. You can also leave a comment or a review on [this page] where you downloaded the mod apk file. Please provide as much detail as possible about the issue that you encountered, such as when it happened, what you were doing, what device you were using, what error message you received, etc.

-

Q5: How can I contact the developers or the support team?

-

A5: To contact the developers or the support team of Criminal Case World Mod Apk, you can use one of the following methods:

- -

We hope that this article has answered all your questions about Criminal Case World Mod Apk. If you have any other questions, feel free to contact us through any of the methods above. Thank you for reading and happy crime solving!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download !NEW! 1o5 Version Please Open Via Salary.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download !NEW! 1o5 Version Please Open Via Salary.md deleted file mode 100644 index 0dc72fd16773eda960a459445e6b87e6d038fbe4..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download !NEW! 1o5 Version Please Open Via Salary.md +++ /dev/null @@ -1,49 +0,0 @@ - -

What is 1o5 version and why you should download it

-

Have you ever wondered how you can pay your salary and suppliers online without hassle or fees? If so, you might want to check out 1o5 version, a new way to make payments with your phone using Google Pay or WhatsApp.

-

1o5 version is a solution that integrates directly with your Sage software, allowing you to send and receive money instantly, securely, and conveniently. You can also earn rewards, discover offers, and understand your spending with Google Pay. Or you can enjoy private messaging, voice and video calls, and group chats with WhatsApp.

-

download 1o5 version please open via salary*


DOWNLOAD ✏ ✏ ✏ https://urlin.us/2uT0in



-

In this article, we will show you how to download 1o5 version on your device, how to open it via salary*, and what benefits you can get from using it. We will also answer some frequently asked questions about this innovative payment method.

-

How to download 1o5 version on your device

-

Downloading 1o5 version is easy and free. All you need is a smartphone or tablet that supports Google Pay or WhatsApp. Here are the steps to follow:

-
    -
  1. Go to the Google Play Store or the App Store and search for "Google Pay: Save and Pay" or "WhatsApp Messenger".
  2. -
  3. Tap on the app icon and then tap on "Install".
  4. -
  5. Open the app and follow the instructions to set up your account and link your bank card.
  6. -
-

Congratulations! You have successfully downloaded 1o5 version on your device. Now you are ready to open it via salary*.

-

How to open 1o5 version via salary*

-

Opening 1o5 version via salary* is simple and fast. All you need is a Sage account that supports salary and supplier payments. Here are the steps to follow:

-
    -
  1. Log in to your Sage account and go to the "Salary and Supplier Payments" section.
  2. -
  3. Choose the option to pay your staff or suppliers with Google Pay or WhatsApp.
  4. -
  5. Enter the amount, the recipient's phone number, and a reference.
  6. -
  7. Confirm the payment and send it.
  8. -
-

That's it! You have successfully opened 1o5 version via salary* and made a payment with your phone. You will receive a confirmation message and a receipt for your transaction.

-

-

Benefits of using 1o5 version via salary*

-

Using 1o5 version via salary* has many benefits for you and your business. Here are some of them:

- -

As you can see, using 1o5 version via salary* can help you save money and time, improve your cash flow, and streamline your operations.

-

FAQs about 1o5 version via salary*

-

You may have some questions about 1o5 version via salary*. Here are some of the most common ones:

-

What is salary*?

-

Salary* is a service that allows you to pay your salary and suppliers online with Sage. You can choose from various payment methods, such as bank transfer, debit card, credit card, PayPal, Google Pay, or WhatsApp. You can also access real-time reports, analytics, and insights on your payments.

-

Is 1o5 version safe to use?

-

Yes, 1o5 version is safe to use. Google Pay and WhatsApp use advanced encryption and security features to protect your personal and financial information. They also comply with the Payment Card Industry Data Security Standard (PCI DSS) and the General Data Protection Regulation (GDPR). Sage also uses secure servers and encryption to safeguard your data.

-

How can I track my payments with 1o5 version?

-

You can track your payments with 1o5 version by logging in to your Sage account and going to the "Salary and Supplier Payments" section. There you can see the status, date, amount, recipient, and reference of each payment. You can also download or print receipts for your records.

-

What if I have a problem with my payment?

-

If you have a problem with your payment, you can contact the customer support team of Google Pay or WhatsApp, depending on which app you used. They will help you resolve the issue as soon as possible. You can also contact Sage support if you need assistance with your Sage account or software.

-

How can I get more information about 1o5 version?

-

If you want to get more information about 1o5 version, you can visit the official websites of Google Pay or WhatsApp, or read their FAQs. You can also visit the Sage website or read their blog for more tips and insights on how to use salary* effectively.

-

Conclusion

-

In conclusion, 1o5 version is a new way to pay your salary and suppliers online with your phone using Google Pay or WhatsApp. It is easy to download, simple to open via salary*, and beneficial for your business. It can help you save money and time, improve your cash flow, and streamline your operations.

-

Why not give it a try today? Download 1o5 version on your device and open it via salary*. You will be amazed by how convenient and rewarding it is to make payments with your phone.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/CarX Street v0.8.6 Mod Apk The Ultimate Street Racing Game with Unlimited Cash.md b/spaces/1phancelerku/anime-remove-background/CarX Street v0.8.6 Mod Apk The Ultimate Street Racing Game with Unlimited Cash.md deleted file mode 100644 index 69bcc1404650c91e4143260c1ea5a1f71a6c9820..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/CarX Street v0.8.6 Mod Apk The Ultimate Street Racing Game with Unlimited Cash.md +++ /dev/null @@ -1,107 +0,0 @@ - -

CarX Street Mod APK 0.8.6 Download: A Guide for Android Users

-

If you are a fan of racing games, you might have heard of CarX Street, a dynamic open world game that lets you become a street racer the way you want. In this game, you can customize your car, challenge other racers, and explore the city of Sunset City. But what if you want to enjoy the game with more features and unlimited resources? That's where CarX Street Mod APK 0.8.6 comes in handy.

-

carx street mod apk 0.8.6 download


Download Ziphttps://jinyurl.com/2uNRpo



-

In this article, we will tell you what CarX Street is, how to download and install CarX Street Mod APK 0.8.6 on your Android device, why you should play it, and some tips and tricks to help you become a legend of the streets.

-

What is CarX Street?

-

CarX Street is a racing game developed by CarX Technologies, the creators of CarX Drift Racing and CarX Highway Racing. It was released in February 2022 for Android and iOS devices, and has received positive reviews from players and critics alike.

-

CarX Street is different from other racing games because it gives you more freedom and control over your car and your racing style. You can choose from over 50 cars, each with its own characteristics and customization options. You can also tune your car's performance, appearance, and sound to suit your preferences.

-

carx street mod apk latest version download
-carx street mod apk unlimited money and gold
-carx street mod apk free download for android
-carx street mod apk obb download
-carx street mod apk 0.8.6 no root
-carx street mod apk offline download
-carx street mod apk 0.8.6 hack
-carx street mod apk revdl
-carx street mod apk 0.8.6 unlocked all cars
-carx street mod apk rexdl
-carx street mod apk 0.8.6 android 1
-carx street mod apk 0.8.6 update
-carx street mod apk 0.8.6 gameplay
-carx street mod apk 0.8.6 cheats
-carx street mod apk 0.8.6 new features
-carx street mod apk 0.8.6 download link
-carx street mod apk 0.8.6 mediafire
-carx street mod apk 0.8.6 mega
-carx street mod apk 0.8.6 google drive
-carx street mod apk 0.8.6 zippyshare
-carx street mod apk 0.8.6 highly compressed
-carx street mod apk 0.8.6 full version
-carx street mod apk 0.8.6 premium
-carx street mod apk 0.8.6 pro
-carx street mod apk 0.8.6 cracked
-carx street mod apk 0.8.6 patched
-carx street mod apk 0.8.6 vip
-carx street mod apk 0.8.6 original
-carx street mod apk 0.8.6 official
-carx street mod apk 0.8.6 safe
-carx street mod apk 0.8.6 virus free
-carx street mod apk 0.8.6 without ads
-carx street mod apk 0.8.6 without verification
-carx street mod apk 0.8.6 without survey
-carx street mod apk 0.8.6 direct download
-carx street mod apk 0.8.6 fast download
-carx street mod apk 0.8.6 easy download
-carx street mod apk 0 offline installer download

-

But CarX Street is not just about racing. It's also about exploring the vast and vibrant city of Sunset City, where you can find various events, challenges, and secrets. You can also interact with other racers, join clubs, or create your own club and invite your friends.

-

Features of CarX Street

-

Some of the features that make CarX Street an amazing game are:

- -

How to download and install CarX Street Mod APK 0.8.6 on Android?

-

If you want to play CarX Street with more features and unlimited resources, you can download and install CarX Street Mod APK 0.8.6 on your Android device. Here are the steps to do so:

-
    -
  1. Download the CarX Street Mod APK 0.8.6 file from a trusted source, such as [PlayMods].
  2. -
  3. Go to your device's settings and enable the installation of apps from unknown sources.
  4. -
  5. Locate the downloaded file in your device's storage and tap on it to install it.
  6. -
  7. Wait for the installation process to finish and launch the game.
  8. -
  9. Enjoy playing CarX Street Mod APK 0.8.6 with unlimited money, gold, diamonds, fuel, and more.
  10. -
-

Why should you play CarX Street Mod APK 0.8.6?

-


CarX Street Mod APK 0.8.6 is not just a regular racing game. It's a game that offers you more fun, excitement, and customization than ever before. Here are some of the benefits of playing CarX Street Mod APK 0.8.6:

-

Benefits of playing CarX Street Mod APK 0.8.6

- -

Tips and tricks for playing CarX Street Mod APK 0.8.6

-

If you want to become a legend of the streets, you need to master the skills and strategies of racing in CarX Street Mod APK 0.8.6. Here are some tips and tricks to help you out:

- -

Conclusion

-

CarX Street Mod APK 0.8.6 is a game that will make you feel the thrill of street racing like never before. You can customize your car, challenge other racers, and explore the city of Sunset City with unlimited resources and features. If you are looking for a racing game that is realistic, dynamic, and fun, you should download and install CarX Street Mod APK 0.8.6 on your Android device today.

-

FAQs

-

Here are some frequently asked questions about CarX Street Mod APK 0.8.6:

-
    -
  1. Is CarX Street Mod APK 0.8.6 safe to download and install?
  2. -


    Yes, CarX Street Mod APK 0.8.6 is safe to download and install as long as you get it from a trusted source like [PlayMods]. However, you should be aware that modded apps may not be compatible with the official version of the game or the latest updates. You should also backup your data before installing the modded app, in case something goes wrong.

    -
  3. What are the requirements to play CarX Street Mod APK 0.8.6?
  4. -

    To play CarX Street Mod APK 0.8.6, you need an Android device that has at least 4 GB of RAM, 2 GB of free storage space, and Android 5.0 or higher. You also need a stable internet connection to play online or offline.

    -
  5. Can I play CarX Street Mod APK 0.8.6 with my friends?
  6. -

    Yes, you can play CarX Street Mod APK 0.8.6 with your friends, either online or offline. You can join or create a club and invite your friends to join you. You can also chat with them, send them gifts, and challenge them to races.

    -
  7. How can I get more money, gold, diamonds, and other resources in CarX Street Mod APK 0.8.6?
  8. -

    You don't need to worry about getting more resources in CarX Street Mod APK 0.8.6, because you will have unlimited amounts of them from the start. You can use them to buy new cars, parts, upgrades, and more.

    -
  9. Where can I find more information about CarX Street Mod APK 0.8.6?
  10. -

    If you want to learn more about CarX Street Mod APK 0.8.6, you can visit the official website of CarX Technologies, the developer of the game. You can also follow their social media accounts on Facebook, Twitter, Instagram, and YouTube for the latest news and updates.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE on PC Mac with BlueStacks Emulator.md b/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE on PC Mac with BlueStacks Emulator.md deleted file mode 100644 index c1059103126340d684802c540b7dd446d95f3905..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE on PC Mac with BlueStacks Emulator.md +++ /dev/null @@ -1,205 +0,0 @@ -
-

How to Download PUBG Mobile Emulator for PC

-

If you are a fan of PUBG Mobile, the popular battle royale game for mobile devices, you might be wondering how you can play it on your PC. After all, playing on a bigger screen with better graphics and controls can enhance your gaming experience and give you an edge over your opponents. Fortunately, there is a way to do that: by using a PUBG Mobile emulator.

-

download pubg mobile emulator for pc


DOWNLOADhttps://jinyurl.com/2uNJjm



-

A PUBG Mobile emulator is a software application that allows you to run PUBG Mobile on your PC. It simulates the Android environment and lets you access the Google Play Store and download the game. However, not all emulators are created equal. Some are faster, smoother, and more compatible than others. So, how do you choose the best PUBG Mobile emulator for your PC? And how do you download and install it? In this article, we will answer these questions and more. We will also review some of the best emulators available in the market and compare their features and performance.

-

What is PUBG Mobile Emulator?

-

A PUBG Mobile emulator is a program that allows you to play PUBG Mobile on your PC. It works by creating a virtual Android device on your computer, where you can install and run apps from the Google Play Store. An emulator acts as a bridge between your PC and your mobile game, enabling you to enjoy the best of both worlds.

-

There are many reasons why you might want to use a PUBG Mobile emulator. For instance, you might have a low-end or old smartphone that cannot run the game smoothly or at all. Or, you might prefer playing on a larger screen with higher resolution and frame rate. Or, you might want to use a keyboard and mouse instead of touch controls for more accuracy and responsiveness. Whatever your reason, a PUBG Mobile emulator can help you achieve it.

-

Why Use PUBG Mobile Emulator?

-

Using a PUBG Mobile emulator has many advantages over playing on your smartphone. Here are some of them:

-

How to download pubg mobile on pc with memu
-Download and play pubg mobile on pc with bluestacks
-Pubg mobile pc download gameloop emulator
-Best pubg mobile emulator for pc free download
-Pubg mobile lite pc download without emulator
-Download pubg mobile kr version on pc emulator
-Pubg mobile emulator for pc windows 10 download
-Download pubg mobile global version on pc emulator
-Pubg mobile emulator for pc low end download
-Download pubg mobile vn version on pc emulator
-Pubg mobile emulator for pc 32 bit download
-Download pubg mobile tw version on pc emulator
-Pubg mobile emulator for pc offline installer download
-Download pubg mobile korean version on pc emulator
-Pubg mobile emulator for pc tencent gaming buddy download
-Download pubg mobile beta version on pc emulator
-Pubg mobile emulator for pc nox player download
-Download pubg mobile india version on pc emulator
-Pubg mobile emulator for pc ld player download
-Download pubg mobile new state on pc emulator
-Pubg mobile emulator for pc smart gaga download
-Download pubg mobile 1.5 update on pc emulator
-Pubg mobile emulator for pc phoenix os download
-Download pubg mobile 1.4 update on pc emulator
-Pubg mobile emulator for pc prime os download
-Download pubg mobile 1.3 update on pc emulator
-Pubg mobile emulator for pc remix os download
-Download pubg mobile 1.2 update on pc emulator
-Pubg mobile emulator for pc windows 7 download
-Download pubg mobile 1.1 update on pc emulator
-Pubg mobile emulator for mac download free
-Download pubg mobile 1.0 update on pc emulator
-Pubg mobile hack emulator for pc download free
-Download pubg mobile season 19 on pc emulator
-Pubg mobile cheat engine for pc emulator download free
-Download pubg mobile season 18 on pc emulator
-Pubg mobile esp hack for pc emulator download free
-Download pubg mobile season 17 on pc emulator
-Pubg mobile aimbot hack for pc emulator download free
-Download pubg mobile season 16 on pc emulator
-Pubg mobile wallhack for pc emulator download free
-Download pubg mobile season 15 on pc emulator
-Pubg mobile mod apk for pc emulator download free
-Download pubg mobile season 14 on pc emulator
-Pubg mobile uc generator for pc emulator download free
-Download pubg mobile season 13 on pc emulator
-Pubg mobile redeem code generator for pc emulator download free

- -

As you can see, using a PUBG Mobile emulator can enhance your gaming experience and make it more fun and enjoyable. However, not all emulators are the same. Some are better than others in terms of compatibility, performance, stability, and features. Therefore, you need to choose the best PUBG Mobile emulator for your PC.

-

How to Choose the Best PUBG Mobile Emulator?

-

There are many factors and criteria that you need to consider when choosing the best PUBG Mobile emulator for your PC. Here are some of them:

- -

Based on these criteria, we have selected the best PUBG Mobile emulator for your PC: GameLoop.

-

The Best PUBG Mobile Emulator: GameLoop

-

GameLoop is the official emulator for PUBG Mobile developed by Tencent Games, the same company that created the game. It is designed specifically for PUBG Mobile and optimized for its performance and features. It is also one of the most popular and widely used emulators for PUBG Mobile in the world.

-

GameLoop has many advantages over other emulators for PUBG Mobile. Here are some of them:

-

How to Download and Install GameLoop Emulator?

-

Downloading and installing GameLoop emulator is very easy and straightforward. Here are the steps that you need to follow:

-
    -
  1. Go to the official website of GameLoop at https://gameloop.fun/.
  2. -
  3. Click on the "Download" button on the homepage to download the installer file.
  4. -
  5. Run the installer file and follow the instructions on the screen to install GameLoop on your PC.
  6. -
  7. Launch GameLoop from your desktop or start menu.
  8. -
  9. On the Game Center tab, search for PUBG Mobile or browse through the categories to find it.
  10. -
  11. Click on the "Install" button to download and install PUBG Mobile on GameLoop.
  12. -
  13. Once the installation is complete, click on the "Play" button to launch PUBG Mobile on GameLoop.
  14. -
-

How to Play PUBG Mobile on GameLoop Emulator?

-

Playing PUBG Mobile on GameLoop emulator is very similar to playing it on your smartphone. However, there are some tips and tricks that you can use to optimize your gameplay and make it more enjoyable. Here are some of them:

- -

What are the Features of GameLoop Emulator?

-

GameLoop emulator has many features that make it one of the best emulators for PUBG Mobile. Here are some of them:

- -

As you can see, GameLoop is a powerful and versatile emulator that can provide you with the best PUBG Mobile experience on your PC. However, if you want to try other emulators, there are some alternatives that you can consider.

-

Other PUBG Mobile Emulators to Consider

-

GameLoop is not the only emulator that can run PUBG Mobile on your PC. There are other emulators that have their own strengths and weaknesses. Here are some of them:

-

BlueStacks Emulator

-

BlueStacks is one of the oldest and most popular emulators for Android games and apps. It has a large user base and a wide range of games and apps that it supports. It also has many features and options that make it user-friendly and customizable.

-

However, BlueStacks is not very optimized for PUBG Mobile. It can run the game, but not as smoothly or as fast as GameLoop. It also has higher CPU and memory usage and lower graphics quality. Moreover, BlueStacks is not officially supported by Tencent Games or PUBG Corporation, which means that it might have compatibility or security issues in the future.

-

Tencent Gaming Buddy (AKA Gameloop) Emulator

-

Tencent Gaming Buddy is the predecessor of GameLoop. It is the original emulator for PUBG Mobile developed by Tencent Games. It is still available for download and use, but it is no longer updated or maintained by Tencent Games.

-

Tencent Gaming Buddy is similar to GameLoop in many aspects, such as compatibility, performance, stability, and features. However, it is not as advanced or as refined as GameLoop. It also does not support the latest version of PUBG Mobile or its new modes and features. Therefore, it is recommended to use GameLoop instead of Tencent Gaming Buddy for PUBG Mobile.

-

Comparison Table of PUBG Mobile Emulators

-

To help you compare and choose the best PUBG Mobile emulator for your PC, here is a table that summarizes and compares the main features and performance of GameLoop, BlueStacks, and Tencent Gaming Buddy:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -td>Medium - -
EmulatorCompatibilityPerformanceStabilityFeaturesUser-friendlinessReputation
GameLoopHighHighHighHighHighHigh
BlueStacksMediumMediumMediumMediumMediumHigh
Tencent Gaming Buddy (AKA Gameloop)MediumMediumMediumMediumMedium
-

Conclusion

-

PUBG Mobile is one of the most popular and exciting mobile games in the world. It offers a thrilling and immersive battle royale experience that you can enjoy with your friends or solo. However, playing on a mobile device might not be the best way to experience PUBG Mobile. You might face issues such as low graphics quality, small screen size, poor controls, battery drain, overheating, etc.

-

That is why using a PUBG Mobile emulator can be a great solution. A PUBG Mobile emulator allows you to play PUBG Mobile on your PC, which can improve your gameplay and convenience. You can enjoy better graphics and performance, bigger screen, better controls, more features and options, and more.

-

However, not all PUBG Mobile emulators are the same. Some are better than others in terms of compatibility, performance, stability, features, user-friendliness, and reputation. Therefore, you need to choose the best PUBG Mobile emulator for your PC.

-

In this article, we have reviewed and compared some of the best PUBG Mobile emulators available in the market. We have also provided a step-by-step guide on how to download and install GameLoop emulator, which is the official and best emulator for PUBG Mobile. We have also given some tips and tricks on how to play PUBG Mobile on GameLoop emulator and optimize your gameplay.

-

We hope that this article has helped you learn how to download PUBG Mobile emulator for PC and enjoy PUBG Mobile on a bigger and better platform. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!

-

FAQs

-

Here are some frequently asked questions and answers about PUBG Mobile emulator for PC:

- -

References

-

Here are some sources and links that we used in this article:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/AI-Dashboards/ScrabbleSolverWordThesaurus/backupapp.py b/spaces/AI-Dashboards/ScrabbleSolverWordThesaurus/backupapp.py deleted file mode 100644 index 9be9e6f4a8e6023a73d313d5e08121cf55792ea3..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/ScrabbleSolverWordThesaurus/backupapp.py +++ /dev/null @@ -1,35 +0,0 @@ -import streamlit as st -import itertools -from nltk.corpus import wordnet - -def get_synonyms(word): - synonyms = set() - for syn in wordnet.synsets(word): - for lemma in syn.lemmas(): - synonyms.add(lemma.name()) - return list(synonyms) - -def generate_words(letters, length=None): - permutations = set() - for i in range(1, len(letters) + 1): - for p in itertools.permutations(letters, i): - word = "".join(p) - if length is None or len(word) == length: - permutations.add(word) - return permutations - -st.title("Scrabble Helper") - -letters = st.text_input("Enter the letters you have:") -word_length = st.number_input("Enter the word length (optional):", min_value=0, value=0, step=1) - -if letters: - st.header("Generated Words") - words = generate_words(letters, length=word_length if word_length > 0 else None) - st.write(words) - - st.header("Thesaurus Lookup") - selected_word = st.selectbox("Select a word to look up synonyms:", [""] + sorted(words)) - if selected_word: - synonyms = get_synonyms(selected_word) - st.write(synonyms) diff --git a/spaces/AP123/ai-avatars/train_dreambooth.py b/spaces/AP123/ai-avatars/train_dreambooth.py deleted file mode 100644 index a496382fbc895961b9902c33a9d5cc926d4fcc8d..0000000000000000000000000000000000000000 --- a/spaces/AP123/ai-avatars/train_dreambooth.py +++ /dev/null @@ -1,881 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same 
class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" " +instance_prompt+" ") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB 
GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. 
- vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" " +" Freezing the text_encoder ..."+" ") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/Abdullah-Habib/Rabbit_or_Hare/README.md b/spaces/Abdullah-Habib/Rabbit_or_Hare/README.md deleted file mode 100644 index 483dd3fa19f13b6e9ca48c2cb20d8a1ee70b4aa2..0000000000000000000000000000000000000000 --- a/spaces/Abdullah-Habib/Rabbit_or_Hare/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rabbit Or Hare -emoji: 📊 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/reflection.py b/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/reflection.py deleted file mode 100644 index bbbcf9f109cc43b1392f15bf065f4450b8fa8501..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/reflection.py +++ /dev/null @@ -1,227 +0,0 @@ -from __future__ import annotations - -""" -An agent based upon Observation-Planning-Reflection architecture. -""" - -from logging import getLogger - -from abc import abstractmethod -from typing import List, Set, Union, NamedTuple, TYPE_CHECKING - -from pydantic import BaseModel, Field, validator - -from agentverse.llms import BaseLLM -from agentverse.memory import BaseMemory, ChatHistoryMemory -from agentverse.message import Message -from agentverse.output_parser import OutputParser - -from agentverse.message import Message -from agentverse.agents.base import BaseAgent - -from datetime import datetime as dt -import datetime - -#from . 
import agent_registry -from string import Template - -from agentverse.agents import agent_registry -from agentverse.agents.base import BaseAgent - -logger = getLogger(__file__) - -if TYPE_CHECKING: - from agentverse.environments.base import BaseEnvironment - - -@agent_registry.register("reflection") -class ReflectionAgent(BaseAgent): - async_mode: bool = (True,) - current_time: str = (None,) - environment: BaseEnvironment = None - step_cnt: int = 0 - - manipulated_memory: str = Field( - default="", description="one fragment used in prompt construction" - ) - - @validator("current_time") - def convert_str_to_dt(cls, current_time): - if not isinstance(current_time, str): - raise ValueError("current_time should be str") - return dt.strptime(current_time, "%Y-%m-%d %H:%M:%S") - - def step(self, current_time: dt, env_description: str = "") -> Message: - """ - Call this method at each time frame - """ - self.current_time = current_time - - self.manipulated_memory = self.memory_manipulator.manipulate_memory() - - prompt = self._fill_prompt_template(env_description) - - parsed_response, reaction, target = None, None, None - for i in range(self.max_retry): - try: - response = self.llm.agenerate_response(prompt) - parsed_response = self.output_parser.parse(response) - - if "say(" in parsed_response.return_values["output"]: - reaction, target = eval( - "self._" + parsed_response.return_values["output"].strip() - ) - elif "act(" in parsed_response.return_values["output"]: - reaction, target = eval( - "self._" + parsed_response.return_values["output"].strip() - ) - elif "do_nothing(" in parsed_response.return_values["output"]: - reaction, target = None, None - else: - raise Exception( - f"no valid parsed_response detected, " - f"cur response {parsed_response.return_values['output']}" - ) - break - - except Exception as e: - logger.error(e) - logger.warn("Retrying...") - continue - - if parsed_response is None: - logger.error(f"{self.name} failed to generate valid response.") - - if reaction is None: - reaction = "Keep doing last action ..." 
- - message = Message( - content="" if reaction is None else reaction, - sender=self.name, - receiver=self.get_receiver() - if target is None - else self.get_valid_receiver(target), - ) - - self.step_cnt += 1 - - return message - - async def astep(self, current_time: dt, env_description: str = "") -> Message: - """Asynchronous version of step""" - # use environment's time to update agent's time - self.current_time = current_time - # Before the agent step, we check current status, - # TODO add this func after - # self.check_status_passive() - - self.manipulated_memory = self.memory_manipulator.manipulate_memory() - - prompt = self._fill_prompt_template(env_description) - - parsed_response, reaction, target = None, None, None - for i in range(self.max_retry): - try: - response = await self.llm.agenerate_response(prompt) - parsed_response = self.output_parser.parse(response) - - if "say(" in parsed_response.return_values["output"]: - reaction, target = eval( - "self._" + parsed_response.return_values["output"].strip() - ) - elif "act(" in parsed_response.return_values["output"]: - reaction, target = eval( - "self._" + parsed_response.return_values["output"].strip() - ) - elif "do_nothing(" in parsed_response.return_values["output"]: - reaction, target = None, None - else: - raise Exception( - f"no valid parsed_response detected, " - f"cur response {parsed_response.return_values['output']}" - ) - - break - - except Exception as e: - logger.error(e) - logger.warn("Retrying...") - continue - - if parsed_response is None: - logger.error(f"{self.name} failed to generate valid response.") - - if reaction is None: - reaction = "Keep doing last action ..." - - message = Message( - content="" if reaction is None else reaction, - sender=self.name, - receiver=self.get_receiver() - if target is None - else self.get_valid_receiver(target), - ) - - self.step_cnt += 1 - - return message - - def _act(self, description=None, target=None): - if description is None: - return "" - if target is None: - reaction_content = f"{self.name} performs action: '{description}'." - else: - reaction_content = ( - f"{self.name} performs action to {target}: '{description}'." - ) - # self.environment.broadcast_observations(self, target, reaction_content) - return reaction_content, target - - def _say(self, description, target=None): - if description is None: - return "" - if target is None: - reaction_content = f"{self.name} says: '{description}'." - else: - reaction_content = f"{self.name} says to {target}: '{description}'." 
- # self.environment.broadcast_observations(self, target, reaction_content) - return reaction_content, target - - def get_valid_receiver(self, target: str) -> set(): - all_agents_name = [] - for agent in self.environment.agents: - all_agents_name.append(agent.name) - - if not (target in all_agents_name): - return {"all"} - else: - return {target} - - def _fill_prompt_template(self, env_description: str = "") -> str: - """Fill the placeholders in the prompt template - - In the conversation agent, three placeholders are supported: - - ${agent_name}: the name of the agent - - ${env_description}: the description of the environment - - ${role_description}: the description of the role of the agent - - ${chat_history}: the chat history of the agent - """ - input_arguments = { - "agent_name": self.name, - "role_description": self.role_description, - "chat_history": self.memory.to_string(add_sender_prefix=True), - "current_time": self.current_time, - "env_description": env_description, - } - return Template(self.prompt_template).safe_substitute(input_arguments) - - def add_message_to_memory(self, messages: List[Message]) -> None: - self.memory.add_message(messages) - - def reset(self, environment: BaseEnvironment) -> None: - """Reset the agent""" - self.environment = environment - self.memory.reset() - self.memory_manipulator.agent = self - self.memory_manipulator.memory = self.memory diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Rings.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Rings.js deleted file mode 100644 index 5a0707cf2e8306f37395a60b9d1cf719ad4d9c50..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Rings.js +++ /dev/null @@ -1,38 +0,0 @@ -import Base from '../base/Base.js'; -import { Circle } from '../utils/Geoms.js' -import Yoyo from '../utils/Yoyo.js'; - - -class Rings extends Base { - constructor(scene, config) { - super(scene, config); - this.type = 'rexSpinnerRings'; - } - - buildShapes() { - for (var i = 0; i < 2; i++) { - this.addShape(new Circle()); - } - } - - updateShapes() { - var centerX = this.centerX; - var centerY = this.centerY; - var radius = this.radius; - var lineWidth = Math.ceil(radius / 25); - var maxRingRadius = radius - lineWidth; - - var shapes = this.getShapes(); - for (var i = 0, cnt = shapes.length; i < cnt; i++) { - var ring = shapes[i]; - var t = (this.value + (i / cnt)) % 1; - var alpha = Yoyo(t); - ring - .lineStyle(lineWidth, this.color, alpha) - .setRadius(t * maxRingRadius) - .setCenterPosition(centerX, centerY) - } - } -} - -export default Rings; \ No newline at end of file diff --git a/spaces/Aloento/9Nine-PITS/README.md b/spaces/Aloento/9Nine-PITS/README.md deleted file mode 100644 index f1784fce45700b8c52355760cdc553cb74e741ef..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 9Nine PITS -emoji: 🚀 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: agpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git "a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" 
"b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" deleted file mode 100644 index 2f4201438c4d8597c251726fe99c02d40f0cadf0..0000000000000000000000000000000000000000 --- "a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" +++ /dev/null @@ -1,166 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -import re -import unicodedata -fast_debug = False -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -def is_paragraph_break(match): - """ - 根据给定的匹配结果来判断换行符是否表示段落分隔。 - 如果换行符前为句子结束标志(句号,感叹号,问号),且下一个字符为大写字母,则换行符更有可能表示段落分隔。 - 也可以根据之前的内容长度来判断段落是否已经足够长。 - """ - prev_char, next_char = match.groups() - - # 句子结束标志 - sentence_endings = ".!?" - - # 设定一个最小段落长度阈值 - min_paragraph_length = 140 - - if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length: - return "\n\n" - else: - return " " - -def normalize_text(text): - """ - 通过把连字(ligatures)等文本特殊符号转换为其基本形式来对文本进行归一化处理。 - 例如,将连字 "fi" 转换为 "f" 和 "i"。 - """ - # 对文本进行归一化处理,分解连字 - normalized_text = unicodedata.normalize("NFKD", text) - - # 替换其他特殊字符 - cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text) - - return cleaned_text - -def clean_text(raw_text): - """ - 对从 PDF 提取出的原始文本进行清洗和格式化处理。 - 1. 对原始文本进行归一化处理。 - 2. 替换跨行的连词,例如 “Espe-\ncially” 转换为 “Especially”。 - 3. 根据 heuristic 规则判断换行符是否是段落分隔,并相应地进行替换。 - """ - # 对文本进行归一化处理 - normalized_text = normalize_text(raw_text) - - # 替换跨行的连词 - text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text) - - # 根据前后相邻字符的特点,找到原文本中的换行符 - newlines = re.compile(r'(\S)\n(\S)') - - # 根据 heuristic 规则,用空格或段落分隔符替换原换行符 - final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text) - - return final_text.strip() - -def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os, fitz - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with fitz.open(fp) as doc: - file_content = "" - for page in doc: - file_content += page.get_text() - file_content = clean_text(file_content) - print(file_content) - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) 
# 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - -@CatchException -def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" "b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" deleted file mode 100644 index dc92256dcb998294a27b13ed07c34d38d18b329a..0000000000000000000000000000000000000000 --- "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" +++ /dev/null @@ -1,70 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down -fast_debug = False - - -def 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - print('[1] yield chatbot, history') - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' 
- # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时 - - print('[2] end gpt req') - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - print('[3] yield chatbot, history') - yield chatbot, history, msg - print('[4] next') - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, msg - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, msg - - - -@CatchException -def 读文章写摘要(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.md deleted file mode 100644 index 6dee2d382e3b4c9e11dcfdba148cdf23fceeb336..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.md +++ /dev/null @@ -1,33 +0,0 @@ - - -# Stochastic Karras VE - -[Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) is by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. This pipeline implements the stochastic sampling tailored to variance expanding (VE) models. - -The abstract from the paper: - -*We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. 
To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.* - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## KarrasVePipeline -[[autodoc]] KarrasVePipeline - - all - - __call__ - -## ImagePipelineOutput -[[autodoc]] pipelines.ImagePipelineOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/open_vino.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/open_vino.md deleted file mode 100644 index b944c24859f70896cf2830e5b990dc8d609bec34..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/open_vino.md +++ /dev/null @@ -1,108 +0,0 @@ - - - -# How to use OpenVINO for inference - -🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors ([see](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) the full list of supported devices). - -## Installation - -Install 🤗 Optimum Intel with the following command: - -``` -pip install --upgrade-strategy eager optimum["openvino"] -``` - -The `--upgrade-strategy eager` option is needed to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is upgraded to its latest version. - - -## Stable Diffusion - -### Inference - -To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`. - -```python -from optimum.intel import OVStableDiffusionPipeline - -model_id = "runwayml/stable-diffusion-v1-5" -pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) -prompt = "sailing ship in storm by Rembrandt" -image = pipeline(prompt).images[0] - -# Don't forget to save the exported model -pipeline.save_pretrained("openvino-sd-v1-5") -``` - -To further speed up inference, the model can be statically reshaped : - -```python -# Define the shapes related to the inputs and desired outputs -batch_size, num_images, height, width = 1, 1, 512, 512 - -# Statically reshape the model -pipeline.reshape(batch_size, height, width, num_images) -# Compile the model before inference -pipeline.compile() - -image = pipeline( - prompt, - height=height, - width=width, - num_images_per_prompt=num_images, -).images[0] -``` - -In case you want to change any parameters such as the outputs height or width, you’ll need to statically reshape your model once again. - -
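If the output height or width changes, the reshape and compile steps shown above simply need to be repeated with the new dimensions before running inference. The following is a minimal sketch (not part of the original guide) that reuses only the calls already shown in this document; the model id and the 768x768 target resolution are illustrative values:

```python
from optimum.intel import OVStableDiffusionPipeline

# Illustrative model id; any OpenVINO-exportable Stable Diffusion checkpoint works the same way
pipeline = OVStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)

# New static shape: reshape and recompile before generating at the new resolution
batch_size, num_images, height, width = 1, 1, 768, 768
pipeline.reshape(batch_size, height, width, num_images)
pipeline.compile()

image = pipeline(
    "sailing ship in storm by Rembrandt",
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```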
- - -### Supported tasks - -| Task | Loading Class | -|--------------------------------------|--------------------------------------| -| `text-to-image` | `OVStableDiffusionPipeline` | -| `image-to-image` | `OVStableDiffusionImg2ImgPipeline` | -| `inpaint` | `OVStableDiffusionInpaintPipeline` | - -You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion). - - -## Stable Diffusion XL - -### Inference - -```python -from optimum.intel import OVStableDiffusionXLPipeline - -model_id = "stabilityai/stable-diffusion-xl-base-1.0" -pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True) -prompt = "sailing ship in storm by Rembrandt" -image = pipeline(prompt).images[0] -``` - -To further speed up inference, the model can be statically reshaped as showed above. -You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl). - -### Supported tasks - -| Task | Loading Class | -|--------------------------------------|--------------------------------------| -| `text-to-image` | `OVStableDiffusionXLPipeline` | -| `image-to-image` | `OVStableDiffusionXLImg2ImgPipeline` | - - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py deleted file mode 100644 index 436c240e1ac8e450d0cf30949c539c82860b58d1..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py +++ /dev/null @@ -1,295 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image - -from diffusers import ( - DDIMScheduler, - KandinskyV22InpaintPipeline, - KandinskyV22PriorPipeline, - UNet2DConditionModel, - VQModel, -) -from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - -from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference - - -enable_full_determinism() - - -class Dummies: - @property - def text_embedder_hidden_size(self): - return 32 - - @property - def time_input_dim(self): - return 32 - - @property - def block_out_channels_0(self): - return self.time_input_dim - - @property - def time_embed_dim(self): - return self.time_input_dim * 4 - - @property - def cross_attention_dim(self): - return 32 - - @property - def dummy_unet(self): - torch.manual_seed(0) - - model_kwargs = { - "in_channels": 9, - # Out channels is double in channels because predicts mean and variance - "out_channels": 8, - "addition_embed_type": "image", - "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"), - "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"), - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "layers_per_block": 1, - "encoder_hid_dim": self.text_embedder_hidden_size, - "encoder_hid_dim_type": "image_proj", - "cross_attention_dim": self.cross_attention_dim, - "attention_head_dim": 4, - "resnet_time_scale_shift": "scale_shift", - "class_embed_type": None, - } - - model = UNet2DConditionModel(**model_kwargs) - return model - - @property - def dummy_movq_kwargs(self): - return { - "block_out_channels": [32, 64], - "down_block_types": ["DownEncoderBlock2D", "AttnDownEncoderBlock2D"], - "in_channels": 3, - "latent_channels": 4, - "layers_per_block": 1, - "norm_num_groups": 8, - "norm_type": "spatial", - "num_vq_embeddings": 12, - "out_channels": 3, - "up_block_types": [ - "AttnUpDecoderBlock2D", - "UpDecoderBlock2D", - ], - "vq_embed_dim": 4, - } - - @property - def dummy_movq(self): - torch.manual_seed(0) - model = VQModel(**self.dummy_movq_kwargs) - return model - - def get_dummy_components(self): - unet = self.dummy_unet - movq = self.dummy_movq - - scheduler = DDIMScheduler( - num_train_timesteps=1000, - beta_schedule="linear", - beta_start=0.00085, - beta_end=0.012, - clip_sample=False, - set_alpha_to_one=False, - steps_offset=1, - prediction_type="epsilon", - thresholding=False, - ) - - components = { - "unet": unet, - "scheduler": scheduler, - "movq": movq, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed)).to(device) - negative_image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed + 1)).to( - device - ) - # create init_image - image = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device) - image = image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((256, 256)) - # create mask - mask = np.zeros((64, 64), dtype=np.float32) - mask[:32, :32] = 1 - - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "image": init_image, - "mask_image": mask, - "image_embeds": image_embeds, - 
"negative_image_embeds": negative_image_embeds, - "generator": generator, - "height": 64, - "width": 64, - "num_inference_steps": 2, - "guidance_scale": 4.0, - "output_type": "np", - } - return inputs - - -class KandinskyV22InpaintPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = KandinskyV22InpaintPipeline - params = ["image_embeds", "negative_image_embeds", "image", "mask_image"] - batch_params = [ - "image_embeds", - "negative_image_embeds", - "image", - "mask_image", - ] - required_optional_params = [ - "generator", - "height", - "width", - "latents", - "guidance_scale", - "num_inference_steps", - "return_dict", - "guidance_scale", - "num_images_per_prompt", - "output_type", - "return_dict", - ] - test_xformers_attention = False - - def get_dummy_components(self): - dummies = Dummies() - return dummies.get_dummy_components() - - def get_dummy_inputs(self, device, seed=0): - dummies = Dummies() - return dummies.get_dummy_inputs(device=device, seed=seed) - - def test_kandinsky_inpaint(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - output = pipe(**self.get_dummy_inputs(device)) - image = output.images - - image_from_tuple = pipe( - **self.get_dummy_inputs(device), - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - - expected_slice = np.array( - [0.50775903, 0.49527195, 0.48824543, 0.50192237, 0.48644906, 0.49373814, 0.4780598, 0.47234827, 0.48327848] - ) - - assert ( - np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - ), f" expected_slice {expected_slice}, but got {image_slice.flatten()}" - assert ( - np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - ), f" expected_slice {expected_slice}, but got {image_from_tuple_slice.flatten()}" - - def test_inference_batch_single_identical(self): - super().test_inference_batch_single_identical(expected_max_diff=3e-3) - - -@slow -@require_torch_gpu -class KandinskyV22InpaintPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_kandinsky_inpaint(self): - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/kandinskyv22/kandinskyv22_inpaint_cat_with_hat_fp16.npy" - ) - - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" - ) - mask = np.zeros((768, 768), dtype=np.float32) - mask[:250, 250:-250] = 1 - - prompt = "a hat" - - pipe_prior = KandinskyV22PriorPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ) - pipe_prior.to(torch_device) - - pipeline = KandinskyV22InpaintPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 - ) - pipeline = pipeline.to(torch_device) - pipeline.set_progress_bar_config(disable=None) - - generator = torch.Generator(device="cpu").manual_seed(0) - image_emb, zero_image_emb = pipe_prior( - prompt, - generator=generator, - num_inference_steps=5, - negative_prompt="", - ).to_tuple() - - output = pipeline( - image=init_image, - mask_image=mask, - image_embeds=image_emb, - negative_image_embeds=zero_image_emb, - generator=generator, - 
num_inference_steps=100, - height=768, - width=768, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (768, 768, 3) - - assert_mean_pixel_difference(image, expected_image) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py deleted file mode 100644 index 513e11c105d5fc728045642609768e915bac9d62..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py +++ /dev/null @@ -1,385 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - EulerAncestralDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - StableDiffusionInstructPix2PixPipeline, - UNet2DConditionModel, -) -from diffusers.image_processor import VaeImageProcessor -from diffusers.utils import floats_tensor, load_image, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - -from ..pipeline_params import ( - IMAGE_TO_IMAGE_IMAGE_PARAMS, - TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, - TEXT_GUIDED_IMAGE_VARIATION_PARAMS, -) -from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin - - -enable_full_determinism() - - -class StableDiffusionInstructPix2PixPipelineFastTests( - PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase -): - pipeline_class = StableDiffusionInstructPix2PixPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width", "cross_attention_kwargs"} - batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS - image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS - image_latents_params = IMAGE_TO_IMAGE_IMAGE_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=8, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = PNDMScheduler(skip_prk_steps=True) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - 
layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - image = image.cpu().permute(0, 2, 3, 1)[0] - image = Image.fromarray(np.uint8(image)).convert("RGB") - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": image, - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "image_guidance_scale": 1, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_default_case(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.7526, 0.3750, 0.4547, 0.6117, 0.5866, 0.5016, 0.4327, 0.5642, 0.4815]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - negative_prompt = "french fries" - output = sd_pipe(**inputs, negative_prompt=negative_prompt) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.7511, 0.3642, 0.4553, 0.6236, 0.5797, 0.5013, 0.4343, 0.5611, 0.4831]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_multiple_init_images(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - inputs["prompt"] = [inputs["prompt"]] * 2 - - image = np.array(inputs["image"]).astype(np.float32) / 255.0 - image = torch.from_numpy(image).unsqueeze(0).to(device) - image = image / 2 + 0.5 - image = image.permute(0, 3, 1, 2) - inputs["image"] = image.repeat(2, 1, 1, 1) - - image = sd_pipe(**inputs).images - image_slice = image[-1, -3:, -3:, -1] - - assert image.shape == (2, 32, 32, 3) - expected_slice = np.array([0.5812, 0.5748, 0.5222, 0.5908, 0.5695, 0.7174, 0.6804, 0.5523, 0.5579]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = 
self.get_dummy_components() - components["scheduler"] = EulerAncestralDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - slice = [round(x, 4) for x in image_slice.flatten().tolist()] - print(",".join([str(x) for x in slice])) - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.7417, 0.3842, 0.4732, 0.5776, 0.5891, 0.5139, 0.4052, 0.5673, 0.4986]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_inference_batch_single_identical(self): - super().test_inference_batch_single_identical(expected_max_diff=3e-3) - - # Overwrite the default test_latents_inputs because pix2pix encode the image differently - def test_latents_input(self): - components = self.get_dummy_components() - pipe = StableDiffusionInstructPix2PixPipeline(**components) - pipe.image_processor = VaeImageProcessor(do_resize=False, do_normalize=False) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - out = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pt"))[0] - - vae = components["vae"] - inputs = self.get_dummy_inputs_by_type(torch_device, input_image_type="pt") - - for image_param in self.image_latents_params: - if image_param in inputs.keys(): - inputs[image_param] = vae.encode(inputs[image_param]).latent_dist.mode() - - out_latents_inputs = pipe(**inputs)[0] - - max_diff = np.abs(out - out_latents_inputs).max() - self.assertLess(max_diff, 1e-4, "passing latents as image input generate different result from passing image") - - -@slow -@require_torch_gpu -class StableDiffusionInstructPix2PixPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, seed=0): - generator = torch.manual_seed(seed) - image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_pix2pix/example.jpg" - ) - inputs = { - "prompt": "turn him into a cyborg", - "image": image, - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "image_guidance_scale": 1.0, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_default(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.5902, 0.6015, 0.6027, 0.5983, 0.6092, 0.6061, 0.5765, 0.5785, 0.5555]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_k_lms(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - 
expected_slice = np.array([0.6578, 0.6817, 0.6972, 0.6761, 0.6856, 0.6916, 0.6428, 0.6516, 0.6301]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_ddim(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.3828, 0.3834, 0.3818, 0.3792, 0.3865, 0.3752, 0.3792, 0.3847, 0.3753]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([-0.2463, -0.4644, -0.9756, 1.5176, 1.4414, 0.7866, 0.9897, 0.8521, 0.7983]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([-0.2644, -0.4626, -0.9653, 1.5176, 1.4551, 0.7686, 0.9805, 0.8452, 0.8115]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == 3 - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing(1) - pipe.enable_sequential_cpu_offload() - - inputs = self.get_inputs() - _ = pipe(**inputs) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 2.2 GB is allocated - assert mem_bytes < 2.2 * 10**9 - - def test_stable_diffusion_pix2pix_pipeline_multiple_of_8(self): - inputs = self.get_inputs() - # resize to resolution that is divisible by 8 but not 16 or 32 - inputs["image"] = inputs["image"].resize((504, 504)) - - model_id = "timbrooks/instruct-pix2pix" - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - model_id, - safety_checker=None, - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - output = pipe(**inputs) - image = output.images[0] - - image_slice = image[255:258, 383:386, -1] - - assert image.shape == (504, 504, 3) - expected_slice = np.array([0.2726, 0.2529, 0.2664, 0.2655, 0.2641, 0.2642, 0.2591, 0.2649, 
0.2590]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3 diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/res_layer.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/res_layer.py deleted file mode 100644 index b5c343258b079a0dd832d4f999c18d002b06efac..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/res_layer.py +++ /dev/null @@ -1,77 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import constant_init, kaiming_init -from mmcv.runner import auto_fp16, load_checkpoint - -from mmdet.models.backbones import ResNet -from mmdet.models.builder import SHARED_HEADS -from mmdet.models.utils import ResLayer as _ResLayer -from mmdet.utils import get_root_logger - - -@SHARED_HEADS.register_module() -class ResLayer(nn.Module): - - def __init__(self, - depth, - stage=3, - stride=2, - dilation=1, - style='pytorch', - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - with_cp=False, - dcn=None): - super(ResLayer, self).__init__() - self.norm_eval = norm_eval - self.norm_cfg = norm_cfg - self.stage = stage - self.fp16_enabled = False - block, stage_blocks = ResNet.arch_settings[depth] - stage_block = stage_blocks[stage] - planes = 64 * 2**stage - inplanes = 64 * 2**(stage - 1) * block.expansion - - res_layer = _ResLayer( - block, - inplanes, - planes, - stage_block, - stride=stride, - dilation=dilation, - style=style, - with_cp=with_cp, - norm_cfg=self.norm_cfg, - dcn=dcn) - self.add_module(f'layer{stage + 1}', res_layer) - - def init_weights(self, pretrained=None): - """Initialize the weights in the module. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - @auto_fp16() - def forward(self, x): - res_layer = getattr(self, f'layer{self.stage + 1}') - out = res_layer(x) - return out - - def train(self, mode=True): - super(ResLayer, self).train(mode) - if self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/utils/profiling.py b/spaces/Andy1621/uniformer_image_detection/mmdet/utils/profiling.py deleted file mode 100644 index 4be9222c37e922329d537f883f5587995e27efc6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/utils/profiling.py +++ /dev/null @@ -1,39 +0,0 @@ -import contextlib -import sys -import time - -import torch - -if sys.version_info >= (3, 7): - - @contextlib.contextmanager - def profile_time(trace_name, - name, - enabled=True, - stream=None, - end_stream=None): - """Print time spent by CPU and GPU. - - Useful as a temporary context manager to find sweet spots of code - suitable for async implementation. 
- """ - if (not enabled) or not torch.cuda.is_available(): - yield - return - stream = stream if stream else torch.cuda.current_stream() - end_stream = end_stream if end_stream else stream - start = torch.cuda.Event(enable_timing=True) - end = torch.cuda.Event(enable_timing=True) - stream.record_event(start) - try: - cpu_start = time.monotonic() - yield - finally: - cpu_end = time.monotonic() - end_stream.record_event(end) - end.synchronize() - cpu_time = (cpu_end - cpu_start) * 1000 - gpu_time = start.elapsed_time(end) - msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms ' - msg += f'gpu_time {gpu_time:.2f} ms stream {stream}' - print(msg, end_stream) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/testing.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/testing.py deleted file mode 100644 index a27f936da8ec14bac18562ede0a79d476d82f797..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/testing.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Open-MMLab. -import sys -from collections.abc import Iterable -from runpy import run_path -from shlex import split -from typing import Any, Dict, List -from unittest.mock import patch - - -def check_python_script(cmd): - """Run the python cmd script with `__main__`. The difference between - `os.system` is that, this function exectues code in the current process, so - that it can be tracked by coverage tools. Currently it supports two forms: - - - ./tests/data/scripts/hello.py zz - - python tests/data/scripts/hello.py zz - """ - args = split(cmd) - if args[0] == 'python': - args = args[1:] - with patch.object(sys, 'argv', args): - run_path(args[0], run_name='__main__') - - -def _any(judge_result): - """Since built-in ``any`` works only when the element of iterable is not - iterable, implement the function.""" - if not isinstance(judge_result, Iterable): - return judge_result - - try: - for element in judge_result: - if _any(element): - return True - except TypeError: - # Maybe encounter the case: torch.tensor(True) | torch.tensor(False) - if judge_result: - return True - return False - - -def assert_dict_contains_subset(dict_obj: Dict[Any, Any], - expected_subset: Dict[Any, Any]) -> bool: - """Check if the dict_obj contains the expected_subset. - - Args: - dict_obj (Dict[Any, Any]): Dict object to be checked. - expected_subset (Dict[Any, Any]): Subset expected to be contained in - dict_obj. - - Returns: - bool: Whether the dict_obj contains the expected_subset. - """ - - for key, value in expected_subset.items(): - if key not in dict_obj.keys() or _any(dict_obj[key] != value): - return False - return True - - -def assert_attrs_equal(obj: Any, expected_attrs: Dict[str, Any]) -> bool: - """Check if attribute of class object is correct. - - Args: - obj (object): Class object to be checked. - expected_attrs (Dict[str, Any]): Dict of the expected attrs. - - Returns: - bool: Whether the attribute of class object is correct. - """ - for attr, value in expected_attrs.items(): - if not hasattr(obj, attr) or _any(getattr(obj, attr) != value): - return False - return True - - -def assert_dict_has_keys(obj: Dict[str, Any], - expected_keys: List[str]) -> bool: - """Check if the obj has all the expected_keys. - - Args: - obj (Dict[str, Any]): Object to be checked. - expected_keys (List[str]): Keys expected to contained in the keys of - the obj. - - Returns: - bool: Whether the obj has the expected keys. 
- """ - return set(expected_keys).issubset(set(obj.keys())) - - -def assert_keys_equal(result_keys: List[str], target_keys: List[str]) -> bool: - """Check if target_keys is equal to result_keys. - - Args: - result_keys (List[str]): Result keys to be checked. - target_keys (List[str]): Target keys to be checked. - - Returns: - bool: Whether target_keys is equal to result_keys. - """ - return set(result_keys) == set(target_keys) - - -def assert_is_norm_layer(module) -> bool: - """Check if the module is a norm layer. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the module is a norm layer. - """ - from .parrots_wrapper import _BatchNorm, _InstanceNorm - from torch.nn import GroupNorm, LayerNorm - norm_layer_candidates = (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm) - return isinstance(module, norm_layer_candidates) - - -def assert_params_all_zeros(module) -> bool: - """Check if the parameters of the module is all zeros. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: Whether the parameters of the module is all zeros. - """ - weight_data = module.weight.data - is_weight_zero = weight_data.allclose( - weight_data.new_zeros(weight_data.size())) - - if hasattr(module, 'bias') and module.bias is not None: - bias_data = module.bias.data - is_bias_zero = bias_data.allclose( - bias_data.new_zeros(bias_data.size())) - else: - is_bias_zero = True - - return is_weight_zero and is_bias_zero diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/modules/shared.py b/spaces/Anthony7906/MengHuiMXD_GPT/modules/shared.py deleted file mode 100644 index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/modules/shared.py +++ /dev/null @@ -1,55 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue - -class State: - interrupted = False - multi_api_key = False - completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host): - self.completion_url = f"https://{api_host}/v1/chat/completions" - self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"https://{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1" - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() diff --git a/spaces/Apk/anything-v3.0/utils.py b/spaces/Apk/anything-v3.0/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Apk/anything-v3.0/utils.py +++ /dev/null @@ -1,6 +0,0 @@ 
-def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_set.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_set.py deleted file mode 100644 index ec7a6e07a25acfa978030c65ae7c1d8609163249..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_set.py +++ /dev/null @@ -1,82 +0,0 @@ -import logging -from collections import OrderedDict -from typing import Dict, List - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.req.req_install import InstallRequirement - -logger = logging.getLogger(__name__) - - -class RequirementSet: - def __init__(self, check_supported_wheels: bool = True) -> None: - """Create a RequirementSet.""" - - self.requirements: Dict[str, InstallRequirement] = OrderedDict() - self.check_supported_wheels = check_supported_wheels - - self.unnamed_requirements: List[InstallRequirement] = [] - - def __str__(self) -> str: - requirements = sorted( - (req for req in self.requirements.values() if not req.comes_from), - key=lambda req: canonicalize_name(req.name or ""), - ) - return " ".join(str(req.req) for req in requirements) - - def __repr__(self) -> str: - requirements = sorted( - self.requirements.values(), - key=lambda req: canonicalize_name(req.name or ""), - ) - - format_string = "<{classname} object; {count} requirement(s): {reqs}>" - return format_string.format( - classname=self.__class__.__name__, - count=len(requirements), - reqs=", ".join(str(req.req) for req in requirements), - ) - - def add_unnamed_requirement(self, install_req: InstallRequirement) -> None: - assert not install_req.name - self.unnamed_requirements.append(install_req) - - def add_named_requirement(self, install_req: InstallRequirement) -> None: - assert install_req.name - - project_name = canonicalize_name(install_req.name) - self.requirements[project_name] = install_req - - def has_requirement(self, name: str) -> bool: - project_name = canonicalize_name(name) - - return ( - project_name in self.requirements - and not self.requirements[project_name].constraint - ) - - def get_requirement(self, name: str) -> InstallRequirement: - project_name = canonicalize_name(name) - - if project_name in self.requirements: - return self.requirements[project_name] - - raise KeyError(f"No project with the name {name!r}") - - @property - def all_requirements(self) -> List[InstallRequirement]: - return self.unnamed_requirements + list(self.requirements.values()) - - @property - def requirements_to_install(self) -> List[InstallRequirement]: - """Return the list of requirements that need to be installed. - - TODO remove this property together with the legacy resolver, since the new - resolver only returns requirements that need to be installed. 
- """ - return [ - install_req - for install_req in self.all_requirements - if not install_req.constraint and not install_req.satisfied_by - ] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py deleted file mode 100644 index 9dc68410337dcf4619ef66a49d87cea8233bc057..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py +++ /dev/null @@ -1,152 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -""" -Parser for the environment markers micro-language defined in PEP 508. -""" - -# Note: In PEP 345, the micro-language was Python compatible, so the ast -# module could be used to parse it. However, PEP 508 introduced operators such -# as ~= and === which aren't in Python, necessitating a different approach. - -import os -import re -import sys -import platform - -from .compat import string_types -from .util import in_venv, parse_marker -from .version import NormalizedVersion as NV - -__all__ = ['interpret'] - -_VERSION_PATTERN = re.compile(r'((\d+(\.\d+)*\w*)|\'(\d+(\.\d+)*\w*)\'|\"(\d+(\.\d+)*\w*)\")') - -def _is_literal(o): - if not isinstance(o, string_types) or not o: - return False - return o[0] in '\'"' - -def _get_versions(s): - result = [] - for m in _VERSION_PATTERN.finditer(s): - result.append(NV(m.groups()[0])) - return set(result) - -class Evaluator(object): - """ - This class is used to evaluate marker expessions. - """ - - operations = { - '==': lambda x, y: x == y, - '===': lambda x, y: x == y, - '~=': lambda x, y: x == y or x > y, - '!=': lambda x, y: x != y, - '<': lambda x, y: x < y, - '<=': lambda x, y: x == y or x < y, - '>': lambda x, y: x > y, - '>=': lambda x, y: x == y or x > y, - 'and': lambda x, y: x and y, - 'or': lambda x, y: x or y, - 'in': lambda x, y: x in y, - 'not in': lambda x, y: x not in y, - } - - def evaluate(self, expr, context): - """ - Evaluate a marker expression returned by the :func:`parse_requirement` - function in the specified context. 
- """ - if isinstance(expr, string_types): - if expr[0] in '\'"': - result = expr[1:-1] - else: - if expr not in context: - raise SyntaxError('unknown variable: %s' % expr) - result = context[expr] - else: - assert isinstance(expr, dict) - op = expr['op'] - if op not in self.operations: - raise NotImplementedError('op not implemented: %s' % op) - elhs = expr['lhs'] - erhs = expr['rhs'] - if _is_literal(expr['lhs']) and _is_literal(expr['rhs']): - raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs)) - - lhs = self.evaluate(elhs, context) - rhs = self.evaluate(erhs, context) - if ((elhs == 'python_version' or erhs == 'python_version') and - op in ('<', '<=', '>', '>=', '===', '==', '!=', '~=')): - lhs = NV(lhs) - rhs = NV(rhs) - elif elhs == 'python_version' and op in ('in', 'not in'): - lhs = NV(lhs) - rhs = _get_versions(rhs) - result = self.operations[op](lhs, rhs) - return result - -_DIGITS = re.compile(r'\d+\.\d+') - -def default_context(): - def format_full_version(info): - version = '%s.%s.%s' % (info.major, info.minor, info.micro) - kind = info.releaselevel - if kind != 'final': - version += kind[0] + str(info.serial) - return version - - if hasattr(sys, 'implementation'): - implementation_version = format_full_version(sys.implementation.version) - implementation_name = sys.implementation.name - else: - implementation_version = '0' - implementation_name = '' - - ppv = platform.python_version() - m = _DIGITS.match(ppv) - pv = m.group(0) - result = { - 'implementation_name': implementation_name, - 'implementation_version': implementation_version, - 'os_name': os.name, - 'platform_machine': platform.machine(), - 'platform_python_implementation': platform.python_implementation(), - 'platform_release': platform.release(), - 'platform_system': platform.system(), - 'platform_version': platform.version(), - 'platform_in_venv': str(in_venv()), - 'python_full_version': ppv, - 'python_version': pv, - 'sys_platform': sys.platform, - } - return result - -DEFAULT_CONTEXT = default_context() -del default_context - -evaluator = Evaluator() - -def interpret(marker, execution_context=None): - """ - Interpret a marker and return a result depending on environment. - - :param marker: The marker to interpret. - :type marker: str - :param execution_context: The context used for name lookup. 
- :type execution_context: mapping - """ - try: - expr, rest = parse_marker(marker) - except Exception as e: - raise SyntaxError('Unable to interpret marker syntax: %s: %s' % (marker, e)) - if rest and rest[0] != '#': - raise SyntaxError('unexpected trailing data in marker: %s: %s' % (marker, rest)) - context = dict(DEFAULT_CONTEXT) - if execution_context: - context.update(execution_context) - return evaluator.evaluate(expr, context) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/abc.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/abc.py deleted file mode 100644 index d39dc1adba0f00d2f7bdf6fa2cd1abcd82475e2e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/abc.py +++ /dev/null @@ -1,137 +0,0 @@ -import abc -from typing import BinaryIO, Iterable, Text - -from ._compat import runtime_checkable, Protocol - - -class ResourceReader(metaclass=abc.ABCMeta): - """Abstract base class for loaders to provide resource reading support.""" - - @abc.abstractmethod - def open_resource(self, resource: Text) -> BinaryIO: - """Return an opened, file-like object for binary reading. - - The 'resource' argument is expected to represent only a file name. - If the resource cannot be found, FileNotFoundError is raised. - """ - # This deliberately raises FileNotFoundError instead of - # NotImplementedError so that if this method is accidentally called, - # it'll still do the right thing. - raise FileNotFoundError - - @abc.abstractmethod - def resource_path(self, resource: Text) -> Text: - """Return the file system path to the specified resource. - - The 'resource' argument is expected to represent only a file name. - If the resource does not exist on the file system, raise - FileNotFoundError. - """ - # This deliberately raises FileNotFoundError instead of - # NotImplementedError so that if this method is accidentally called, - # it'll still do the right thing. - raise FileNotFoundError - - @abc.abstractmethod - def is_resource(self, path: Text) -> bool: - """Return True if the named 'path' is a resource. - - Files are resources, directories are not. - """ - raise FileNotFoundError - - @abc.abstractmethod - def contents(self) -> Iterable[str]: - """Return an iterable of entries in `package`.""" - raise FileNotFoundError - - -@runtime_checkable -class Traversable(Protocol): - """ - An object with a subset of pathlib.Path methods suitable for - traversing directories and opening files. 
- """ - - @abc.abstractmethod - def iterdir(self): - """ - Yield Traversable objects in self - """ - - def read_bytes(self): - """ - Read contents of self as bytes - """ - with self.open('rb') as strm: - return strm.read() - - def read_text(self, encoding=None): - """ - Read contents of self as text - """ - with self.open(encoding=encoding) as strm: - return strm.read() - - @abc.abstractmethod - def is_dir(self) -> bool: - """ - Return True if self is a directory - """ - - @abc.abstractmethod - def is_file(self) -> bool: - """ - Return True if self is a file - """ - - @abc.abstractmethod - def joinpath(self, child): - """ - Return Traversable child in self - """ - - def __truediv__(self, child): - """ - Return Traversable child in self - """ - return self.joinpath(child) - - @abc.abstractmethod - def open(self, mode='r', *args, **kwargs): - """ - mode may be 'r' or 'rb' to open as text or binary. Return a handle - suitable for reading (same as pathlib.Path.open). - - When opening as text, accepts encoding parameters such as those - accepted by io.TextIOWrapper. - """ - - @abc.abstractproperty - def name(self) -> str: - """ - The base name of this object without any parent references. - """ - - -class TraversableResources(ResourceReader): - """ - The required interface for providing traversable - resources. - """ - - @abc.abstractmethod - def files(self): - """Return a Traversable object for the loaded package.""" - - def open_resource(self, resource): - return self.files().joinpath(resource).open('rb') - - def resource_path(self, resource): - raise FileNotFoundError(resource) - - def is_resource(self, path): - return self.files().joinpath(path).is_file() - - def contents(self): - return (item.name for item in self.files().iterdir()) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/actions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/actions.py deleted file mode 100644 index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/actions.py +++ /dev/null @@ -1,207 +0,0 @@ -# actions.py - -from .exceptions import ParseException -from .util import col - - -class OnlyOnce: - """ - Wrapper for parse actions, to ensure they are only called once. - """ - - def __init__(self, method_call): - from .core import _trim_arity - - self.callable = _trim_arity(method_call) - self.called = False - - def __call__(self, s, l, t): - if not self.called: - results = self.callable(s, l, t) - self.called = True - return results - raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset") - - def reset(self): - """ - Allow the associated parse action to be called once more. - """ - - self.called = False - - -def match_only_at_col(n): - """ - Helper method for defining parse actions that require matching at - a specific column in the input text. - """ - - def verify_col(strg, locn, toks): - if col(locn, strg) != n: - raise ParseException(strg, locn, "matched token not at column {}".format(n)) - - return verify_col - - -def replace_with(repl_str): - """ - Helper method for common parse actions that simply return - a literal value. Especially useful when used with - :class:`transform_string` (). 
- - Example:: - - num = Word(nums).set_parse_action(lambda toks: int(toks[0])) - na = one_of("N/A NA").set_parse_action(replace_with(math.nan)) - term = na | num - - term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s, l, t: [repl_str] - - -def remove_quotes(s, l, t): - """ - Helper parse action for removing quotation marks from parsed - quoted strings. - - Example:: - - # by default, quotation marks are included in parsed results - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use remove_quotes to strip quotation marks from parsed results - quoted_string.set_parse_action(remove_quotes) - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - - -def with_attribute(*args, **attr_dict): - """ - Helper to create a validating parse action to be used with start - tags created with :class:`make_xml_tags` or - :class:`make_html_tags`. Use ``with_attribute`` to qualify - a starting tag with a required attribute value, to avoid false - matches on common tags such as ```` or ``
``. - - Call ``with_attribute`` with a series of attribute names and - values. Specify the list of filter attributes names and values as: - - - keyword arguments, as in ``(align="right")``, or - - as an explicit dict with ``**`` operator, when an attribute - name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))`` - - For attribute names with a namespace prefix, you must use the second - form. Attribute names are matched insensitive to upper/lower case. - - If just testing for ``class`` (with or without a namespace), use - :class:`with_class`. - - To verify that the attribute exists, but without specifying a value, - pass ``with_attribute.ANY_VALUE`` as the value. - - Example:: - - html = ''' -
- Some text -
1 4 0 1 0
-
1,3 2,3 1,1
-
this has no type
-
- - ''' - div,div_end = make_html_tags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().set_parse_action(with_attribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attr_dict.items() - attrs = [(k, v) for k, v in attrs] - - def pa(s, l, tokens): - for attrName, attrValue in attrs: - if attrName not in tokens: - raise ParseException(s, l, "no matching attribute " + attrName) - if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException( - s, - l, - "attribute {!r} has value {!r}, must be {!r}".format( - attrName, tokens[attrName], attrValue - ), - ) - - return pa - - -with_attribute.ANY_VALUE = object() - - -def with_class(classname, namespace=""): - """ - Simplified version of :class:`with_attribute` when - matching on a div class - made difficult because ``class`` is - a reserved word in Python. - - Example:: - - html = ''' -
- Some text -
1 4 0 1 0
-
1,3 2,3 1,1
-
this <div> has no class
-
- - ''' - div,div_end = make_html_tags("div") - div_grid = div().set_parse_action(with_class("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "{}:class".format(namespace) if namespace else "class" - return with_attribute(**{classattr: classname}) - - -# pre-PEP8 compatibility symbols -replaceWith = replace_with -removeQuotes = remove_quotes -withAttribute = with_attribute -withClass = with_class -matchOnlyAtCol = match_only_at_col diff --git a/spaces/Awesimo/jojogan/e4e/models/encoders/psp_encoders.py b/spaces/Awesimo/jojogan/e4e/models/encoders/psp_encoders.py deleted file mode 100644 index dc49acd11f062cbd29f839ee3c04bce7fa84f479..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/encoders/psp_encoders.py +++ /dev/null @@ -1,200 +0,0 @@ -from enum import Enum -import math -import numpy as np -import torch -from torch import nn -from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from e4e.models.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add -from e4e.models.stylegan2.model import EqualLinear - - -class ProgressiveStage(Enum): - WTraining = 0 - Delta1Training = 1 - Delta2Training = 2 - Delta3Training = 3 - Delta4Training = 4 - Delta5Training = 5 - Delta6Training = 6 - Delta7Training = 7 - Delta8Training = 8 - Delta9Training = 9 - Delta10Training = 10 - Delta11Training = 11 - Delta12Training = 12 - Delta13Training = 13 - Delta14Training = 14 - Delta15Training = 15 - Delta16Training = 16 - Delta17Training = 17 - Inference = 18 - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif 
i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - def forward(self, x): - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = _upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = _upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - return out - - -class Encoder4Editing(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(Encoder4Editing, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - log_size = int(math.log(opts.stylegan_size, 2)) - self.style_count = 2 * log_size - 2 - self.coarse_ind = 3 - self.middle_ind = 7 - - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - self.progressive_stage = ProgressiveStage.Inference - - def get_deltas_starting_dimensions(self): - ''' Get a list of the initial dimension of every delta from which it is applied ''' - return list(range(self.style_count)) # Each dimension has a delta applied to it - - def set_progressive_stage(self, new_stage: ProgressiveStage): - self.progressive_stage = new_stage - print('Changed progressive stage to: ', new_stage) - - def forward(self, x): - x = self.input_layer(x) - - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - # Infer main W and duplicate it - w0 = self.styles[0](c3) - w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2) - stage = self.progressive_stage.value - features = c3 - for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas - if i == self.coarse_ind: - p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features - features = p2 - elif i == self.middle_ind: - p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features - features = p1 - delta_i = self.styles[i](features) - w[:, i] += delta_i - return w diff --git a/spaces/BAAI/AltDiffusion/js/index.js b/spaces/BAAI/AltDiffusion/js/index.js deleted file mode 100644 index 
2afe2db8da0b7305eb88a46a31d1f309ee9d0793..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion/js/index.js +++ /dev/null @@ -1,186 +0,0 @@ -window.SD = (() => { - /* - * Painterro is made a field of the SD global object - * To provide convinience when using w() method in css_and_js.py - */ - class PainterroClass { - static isOpen = false; - static async init ({ x, toId }) { - console.log(x) - - const originalImage = x[2] === 'Mask' ? x[1]?.image : x[0]; - - if (window.Painterro === undefined) { - try { - await this.load(); - } catch (e) { - SDClass.error(e); - - return this.fallback(originalImage); - } - } - - if (this.isOpen) { - return this.fallback(originalImage); - } - this.isOpen = true; - - let resolveResult; - const paintClient = Painterro({ - hiddenTools: ['arrow'], - onHide: () => { - resolveResult?.(null); - }, - saveHandler: (image, done) => { - const data = image.asDataURL(); - - // ensures stable performance even - // when the editor is in interactive mode - SD.clearImageInput(SD.el.get(`#${toId}`)); - - resolveResult(data); - - done(true); - paintClient.hide(); - }, - }); - - const result = await new Promise((resolve) => { - resolveResult = resolve; - paintClient.show(originalImage); - }); - this.isOpen = false; - - return result ? this.success(result) : this.fallback(originalImage); - } - static success (result) { return [result, { image: result, mask: result }] }; - static fallback (image) { return [image, { image: image, mask: image }] }; - static load () { - return new Promise((resolve, reject) => { - const scriptId = '__painterro-script'; - if (document.getElementById(scriptId)) { - reject(new Error('Tried to load painterro script, but script tag already exists.')); - return; - } - - const styleId = '__painterro-css-override'; - if (!document.getElementById(styleId)) { - /* Ensure Painterro window is always on top */ - const style = document.createElement('style'); - style.id = styleId; - style.setAttribute('type', 'text/css'); - style.appendChild(document.createTextNode(` - .ptro-holder-wrapper { - z-index: 100; - } - `)); - document.head.appendChild(style); - } - - const script = document.createElement('script'); - script.id = scriptId; - script.src = 'https://unpkg.com/painterro@1.2.78/build/painterro.min.js'; - script.onload = () => resolve(true); - script.onerror = (e) => { - // remove self on error to enable reattempting load - document.head.removeChild(script); - reject(e); - }; - document.head.appendChild(script); - }); - } - } - - /* - * Turns out caching elements doesn't actually work in gradio - * As elements in tabs might get recreated - */ - class ElementCache { - #el; - constructor () { - this.root = document.querySelector('gradio-app').shadowRoot; - } - get (selector) { - return this.root.querySelector(selector); - } - } - - /* - * The main helper class to incapsulate functions - * that change gradio ui functionality - */ - class SDClass { - el = new ElementCache(); - Painterro = PainterroClass; - moveImageFromGallery ({ x, fromId, toId }) { - x = x[0]; - if (!Array.isArray(x) || x.length === 0) return; - - this.clearImageInput(this.el.get(`#${toId}`)); - - const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`)); - - return [x[i].replace('data:;','data:image/png;')]; - } - async copyImageFromGalleryToClipboard ({ x, fromId }) { - x = x[0]; - if (!Array.isArray(x) || x.length === 0) return; - - const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`)); - - const data = x[i]; - const blob = await (await 
fetch(data.replace('data:;','data:image/png;'))).blob(); - const item = new ClipboardItem({'image/png': blob}); - - await this.copyToClipboard([item]); - } - clickFirstVisibleButton({ rowId }) { - const generateButtons = this.el.get(`#${rowId}`).querySelectorAll('.gr-button-primary'); - - if (!generateButtons) return; - - for (let i = 0, arr = [...generateButtons]; i < arr.length; i++) { - const cs = window.getComputedStyle(arr[i]); - - if (cs.display !== 'none' && cs.visibility !== 'hidden') { - console.log(arr[i]); - - arr[i].click(); - break; - } - } - } - async gradioInputToClipboard ({ x }) { return this.copyToClipboard(x[0]); } - async copyToClipboard (value) { - if (!value || typeof value === 'boolean') return; - try { - if (Array.isArray(value) && - value.length && - value[0] instanceof ClipboardItem) { - await navigator.clipboard.write(value); - } else { - await navigator.clipboard.writeText(value); - } - } catch (e) { - SDClass.error(e); - } - } - static error (e) { - console.error(e); - if (typeof e === 'string') { - alert(e); - } else if(typeof e === 'object' && Object.hasOwn(e, 'message')) { - alert(e.message); - } - } - clearImageInput (imageEditor) { - imageEditor?.querySelector('.modify-upload button:last-child')?.click(); - } - #getGallerySelectedIndex (gallery) { - const selected = gallery.querySelector(`.\\!ring-2`); - return selected ? [...selected.parentNode.children].indexOf(selected) : 0; - } - } - - return new SDClass(); -})(); diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Y Jugar Entre Nosotros En El PC.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Y Jugar Entre Nosotros En El PC.md deleted file mode 100644 index 76e3fec427295bced637664c0d537ab8cc335e1e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Y Jugar Entre Nosotros En El PC.md +++ /dev/null @@ -1,101 +0,0 @@ -
            -
            How to Download Brawl Stars for iPhone
            -
            If you are looking for a fast-paced, action-packed, and fun-filled game to play on your iPhone, you should definitely check out Brawl Stars. Brawl Stars is a multiplayer online battle arena (MOBA) game developed by Supercell, the makers of Clash of Clans and Clash Royale. In this game, you can choose from dozens of unique characters called Brawlers, each with their own abilities, weapons, and personalities. You can team up with your friends or play solo in various game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, and more. You can also unlock new skins, gadgets, star powers, and pins to customize your Brawlers and show off your style.
            -
            how to download and play Among Us on PC
            -
            Download File https://bltlly.com/2v6ILh
            
            -
            Brawl Stars is one of the most popular mobile games right now, with more than 100 million downloads on Google Play alone. But what if you want to play it on your iPhone? Don't worry, we've got you covered. In this article, we will show you how to download Brawl Stars for iPhone in just a few simple steps. We will also give you some tips and tricks for playing Brawl Stars on iPhone and answer some frequently asked questions about the game. So without further ado, let's get started!
            -
            What is Brawl Stars?
            -
            Brawl Stars is a 3v3 MOBA game that combines elements of shooting, fighting, strategy, and teamwork. The game features several modes, each with its own objectives and required skills. For example, in Gem Grab mode you have to collect and hold 10 gems to win; in Showdown mode you have to survive as long as possible in a battle royale; in Brawl Ball mode you have to score two goals before the other team; and so on.
            -
            Brawl Stars is easy to learn but hard to master. You have to use your skills, strategy, and teamwork to win matches and climb the ranks. You can also join or create a Club to chat with other players, share tips, and play together. Brawl Stars is constantly updated with new content, such as new Brawlers, skins, maps, events, and features. You can also take part in special challenges and tournaments to earn rewards and fame.
            -
            Why play Brawl Stars on iPhone?
            -
            Brawl Stars is a game designed for mobile devices, and playing it on iPhone has many advantages. Here are some of the reasons why you should play Brawl Stars on iPhone:
            -
            How to get Brawl Stars on iPhone?
            -
            Now that you know why you should play Brawl Stars on iPhone, let's see how to get it on your device. The process is simple and straightforward, and it only takes a few minutes. These are the steps to follow:
            -
            Step 1: Open the App Store app
            -
            The first thing you need to do is open the App Store app on your iPhone. You can find it on your home screen or in your App Library. The App Store app has a blue icon with a white letter A inside.
            -
            App Store icon
            -
            Step 2: Search for Brawl Stars
            -
            Once you open the App Store app, you need to search for Brawl Stars in the Search tab. You can find the Search tab in the bottom right corner of the screen. It has a magnifying glass icon.
            -
            Search tab icon
            -
            Tap on the Search tab and type "Brawl Stars" into the search bar. You will see a list of results that match your query. Look for the one that says "Brawl Stars" by Supercell and has a red icon with three stars inside.
            -
            Brawl Stars result
            -
            Step 3: Tap Get or the price
            -
            When you find Brawl Stars in the results, tap on it to open its page in the App Store. You will see information about the game, such as its description, screenshots, ratings, reviews, and more.
            -
            To download Brawl Stars, you need to tap the Get button, or the price if it is not free in your region. The Get button or the price is in the top right corner of the screen, next to the game's icon and name.
            -
            Get button
            -
            Step 4: Confirm the download
            -
            Confirm download
            -
            Enter your password or use your fingerprint or face to confirm the download. You will see a confirmation message that says "Downloading..." or "Purchased".
            -
            Step 5: Wait for the download to finish
            -
            Now you just have to wait for the download to finish. You can check the progress by looking at the circle around the game's icon, which fills up as the download advances. You can also see the download status in the Updates tab of the App Store app.
            -
            Download progress
            -
            Brawl Stars is about 300 MB in size, so it may take a few minutes to download depending on your internet speed and connection. Make sure you have enough storage space on your iPhone and a stable Wi-Fi or mobile data connection.
            -
            Step 6: Open Brawl Stars and enjoy
            -
            Congratulations, you have successfully downloaded Brawl Stars for iPhone! Now you can open the game and start playing. You can find Brawl Stars on your home screen or in your App Library. The game's icon is red with three stars inside.
            -
            Brawl Stars icon
            -
            Brawl Stars main menu
            -
            Now you are ready to brawl! Have fun and enjoy Brawl Stars on your iPhone!
            -
            Tips and tricks for playing Brawl Stars on iPhone
            -
            Brawl Stars takes skill, strategy, and teamwork to win. Here are some tips and tricks that can help you improve your gaming experience and become a better Brawler:
            
            -
            Frequently asked questions about Brawl Stars on iPhone
            -
            Here are some of the most common questions and answers about Brawl Stars on iPhone:
            -
            How do I update Brawl Stars on iPhone?
            -
            To update Brawl Stars on iPhone, open the App Store app and go to the Updates tab. You will see a list of apps with updates available. Find Brawl Stars and tap the Update button next to it. You can also enable automatic updates in the App Store app settings.
            -
            How do I restore my purchases in Brawl Stars on iPhone?
            -
            If you have bought gems or other items in Brawl Stars with real money and lost them because of a device change or a game issue, you can restore your purchases by following these steps:
            -
              1. Open Brawl Stars and go to the settings icon in the top right corner of the screen.
              2. Tap on Help and Support.
              3. Tap on Contact Us.
              4. Write a message explaining your situation and provide your player tag, receipt number, purchase date, and purchase amount.
              5. Send the message and wait for a reply from the support team.
            -
            How do I contact the Brawl Stars support team on iPhone?
            -
            If you have any problems, questions, or feedback about Brawl Stars on iPhone, you can contact the support team by following these steps:
            -
              1. Open Brawl Stars and go to the settings icon in the top right corner of the screen.
              2. Tap on Help and Support.
              3. Tap on Contact Us.
              4. Send the message and wait for a reply from the support team.
            -
            How can I play with my friends in Brawl Stars on iPhone?
            -
            If you want to play with your friends in Brawl Stars on iPhone, you have two options:
            -
            How do I redeem codes in Brawl Stars on iPhone?
            -
              1. Open Brawl Stars and go to the shop icon in the top left corner of the main menu.
              2. Scroll down to the bottom of the shop screen and look for a button that says "Redeem code". Tap on it to open a pop-up window.
              3. Enter the code you have received in the text box and tap the confirm button.
              4. You will see a message that says "Code redeemed" along with the rewards you have received. Tap the claim button to collect your rewards.
            -
            Note that codes are case-sensitive and have an expiration date. You can only use one code per account. If you enter an invalid or expired code, you will see an error message that says "Invalid code" or "Expired code".
            -
            Conclusion
            -
            Brawl Stars is a fun and exciting game that you can play on your iPhone. You can download it from the App Store in a few simple steps and enjoy its features, modes, characters, and gameplay. You can also improve your skills, join a Club, take part in events, and redeem codes to get more rewards and fun. Brawl Stars is constantly updated with new content and improvements, so you will never get bored of it.
            -
            What are you waiting for? Download Brawl Stars for iPhone today and join the millions of players who are brawling their way to glory!
            -
            

            
-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__main__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__main__.py deleted file mode 100644 index 0c01d5b08b6b44379b931d54d7fcf5221fdc9fde..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__main__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .distro import main - -if __name__ == "__main__": - main() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py deleted file mode 100644 index bd00866b8b95a98edc8956608e895a6329a944a0..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py +++ /dev/null @@ -1,83 +0,0 @@ -""" - pygments.formatters.pangomarkup - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for Pango markup output. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter - - -__all__ = ['PangoMarkupFormatter'] - - -_escape_table = { - ord('&'): '&', - ord('<'): '<', -} - - -def escape_special_chars(text, table=_escape_table): - """Escape & and < for Pango Markup.""" - return text.translate(table) - - -class PangoMarkupFormatter(Formatter): - """ - Format tokens as Pango Markup code. It can then be rendered to an SVG. - - .. versionadded:: 2.9 - """ - - name = 'Pango Markup' - aliases = ['pango', 'pangomarkup'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - - self.styles = {} - - for token, style in self.style: - start = '' - end = '' - if style['color']: - start += '' % style['color'] - end = '' + end - if style['bold']: - start += '' - end = '' + end - if style['italic']: - start += '' - end = '' + end - if style['underline']: - start += '' - end = '' + end - self.styles[token] = (start, end) - - def format_unencoded(self, tokensource, outfile): - lastval = '' - lasttype = None - - outfile.write('') - - for ttype, value in tokensource: - while ttype not in self.styles: - ttype = ttype.parent - if ttype == lasttype: - lastval += escape_special_chars(value) - else: - if lastval: - stylebegin, styleend = self.styles[lasttype] - outfile.write(stylebegin + lastval + styleend) - lastval = escape_special_chars(value) - lasttype = ttype - - if lastval: - stylebegin, styleend = self.styles[lasttype] - outfile.write(stylebegin + lastval + styleend) - - outfile.write('') diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/style.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/style.py deleted file mode 100644 index 313c889496d90cef94d5537c122e5c5e898e3bb4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/style.py +++ /dev/null @@ -1,796 +0,0 @@ -import sys -from functools import lru_cache -from marshal import dumps, loads -from random import randint -from typing import Any, Dict, Iterable, List, Optional, Type, Union, cast - -from . 
import errors -from .color import Color, ColorParseError, ColorSystem, blend_rgb -from .repr import Result, rich_repr -from .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme - -# Style instances and style definitions are often interchangeable -StyleType = Union[str, "Style"] - - -class _Bit: - """A descriptor to get/set a style attribute bit.""" - - __slots__ = ["bit"] - - def __init__(self, bit_no: int) -> None: - self.bit = 1 << bit_no - - def __get__(self, obj: "Style", objtype: Type["Style"]) -> Optional[bool]: - if obj._set_attributes & self.bit: - return obj._attributes & self.bit != 0 - return None - - -@rich_repr -class Style: - """A terminal style. - - A terminal style consists of a color (`color`), a background color (`bgcolor`), and a number of attributes, such - as bold, italic etc. The attributes have 3 states: they can either be on - (``True``), off (``False``), or not set (``None``). - - Args: - color (Union[Color, str], optional): Color of terminal text. Defaults to None. - bgcolor (Union[Color, str], optional): Color of terminal background. Defaults to None. - bold (bool, optional): Enable bold text. Defaults to None. - dim (bool, optional): Enable dim text. Defaults to None. - italic (bool, optional): Enable italic text. Defaults to None. - underline (bool, optional): Enable underlined text. Defaults to None. - blink (bool, optional): Enabled blinking text. Defaults to None. - blink2 (bool, optional): Enable fast blinking text. Defaults to None. - reverse (bool, optional): Enabled reverse text. Defaults to None. - conceal (bool, optional): Enable concealed text. Defaults to None. - strike (bool, optional): Enable strikethrough text. Defaults to None. - underline2 (bool, optional): Enable doubly underlined text. Defaults to None. - frame (bool, optional): Enable framed text. Defaults to None. - encircle (bool, optional): Enable encircled text. Defaults to None. - overline (bool, optional): Enable overlined text. Defaults to None. - link (str, link): Link URL. Defaults to None. 
- - """ - - _color: Optional[Color] - _bgcolor: Optional[Color] - _attributes: int - _set_attributes: int - _hash: Optional[int] - _null: bool - _meta: Optional[bytes] - - __slots__ = [ - "_color", - "_bgcolor", - "_attributes", - "_set_attributes", - "_link", - "_link_id", - "_ansi", - "_style_definition", - "_hash", - "_null", - "_meta", - ] - - # maps bits on to SGR parameter - _style_map = { - 0: "1", - 1: "2", - 2: "3", - 3: "4", - 4: "5", - 5: "6", - 6: "7", - 7: "8", - 8: "9", - 9: "21", - 10: "51", - 11: "52", - 12: "53", - } - - STYLE_ATTRIBUTES = { - "dim": "dim", - "d": "dim", - "bold": "bold", - "b": "bold", - "italic": "italic", - "i": "italic", - "underline": "underline", - "u": "underline", - "blink": "blink", - "blink2": "blink2", - "reverse": "reverse", - "r": "reverse", - "conceal": "conceal", - "c": "conceal", - "strike": "strike", - "s": "strike", - "underline2": "underline2", - "uu": "underline2", - "frame": "frame", - "encircle": "encircle", - "overline": "overline", - "o": "overline", - } - - def __init__( - self, - *, - color: Optional[Union[Color, str]] = None, - bgcolor: Optional[Union[Color, str]] = None, - bold: Optional[bool] = None, - dim: Optional[bool] = None, - italic: Optional[bool] = None, - underline: Optional[bool] = None, - blink: Optional[bool] = None, - blink2: Optional[bool] = None, - reverse: Optional[bool] = None, - conceal: Optional[bool] = None, - strike: Optional[bool] = None, - underline2: Optional[bool] = None, - frame: Optional[bool] = None, - encircle: Optional[bool] = None, - overline: Optional[bool] = None, - link: Optional[str] = None, - meta: Optional[Dict[str, Any]] = None, - ): - self._ansi: Optional[str] = None - self._style_definition: Optional[str] = None - - def _make_color(color: Union[Color, str]) -> Color: - return color if isinstance(color, Color) else Color.parse(color) - - self._color = None if color is None else _make_color(color) - self._bgcolor = None if bgcolor is None else _make_color(bgcolor) - self._set_attributes = sum( - ( - bold is not None, - dim is not None and 2, - italic is not None and 4, - underline is not None and 8, - blink is not None and 16, - blink2 is not None and 32, - reverse is not None and 64, - conceal is not None and 128, - strike is not None and 256, - underline2 is not None and 512, - frame is not None and 1024, - encircle is not None and 2048, - overline is not None and 4096, - ) - ) - self._attributes = ( - sum( - ( - bold and 1 or 0, - dim and 2 or 0, - italic and 4 or 0, - underline and 8 or 0, - blink and 16 or 0, - blink2 and 32 or 0, - reverse and 64 or 0, - conceal and 128 or 0, - strike and 256 or 0, - underline2 and 512 or 0, - frame and 1024 or 0, - encircle and 2048 or 0, - overline and 4096 or 0, - ) - ) - if self._set_attributes - else 0 - ) - - self._link = link - self._meta = None if meta is None else dumps(meta) - self._link_id = ( - f"{randint(0, 999999)}{hash(self._meta)}" if (link or meta) else "" - ) - self._hash: Optional[int] = None - self._null = not (self._set_attributes or color or bgcolor or link or meta) - - @classmethod - def null(cls) -> "Style": - """Create an 'null' style, equivalent to Style(), but more performant.""" - return NULL_STYLE - - @classmethod - def from_color( - cls, color: Optional[Color] = None, bgcolor: Optional[Color] = None - ) -> "Style": - """Create a new style with colors and no attributes. - - Returns: - color (Optional[Color]): A (foreground) color, or None for no color. Defaults to None. 
- bgcolor (Optional[Color]): A (background) color, or None for no color. Defaults to None. - """ - style: Style = cls.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = color - style._bgcolor = bgcolor - style._set_attributes = 0 - style._attributes = 0 - style._link = None - style._link_id = "" - style._meta = None - style._null = not (color or bgcolor) - style._hash = None - return style - - @classmethod - def from_meta(cls, meta: Optional[Dict[str, Any]]) -> "Style": - """Create a new style with meta data. - - Returns: - meta (Optional[Dict[str, Any]]): A dictionary of meta data. Defaults to None. - """ - style: Style = cls.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = None - style._bgcolor = None - style._set_attributes = 0 - style._attributes = 0 - style._link = None - style._meta = dumps(meta) - style._link_id = f"{randint(0, 999999)}{hash(style._meta)}" - style._hash = None - style._null = not (meta) - return style - - @classmethod - def on(cls, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> "Style": - """Create a blank style with meta information. - - Example: - style = Style.on(click=self.on_click) - - Args: - meta (Optional[Dict[str, Any]], optional): An optional dict of meta information. - **handlers (Any): Keyword arguments are translated in to handlers. - - Returns: - Style: A Style with meta information attached. - """ - meta = {} if meta is None else meta - meta.update({f"@{key}": value for key, value in handlers.items()}) - return cls.from_meta(meta) - - bold = _Bit(0) - dim = _Bit(1) - italic = _Bit(2) - underline = _Bit(3) - blink = _Bit(4) - blink2 = _Bit(5) - reverse = _Bit(6) - conceal = _Bit(7) - strike = _Bit(8) - underline2 = _Bit(9) - frame = _Bit(10) - encircle = _Bit(11) - overline = _Bit(12) - - @property - def link_id(self) -> str: - """Get a link id, used in ansi code for links.""" - return self._link_id - - def __str__(self) -> str: - """Re-generate style definition from attributes.""" - if self._style_definition is None: - attributes: List[str] = [] - append = attributes.append - bits = self._set_attributes - if bits & 0b0000000001111: - if bits & 1: - append("bold" if self.bold else "not bold") - if bits & (1 << 1): - append("dim" if self.dim else "not dim") - if bits & (1 << 2): - append("italic" if self.italic else "not italic") - if bits & (1 << 3): - append("underline" if self.underline else "not underline") - if bits & 0b0000111110000: - if bits & (1 << 4): - append("blink" if self.blink else "not blink") - if bits & (1 << 5): - append("blink2" if self.blink2 else "not blink2") - if bits & (1 << 6): - append("reverse" if self.reverse else "not reverse") - if bits & (1 << 7): - append("conceal" if self.conceal else "not conceal") - if bits & (1 << 8): - append("strike" if self.strike else "not strike") - if bits & 0b1111000000000: - if bits & (1 << 9): - append("underline2" if self.underline2 else "not underline2") - if bits & (1 << 10): - append("frame" if self.frame else "not frame") - if bits & (1 << 11): - append("encircle" if self.encircle else "not encircle") - if bits & (1 << 12): - append("overline" if self.overline else "not overline") - if self._color is not None: - append(self._color.name) - if self._bgcolor is not None: - append("on") - append(self._bgcolor.name) - if self._link: - append("link") - append(self._link) - self._style_definition = " ".join(attributes) or "none" - return self._style_definition - - def __bool__(self) -> bool: - """A Style is 
false if it has no attributes, colors, or links.""" - return not self._null - - def _make_ansi_codes(self, color_system: ColorSystem) -> str: - """Generate ANSI codes for this style. - - Args: - color_system (ColorSystem): Color system. - - Returns: - str: String containing codes. - """ - - if self._ansi is None: - sgr: List[str] = [] - append = sgr.append - _style_map = self._style_map - attributes = self._attributes & self._set_attributes - if attributes: - if attributes & 1: - append(_style_map[0]) - if attributes & 2: - append(_style_map[1]) - if attributes & 4: - append(_style_map[2]) - if attributes & 8: - append(_style_map[3]) - if attributes & 0b0000111110000: - for bit in range(4, 9): - if attributes & (1 << bit): - append(_style_map[bit]) - if attributes & 0b1111000000000: - for bit in range(9, 13): - if attributes & (1 << bit): - append(_style_map[bit]) - if self._color is not None: - sgr.extend(self._color.downgrade(color_system).get_ansi_codes()) - if self._bgcolor is not None: - sgr.extend( - self._bgcolor.downgrade(color_system).get_ansi_codes( - foreground=False - ) - ) - self._ansi = ";".join(sgr) - return self._ansi - - @classmethod - @lru_cache(maxsize=1024) - def normalize(cls, style: str) -> str: - """Normalize a style definition so that styles with the same effect have the same string - representation. - - Args: - style (str): A style definition. - - Returns: - str: Normal form of style definition. - """ - try: - return str(cls.parse(style)) - except errors.StyleSyntaxError: - return style.strip().lower() - - @classmethod - def pick_first(cls, *values: Optional[StyleType]) -> StyleType: - """Pick first non-None style.""" - for value in values: - if value is not None: - return value - raise ValueError("expected at least one non-None style") - - def __rich_repr__(self) -> Result: - yield "color", self.color, None - yield "bgcolor", self.bgcolor, None - yield "bold", self.bold, None, - yield "dim", self.dim, None, - yield "italic", self.italic, None - yield "underline", self.underline, None, - yield "blink", self.blink, None - yield "blink2", self.blink2, None - yield "reverse", self.reverse, None - yield "conceal", self.conceal, None - yield "strike", self.strike, None - yield "underline2", self.underline2, None - yield "frame", self.frame, None - yield "encircle", self.encircle, None - yield "link", self.link, None - if self._meta: - yield "meta", self.meta - - def __eq__(self, other: Any) -> bool: - if not isinstance(other, Style): - return NotImplemented - return self.__hash__() == other.__hash__() - - def __ne__(self, other: Any) -> bool: - if not isinstance(other, Style): - return NotImplemented - return self.__hash__() != other.__hash__() - - def __hash__(self) -> int: - if self._hash is not None: - return self._hash - self._hash = hash( - ( - self._color, - self._bgcolor, - self._attributes, - self._set_attributes, - self._link, - self._meta, - ) - ) - return self._hash - - @property - def color(self) -> Optional[Color]: - """The foreground color or None if it is not set.""" - return self._color - - @property - def bgcolor(self) -> Optional[Color]: - """The background color or None if it is not set.""" - return self._bgcolor - - @property - def link(self) -> Optional[str]: - """Link text, if set.""" - return self._link - - @property - def transparent_background(self) -> bool: - """Check if the style specified a transparent background.""" - return self.bgcolor is None or self.bgcolor.is_default - - @property - def background_style(self) -> "Style": - """A Style 
with background only.""" - return Style(bgcolor=self.bgcolor) - - @property - def meta(self) -> Dict[str, Any]: - """Get meta information (can not be changed after construction).""" - return {} if self._meta is None else cast(Dict[str, Any], loads(self._meta)) - - @property - def without_color(self) -> "Style": - """Get a copy of the style with color removed.""" - if self._null: - return NULL_STYLE - style: Style = self.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = None - style._bgcolor = None - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = self._link - style._link_id = f"{randint(0, 999999)}" if self._link else "" - style._null = False - style._meta = None - style._hash = None - return style - - @classmethod - @lru_cache(maxsize=4096) - def parse(cls, style_definition: str) -> "Style": - """Parse a style definition. - - Args: - style_definition (str): A string containing a style. - - Raises: - errors.StyleSyntaxError: If the style definition syntax is invalid. - - Returns: - `Style`: A Style instance. - """ - if style_definition.strip() == "none" or not style_definition: - return cls.null() - - STYLE_ATTRIBUTES = cls.STYLE_ATTRIBUTES - color: Optional[str] = None - bgcolor: Optional[str] = None - attributes: Dict[str, Optional[Any]] = {} - link: Optional[str] = None - - words = iter(style_definition.split()) - for original_word in words: - word = original_word.lower() - if word == "on": - word = next(words, "") - if not word: - raise errors.StyleSyntaxError("color expected after 'on'") - try: - Color.parse(word) is None - except ColorParseError as error: - raise errors.StyleSyntaxError( - f"unable to parse {word!r} as background color; {error}" - ) from None - bgcolor = word - - elif word == "not": - word = next(words, "") - attribute = STYLE_ATTRIBUTES.get(word) - if attribute is None: - raise errors.StyleSyntaxError( - f"expected style attribute after 'not', found {word!r}" - ) - attributes[attribute] = False - - elif word == "link": - word = next(words, "") - if not word: - raise errors.StyleSyntaxError("URL expected after 'link'") - link = word - - elif word in STYLE_ATTRIBUTES: - attributes[STYLE_ATTRIBUTES[word]] = True - - else: - try: - Color.parse(word) - except ColorParseError as error: - raise errors.StyleSyntaxError( - f"unable to parse {word!r} as color; {error}" - ) from None - color = word - style = Style(color=color, bgcolor=bgcolor, link=link, **attributes) - return style - - @lru_cache(maxsize=1024) - def get_html_style(self, theme: Optional[TerminalTheme] = None) -> str: - """Get a CSS style rule.""" - theme = theme or DEFAULT_TERMINAL_THEME - css: List[str] = [] - append = css.append - - color = self.color - bgcolor = self.bgcolor - if self.reverse: - color, bgcolor = bgcolor, color - if self.dim: - foreground_color = ( - theme.foreground_color if color is None else color.get_truecolor(theme) - ) - color = Color.from_triplet( - blend_rgb(foreground_color, theme.background_color, 0.5) - ) - if color is not None: - theme_color = color.get_truecolor(theme) - append(f"color: {theme_color.hex}") - append(f"text-decoration-color: {theme_color.hex}") - if bgcolor is not None: - theme_color = bgcolor.get_truecolor(theme, foreground=False) - append(f"background-color: {theme_color.hex}") - if self.bold: - append("font-weight: bold") - if self.italic: - append("font-style: italic") - if self.underline: - append("text-decoration: underline") - if self.strike: - append("text-decoration: 
line-through") - if self.overline: - append("text-decoration: overline") - return "; ".join(css) - - @classmethod - def combine(cls, styles: Iterable["Style"]) -> "Style": - """Combine styles and get result. - - Args: - styles (Iterable[Style]): Styles to combine. - - Returns: - Style: A new style instance. - """ - iter_styles = iter(styles) - return sum(iter_styles, next(iter_styles)) - - @classmethod - def chain(cls, *styles: "Style") -> "Style": - """Combine styles from positional argument in to a single style. - - Args: - *styles (Iterable[Style]): Styles to combine. - - Returns: - Style: A new style instance. - """ - iter_styles = iter(styles) - return sum(iter_styles, next(iter_styles)) - - def copy(self) -> "Style": - """Get a copy of this style. - - Returns: - Style: A new Style instance with identical attributes. - """ - if self._null: - return NULL_STYLE - style: Style = self.__new__(Style) - style._ansi = self._ansi - style._style_definition = self._style_definition - style._color = self._color - style._bgcolor = self._bgcolor - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = self._link - style._link_id = f"{randint(0, 999999)}" if self._link else "" - style._hash = self._hash - style._null = False - style._meta = self._meta - return style - - @lru_cache(maxsize=128) - def clear_meta_and_links(self) -> "Style": - """Get a copy of this style with link and meta information removed. - - Returns: - Style: New style object. - """ - if self._null: - return NULL_STYLE - style: Style = self.__new__(Style) - style._ansi = self._ansi - style._style_definition = self._style_definition - style._color = self._color - style._bgcolor = self._bgcolor - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = None - style._link_id = "" - style._hash = self._hash - style._null = False - style._meta = None - return style - - def update_link(self, link: Optional[str] = None) -> "Style": - """Get a copy with a different value for link. - - Args: - link (str, optional): New value for link. Defaults to None. - - Returns: - Style: A new Style instance. - """ - style: Style = self.__new__(Style) - style._ansi = self._ansi - style._style_definition = self._style_definition - style._color = self._color - style._bgcolor = self._bgcolor - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = link - style._link_id = f"{randint(0, 999999)}" if link else "" - style._hash = None - style._null = False - style._meta = self._meta - return style - - def render( - self, - text: str = "", - *, - color_system: Optional[ColorSystem] = ColorSystem.TRUECOLOR, - legacy_windows: bool = False, - ) -> str: - """Render the ANSI codes for the style. - - Args: - text (str, optional): A string to style. Defaults to "". - color_system (Optional[ColorSystem], optional): Color system to render to. Defaults to ColorSystem.TRUECOLOR. - - Returns: - str: A string containing ANSI style codes. - """ - if not text or color_system is None: - return text - attrs = self._ansi or self._make_ansi_codes(color_system) - rendered = f"\x1b[{attrs}m{text}\x1b[0m" if attrs else text - if self._link and not legacy_windows: - rendered = ( - f"\x1b]8;id={self._link_id};{self._link}\x1b\\{rendered}\x1b]8;;\x1b\\" - ) - return rendered - - def test(self, text: Optional[str] = None) -> None: - """Write text with style directly to terminal. - - This method is for testing purposes only. 
- - Args: - text (Optional[str], optional): Text to style or None for style name. - - """ - text = text or str(self) - sys.stdout.write(f"{self.render(text)}\n") - - @lru_cache(maxsize=1024) - def _add(self, style: Optional["Style"]) -> "Style": - if style is None or style._null: - return self - if self._null: - return style - new_style: Style = self.__new__(Style) - new_style._ansi = None - new_style._style_definition = None - new_style._color = style._color or self._color - new_style._bgcolor = style._bgcolor or self._bgcolor - new_style._attributes = (self._attributes & ~style._set_attributes) | ( - style._attributes & style._set_attributes - ) - new_style._set_attributes = self._set_attributes | style._set_attributes - new_style._link = style._link or self._link - new_style._link_id = style._link_id or self._link_id - new_style._null = style._null - if self._meta and style._meta: - new_style._meta = dumps({**self.meta, **style.meta}) - else: - new_style._meta = self._meta or style._meta - new_style._hash = None - return new_style - - def __add__(self, style: Optional["Style"]) -> "Style": - combined_style = self._add(style) - return combined_style.copy() if combined_style.link else combined_style - - -NULL_STYLE = Style() - - -class StyleStack: - """A stack of styles.""" - - __slots__ = ["_stack"] - - def __init__(self, default_style: "Style") -> None: - self._stack: List[Style] = [default_style] - - def __repr__(self) -> str: - return f"" - - @property - def current(self) -> Style: - """Get the Style at the top of the stack.""" - return self._stack[-1] - - def push(self, style: Style) -> None: - """Push a new style on to the stack. - - Args: - style (Style): New style to combine with current style. - """ - self._stack.append(self._stack[-1] + style) - - def pop(self) -> Style: - """Pop last style and discard. 
- - Returns: - Style: New current style (also available as stack.current) - """ - self._stack.pop() - return self._stack[-1] diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ML.py b/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ML.py deleted file mode 100644 index 0196886ed85201ec82142e45a7231de19e2f7afd..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ML.py +++ /dev/null @@ -1,59 +0,0 @@ -import xgboost as xgb -import pandas as pd -import pickle as pkl -import numpy as np -import os - -model = 'xgboost_ML_no_odds_71.4%' - -current_directory = os.path.dirname(os.path.abspath(__file__)) -parent_directory = os.path.dirname(current_directory) -data_directory = os.path.join(parent_directory, 'Data') -model_directory = os.path.join(parent_directory, 'Models') -pickle_directory = os.path.join(parent_directory, 'Pickles') - -file_path = os.path.join(model_directory, f'{model}.json') -xgb_ml = xgb.Booster() -xgb_ml.load_model(file_path) - -file_path = os.path.join(pickle_directory, 'test_games_ML_no_odds.pkl') -with open(file_path,'rb') as f: - test_games = pkl.load(f).tolist() - -file_path = os.path.join(data_directory, 'gbg_and_odds.csv') -gbg_and_odds = pd.read_csv(file_path) -test_data = gbg_and_odds.loc[gbg_and_odds['game_id'].isin(test_games)] -test_data_matrix = xgb.DMatrix(test_data.drop(columns=['game_id','Over','Home-Team-Win','Season','home_team','away_team','game_date','Key','Home Score','Away Score','Home Odds Close','Away Odds Close','Home Winnings','Away Winnings','Away Odds','Home Odds']).astype(float).values) - -predicted_probas = xgb_ml.predict(test_data_matrix) -predictions = np.argmax(predicted_probas, axis=1) -test_data['predicted_proba'] = [i[1] for i in predicted_probas] -test_data['prediction'] = (test_data['predicted_proba']>0.5).astype(int) -test_data['correct'] = test_data['Home-Team-Win']==test_data['prediction'] - -bets = test_data.loc[(test_data['predicted_proba']>0.6) | (test_data['predicted_proba']<0.4)] -bets['winnings'] = [h if p==1 else a for h,a,p in bets[['Home Winnings','Away Winnings','prediction']].values] - -import matplotlib.pyplot as plt -fig = plt.figure(facecolor='black') -ax = fig.add_subplot(1, 1, 1, facecolor='black') - -# Plot data with line color as RGB(0, 128, 0) -ax.plot(bets['winnings'].cumsum().values*100, linewidth=3, color=(0/255, 128/255, 0/255)) - -# Set title and labels -ax.set_title('MARCI 3.0 - MoneyLine w/ 60% Confidence Threshold', color='white') -ax.set_xlabel('Games Bet On', color='white') -ax.set_ylabel('Return (%)', color='white') - -# Change tick colors to white -ax.tick_params(axis='x', colors='white') -ax.tick_params(axis='y', colors='white') - -# Change axis edge colors -ax.spines['bottom'].set_color('white') -ax.spines['top'].set_color('white') -ax.spines['left'].set_color('white') -ax.spines['right'].set_color('white') - -plt.savefig(f'{model}_dark.png', facecolor='black') \ No newline at end of file diff --git a/spaces/BrianL/CoE197-Fil-DialectTranslator/app.py b/spaces/BrianL/CoE197-Fil-DialectTranslator/app.py deleted file mode 100644 index e5e8dbc5f7a38b444c21399167b84ed0d5a3253a..0000000000000000000000000000000000000000 --- a/spaces/BrianL/CoE197-Fil-DialectTranslator/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -from transformers import pipeline - - -def trnslt(TagalogText,Language): - txt_inp = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-tl-en") - if Language=="Cebuano": - ceb1 = 
gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-ceb") - out_ceb = gr.Series(txt_inp,ceb1) - return out_ceb(TagalogText) - elif Language=="Ilocano": - ilo1 = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-ilo") - out_ilo = gr.Series(txt_inp,ilo1) - return out_ilo(TagalogText) - elif Language=="Hiligaynon": - hil1 = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-hil") - out_hil = gr.Series(txt_inp,hil1) - return out_hil(TagalogText) - -iface = gr.Interface( - fn=trnslt, - inputs=[gr.inputs.Textbox(label="Input Tagalog Text"), - gr.inputs.Radio(["Cebuano","Ilocano","Hiligaynon"],label="Translate to",optional=False)], - outputs='text', - examples=[["Magandang Umaga","Cebuano"],["Magandang gabi","Ilocano"],["Masarap ang Adobo","Hiligaynon"], - ["Kumusta Ka Na","Cebuano"],["Bumibili si Juan ng manok","Ilocano"],["Magandang umaga","Hiligaynon"]], - live=True, - theme="dark-seafoam", - title="Basic Filipino Dialect Translator", - description=" This application uses Helsinki-NLP models to translate Tagalog texts to 3 other dialects of the Filipino language", - css=".footer{display:none !important}", -) - -iface.launch() - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/binary_search.h b/spaces/CVPR/LIVE/thrust/thrust/binary_search.h deleted file mode 100644 index 127be16aab996b03e7290bac5ae3d1d1fce27588..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/binary_search.h +++ /dev/null @@ -1,1902 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file binary_search.h - * \brief Search for values in sorted ranges. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup algorithms - */ - - -/*! \addtogroup searching - * \ingroup algorithms - * \{ - */ - - -/*! \addtogroup binary_search Binary Search - * \ingroup searching - * \{ - */ - - -////////////////////// -// Scalar Functions // -////////////////////// - - -/*! \p lower_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the first position where value could be - * inserted without violating the ordering. This version of - * \p lower_bound uses operator< for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), *j < value. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return The furthermost iterator \c i, such that *i < value. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. 
- * - * The following code snippet demonstrates how to use \p lower_bound - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin() - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1 - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 1 - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2 - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 8); // returns input.begin() + 4 - * thrust::lower_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -ForwardIterator lower_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable &value); - - -/*! \p lower_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the first position where value could be - * inserted without violating the ordering. This version of - * \p lower_bound uses operator< for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), *j < value. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return The furthermost iterator \c i, such that *i < value. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::lower_bound(input.begin(), input.end(), 0); // returns input.begin() - * thrust::lower_bound(input.begin(), input.end(), 1); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 2); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 3); // returns input.begin() + 2 - * thrust::lower_bound(input.begin(), input.end(), 8); // returns input.begin() + 4 - * thrust::lower_bound(input.begin(), input.end(), 9); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -ForwardIterator lower_bound(ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p lower_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the first position where value could be - * inserted without violating the ordering. 
This version of - * \p lower_bound uses function object \c comp for comparison - * and returns the furthermost iterator \c i in [first, last) - * such that, for every iterator \c j in [first, i), - * comp(*j, value) is \c true. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return The furthermost iterator \c i, such that comp(*i, value) is \c true. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::lower_bound(input.begin(), input.end(), 0, thrust::less()); // returns input.begin() - * thrust::lower_bound(input.begin(), input.end(), 1, thrust::less()); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 2, thrust::less()); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 3, thrust::less()); // returns input.begin() + 2 - * thrust::lower_bound(input.begin(), input.end(), 8, thrust::less()); // returns input.begin() + 4 - * thrust::lower_bound(input.begin(), input.end(), 9, thrust::less()); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -ForwardIterator lower_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const T &value, - StrictWeakOrdering comp); - - -/*! \p lower_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the first position where value could be - * inserted without violating the ordering. This version of - * \p lower_bound uses function object \c comp for comparison - * and returns the furthermost iterator \c i in [first, last) - * such that, for every iterator \c j in [first, i), - * comp(*j, value) is \c true. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return The furthermost iterator \c i, such that comp(*i, value) is \c true. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::lower_bound(input.begin(), input.end(), 0, thrust::less()); // returns input.begin() - * thrust::lower_bound(input.begin(), input.end(), 1, thrust::less()); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 2, thrust::less()); // returns input.begin() + 1 - * thrust::lower_bound(input.begin(), input.end(), 3, thrust::less()); // returns input.begin() + 2 - * thrust::lower_bound(input.begin(), input.end(), 8, thrust::less()); // returns input.begin() + 4 - * thrust::lower_bound(input.begin(), input.end(), 9, thrust::less()); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -ForwardIterator lower_bound(ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \p upper_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the last position where value could be - * inserted without violating the ordering. This version of - * \p upper_bound uses operator< for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), value < *j - * is \c false. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return The furthermost iterator \c i, such that value < *i is \c false. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for values in a ordered range using the \p thrust::device execution policy for parallelism: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin() + 1 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 2 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8); // returns input.end() - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -ForwardIterator upper_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable &value); - - -/*! \p upper_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). 
- * Specifically, it returns the last position where value could be - * inserted without violating the ordering. This version of - * \p upper_bound uses operator< for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), value < *j - * is \c false. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return The furthermost iterator \c i, such that value < *i is \c false. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::upper_bound(input.begin(), input.end(), 0); // returns input.begin() + 1 - * thrust::upper_bound(input.begin(), input.end(), 1); // returns input.begin() + 1 - * thrust::upper_bound(input.begin(), input.end(), 2); // returns input.begin() + 2 - * thrust::upper_bound(input.begin(), input.end(), 3); // returns input.begin() + 2 - * thrust::upper_bound(input.begin(), input.end(), 8); // returns input.end() - * thrust::upper_bound(input.begin(), input.end(), 9); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -ForwardIterator upper_bound(ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p upper_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the last position where value could be - * inserted without violating the ordering. This version of - * \p upper_bound uses function object \c comp for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), comp(value, *j) - * is \c false. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return The furthermost iterator \c i, such that comp(value, *i) is \c false. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0, thrust::less()); // returns input.begin() + 1 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1, thrust::less()); // returns input.begin() + 1 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2, thrust::less()); // returns input.begin() + 2 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3, thrust::less()); // returns input.begin() + 2 - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8, thrust::less()); // returns input.end() - * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9, thrust::less()); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -ForwardIterator upper_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const T &value, - StrictWeakOrdering comp); - -/*! \p upper_bound is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * Specifically, it returns the last position where value could be - * inserted without violating the ordering. This version of - * \p upper_bound uses function object \c comp for comparison and returns - * the furthermost iterator \c i in [first, last) such that, - * for every iterator \c j in [first, i), comp(value, *j) - * is \c false. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return The furthermost iterator \c i, such that comp(value, *i) is \c false. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::upper_bound(input.begin(), input.end(), 0, thrust::less()); // returns input.begin() + 1 - * thrust::upper_bound(input.begin(), input.end(), 1, thrust::less()); // returns input.begin() + 1 - * thrust::upper_bound(input.begin(), input.end(), 2, thrust::less()); // returns input.begin() + 2 - * thrust::upper_bound(input.begin(), input.end(), 3, thrust::less()); // returns input.begin() + 2 - * thrust::upper_bound(input.begin(), input.end(), 8, thrust::less()); // returns input.end() - * thrust::upper_bound(input.begin(), input.end(), 9, thrust::less()); // returns input.end() - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -ForwardIterator upper_bound(ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \p binary_search is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). 
- * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. Specifically, this version returns \c true if and only if - * there exists an iterator \c i in [first, last) such that - * *i < value and value < *i are both \c false. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return \c true if an equivalent element exists in [first, last), otherwise \c false. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::binary_search(thrust::device, input.begin(), input.end(), 0); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 1); // returns false - * thrust::binary_search(thrust::device, input.begin(), input.end(), 2); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 3); // returns false - * thrust::binary_search(thrust::device, input.begin(), input.end(), 8); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 9); // returns false - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -__host__ __device__ -bool binary_search(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p binary_search is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. Specifically, this version returns \c true if and only if - * there exists an iterator \c i in [first, last) such that - * *i < value and value < *i are both \c false. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return \c true if an equivalent element exists in [first, last), otherwise \c false. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for values in a ordered range. - * - * \code - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::binary_search(input.begin(), input.end(), 0); // returns true - * thrust::binary_search(input.begin(), input.end(), 1); // returns false - * thrust::binary_search(input.begin(), input.end(), 2); // returns true - * thrust::binary_search(input.begin(), input.end(), 3); // returns false - * thrust::binary_search(input.begin(), input.end(), 8); // returns true - * thrust::binary_search(input.begin(), input.end(), 9); // returns false - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -bool binary_search(ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p binary_search is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. Specifically, this version returns \c true if and only if - * there exists an iterator \c i in [first, last) such that - * comp(*i, value) and comp(value, *i) are both \c false. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return \c true if an equivalent element exists in [first, last), otherwise \c false. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::binary_search(thrust::device, input.begin(), input.end(), 0, thrust::less()); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 1, thrust::less()); // returns false - * thrust::binary_search(thrust::device, input.begin(), input.end(), 2, thrust::less()); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 3, thrust::less()); // returns false - * thrust::binary_search(thrust::device, input.begin(), input.end(), 8, thrust::less()); // returns true - * thrust::binary_search(thrust::device, input.begin(), input.end(), 9, thrust::less()); // returns false - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -__host__ __device__ -bool binary_search(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \p binary_search is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). 
- * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. Specifically, this version returns \c true if and only if - * there exists an iterator \c i in [first, last) such that - * comp(*i, value) and comp(value, *i) are both \c false. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return \c true if an equivalent element exists in [first, last), otherwise \c false. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::binary_search(input.begin(), input.end(), 0, thrust::less()); // returns true - * thrust::binary_search(input.begin(), input.end(), 1, thrust::less()); // returns false - * thrust::binary_search(input.begin(), input.end(), 2, thrust::less()); // returns true - * thrust::binary_search(input.begin(), input.end(), 3, thrust::less()); // returns false - * thrust::binary_search(input.begin(), input.end(), 8, thrust::less()); // returns true - * thrust::binary_search(input.begin(), input.end(), 9, thrust::less()); // returns false - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -bool binary_search(ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \p equal_range is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). The - * value returned by \p equal_range is essentially a combination of - * the values returned by \p lower_bound and \p upper_bound: it returns - * a \p pair of iterators \c i and \c j such that \c i is the first - * position where value could be inserted without violating the - * ordering and \c j is the last position where value could be inserted - * without violating the ordering. It follows that every element in the - * range [i, j) is equivalent to value, and that - * [i, j) is the largest subrange of [first, last) that - * has this property. - * - * This version of \p equal_range returns a \p pair of iterators - * [i, j), where \c i is the furthermost iterator in - * [first, last) such that, for every iterator \c k in - * [first, i), *k < value. \c j is the furthermost - * iterator in [first, last) such that, for every iterator - * \c k in [first, j), value < *k is \c false. - * For every iterator \c k in [i, j), neither - * value < *k nor *k < value is \c true. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return A \p pair of iterators [i, j) that define the range of equivalent elements. - * - * \tparam DerivedPolicy The name of the derived execution policy. 
- * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p equal_range - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::equal_range(thrust::device, input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 9); // returns [input.end(), input.end) - * \endcode - * - * \see http://www.sgi.com/tech/stl/equal_range.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p binary_search - */ -template -__host__ __device__ -thrust::pair -equal_range(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p equal_range is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). The - * value returned by \p equal_range is essentially a combination of - * the values returned by \p lower_bound and \p upper_bound: it returns - * a \p pair of iterators \c i and \c j such that \c i is the first - * position where value could be inserted without violating the - * ordering and \c j is the last position where value could be inserted - * without violating the ordering. It follows that every element in the - * range [i, j) is equivalent to value, and that - * [i, j) is the largest subrange of [first, last) that - * has this property. - * - * This version of \p equal_range returns a \p pair of iterators - * [i, j), where \c i is the furthermost iterator in - * [first, last) such that, for every iterator \c k in - * [first, i), *k < value. \c j is the furthermost - * iterator in [first, last) such that, for every iterator - * \c k in [first, j), value < *k is \c false. - * For every iterator \c k in [i, j), neither - * value < *k nor *k < value is \c true. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \return A \p pair of iterators [i, j) that define the range of equivalent elements. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam LessThanComparable is a model of LessThanComparable. - * - * The following code snippet demonstrates how to use \p equal_range - * to search for values in a ordered range. - * - * \code - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::equal_range(input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1) - * thrust::equal_range(input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1) - * thrust::equal_range(input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2) - * thrust::equal_range(input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2) - * thrust::equal_range(input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end) - * thrust::equal_range(input.begin(), input.end(), 9); // returns [input.end(), input.end) - * \endcode - * - * \see http://www.sgi.com/tech/stl/equal_range.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p binary_search - */ -template -thrust::pair -equal_range(ForwardIterator first, - ForwardIterator last, - const LessThanComparable& value); - - -/*! \p equal_range is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). The - * value returned by \p equal_range is essentially a combination of - * the values returned by \p lower_bound and \p upper_bound: it returns - * a \p pair of iterators \c i and \c j such that \c i is the first - * position where value could be inserted without violating the - * ordering and \c j is the last position where value could be inserted - * without violating the ordering. It follows that every element in the - * range [i, j) is equivalent to value, and that - * [i, j) is the largest subrange of [first, last) that - * has this property. - * - * This version of \p equal_range returns a \p pair of iterators - * [i, j). \c i is the furthermost iterator in - * [first, last) such that, for every iterator \c k in - * [first, i), comp(*k, value) is \c true. - * \c j is the furthermost iterator in [first, last) such - * that, for every iterator \c k in [first, last), - * comp(value, *k) is \c false. For every iterator \c k - * in [i, j), neither comp(value, *k) nor - * comp(*k, value) is \c true. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return A \p pair of iterators [i, j) that define the range of equivalent elements. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p equal_range - * to search for values in a ordered range using the \p thrust::device execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::equal_range(thrust::device, input.begin(), input.end(), 0, thrust::less()); // returns [input.begin(), input.begin() + 1) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 1, thrust::less()); // returns [input.begin() + 1, input.begin() + 1) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 2, thrust::less()); // returns [input.begin() + 1, input.begin() + 2) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 3, thrust::less()); // returns [input.begin() + 2, input.begin() + 2) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 8, thrust::less()); // returns [input.begin() + 4, input.end) - * thrust::equal_range(thrust::device, input.begin(), input.end(), 9, thrust::less()); // returns [input.end(), input.end) - * \endcode - * - * \see http://www.sgi.com/tech/stl/equal_range.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p binary_search - */ -template -__host__ __device__ -thrust::pair -equal_range(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \p equal_range is a version of binary search: it attempts to find - * the element value in an ordered range [first, last). The - * value returned by \p equal_range is essentially a combination of - * the values returned by \p lower_bound and \p upper_bound: it returns - * a \p pair of iterators \c i and \c j such that \c i is the first - * position where value could be inserted without violating the - * ordering and \c j is the last position where value could be inserted - * without violating the ordering. It follows that every element in the - * range [i, j) is equivalent to value, and that - * [i, j) is the largest subrange of [first, last) that - * has this property. - * - * This version of \p equal_range returns a \p pair of iterators - * [i, j). \c i is the furthermost iterator in - * [first, last) such that, for every iterator \c k in - * [first, i), comp(*k, value) is \c true. - * \c j is the furthermost iterator in [first, last) such - * that, for every iterator \c k in [first, last), - * comp(value, *k) is \c false. For every iterator \c k - * in [i, j), neither comp(value, *k) nor - * comp(*k, value) is \c true. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param value The value to be searched. - * \param comp The comparison operator. - * \return A \p pair of iterators [i, j) that define the range of equivalent elements. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam T is comparable to \p ForwardIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * The following code snippet demonstrates how to use \p equal_range - * to search for values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::equal_range(input.begin(), input.end(), 0, thrust::less()); // returns [input.begin(), input.begin() + 1) - * thrust::equal_range(input.begin(), input.end(), 1, thrust::less()); // returns [input.begin() + 1, input.begin() + 1) - * thrust::equal_range(input.begin(), input.end(), 2, thrust::less()); // returns [input.begin() + 1, input.begin() + 2) - * thrust::equal_range(input.begin(), input.end(), 3, thrust::less()); // returns [input.begin() + 2, input.begin() + 2) - * thrust::equal_range(input.begin(), input.end(), 8, thrust::less()); // returns [input.begin() + 4, input.end) - * thrust::equal_range(input.begin(), input.end(), 9, thrust::less()); // returns [input.end(), input.end) - * \endcode - * - * \see http://www.sgi.com/tech/stl/equal_range.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p binary_search - */ -template -thrust::pair -equal_range(ForwardIterator first, - ForwardIterator last, - const T& value, - StrictWeakOrdering comp); - - -/*! \addtogroup vectorized_binary_search Vectorized Searches - * \ingroup binary_search - * \{ - */ - - -////////////////////// -// Vector Functions // -////////////////////// - - -/*! \p lower_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::lower_bound(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [0, 1, 1, 2, 4, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -OutputIterator lower_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p lower_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::lower_bound(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [0, 1, 1, 2, 4, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -OutputIterator lower_bound(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p lower_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. This version of - * \p lower_bound uses function object \c comp for comparison. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. 
- * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::lower_bound(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [0, 1, 1, 2, 4, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -OutputIterator lower_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! \p lower_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. This version of - * \p lower_bound uses function object \c comp for comparison. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p lower_bound - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::lower_bound(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [0, 1, 1, 2, 4, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/lower_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -OutputIterator lower_bound(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! \p upper_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of last position where value could - * be inserted without violating the ordering. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::upper_bound(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [1, 1, 2, 2, 5, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -OutputIterator upper_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p upper_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). 
- * Specifically, it returns the index of last position where value could - * be inserted without violating the ordering. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::upper_bound(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [1, 1, 2, 2, 5, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p upper_bound - * \see \p equal_range - * \see \p binary_search - */ -template -OutputIterator upper_bound(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p upper_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. This version of - * \p upper_bound uses function object \c comp for comparison. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. 
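The vectorized overloads above map every query value to an insertion index in the ordered haystack, exactly as the scalar lower_bound/upper_bound do for a single value. As a host-side sanity check of the documented semantics (a Python analogy using the standard bisect module, not Thrust code), the outputs quoted in the snippets for the haystack [0, 2, 5, 7, 8] and query values [0, 1, 2, 3, 8, 9] can be reproduced like this:

```python
# Host-side analogy of vectorized thrust::lower_bound / thrust::upper_bound.
# bisect_left matches lower_bound and bisect_right matches upper_bound on a sorted list.
from bisect import bisect_left, bisect_right

haystack = [0, 2, 5, 7, 8]        # the ordered "input" sequence from the examples
needles = [0, 1, 2, 3, 8, 9]      # the "values" sequence from the examples

lower = [bisect_left(haystack, v) for v in needles]
upper = [bisect_right(haystack, v) for v in needles]

print(lower)  # [0, 1, 1, 2, 4, 5] -- matches the documented lower_bound output
print(upper)  # [1, 1, 2, 2, 5, 5] -- matches the documented upper_bound output
```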
- * - * The following code snippet demonstrates how to use \p upper_bound - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::upper_bound(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [1, 1, 2, 2, 5, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -__host__ __device__ -OutputIterator upper_bound(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! \p upper_bound is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * Specifically, it returns the index of first position where value could - * be inserted without violating the ordering. This version of - * \p upper_bound uses function object \c comp for comparison. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type. - * \tparam OutputIterator is a model of Output Iterator. - * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p upper_bound - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::upper_bound(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [1, 1, 2, 2, 5, 5] - * \endcode - * - * \see http://www.sgi.com/tech/stl/upper_bound.html - * \see \p lower_bound - * \see \p equal_range - * \see \p binary_search - */ -template -OutputIterator upper_bound(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! 
\p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -__host__ __device__ -OutputIterator binary_search(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. 
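The vectorized binary_search above writes one boolean per query value: true exactly when an element equivalent to the query exists in the ordered range. The same membership test can be sketched on the host with Python's bisect module (an analogy for the documented semantics, not Thrust code):

```python
# Host-side analogy of vectorized thrust::binary_search: one bool per query value.
from bisect import bisect_left

def contains(sorted_seq, value):
    """True iff an element equivalent to `value` exists in the sorted sequence."""
    i = bisect_left(sorted_seq, value)
    return i != len(sorted_seq) and not (value < sorted_seq[i])

haystack = [0, 2, 5, 7, 8]
needles = [0, 1, 2, 3, 8, 9]
output = [contains(haystack, v) for v in needles]
print(output)  # [True, False, True, False, True, False] -- matches the documented result
```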
- * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -OutputIterator binary_search(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result); - - -/*! \p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. This version of \p binary_search uses function object - * \c comp for comparison. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range using the \p thrust::device execution policy for - * parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... 
- * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(thrust::device, - * input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -__host__ __device__ -OutputIterator binary_search(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! \p binary_search is a vectorized version of binary search: for each - * iterator \c v in [values_first, values_last) it attempts to - * find the value *v in an ordered range [first, last). - * It returns \c true if an element that is equivalent to \c value - * is present in [first, last) and \c false if no such element - * exists. This version of \p binary_search uses function object - * \c comp for comparison. - * - * \param first The beginning of the ordered sequence. - * \param last The end of the ordered sequence. - * \param values_first The beginning of the search values sequence. - * \param values_last The end of the search values sequence. - * \param result The beginning of the output sequence. - * \param comp The comparison operator. - * - * \tparam ForwardIterator is a model of Forward Iterator. - * \tparam InputIterator is a model of Input Iterator. - * and \c InputIterator's \c value_type is LessThanComparable. - * \tparam OutputIterator is a model of Output Iterator. - * and bool is convertible to \c OutputIterator's \c value_type. - * \tparam StrictWeakOrdering is a model of Strict Weak Ordering. - * - * \pre The ranges [first,last) and [result, result + (last - first)) shall not overlap. - * - * The following code snippet demonstrates how to use \p binary_search - * to search for multiple values in a ordered range. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector input(5); - * - * input[0] = 0; - * input[1] = 2; - * input[2] = 5; - * input[3] = 7; - * input[4] = 8; - * - * thrust::device_vector values(6); - * values[0] = 0; - * values[1] = 1; - * values[2] = 2; - * values[3] = 3; - * values[4] = 8; - * values[5] = 9; - * - * thrust::device_vector output(6); - * - * thrust::binary_search(input.begin(), input.end(), - * values.begin(), values.end(), - * output.begin(), - * thrust::less()); - * - * // output is now [true, false, true, false, true, false] - * \endcode - * - * \see http://www.sgi.com/tech/stl/binary_search.html - * \see \p lower_bound - * \see \p upper_bound - * \see \p equal_range - */ -template -OutputIterator binary_search(ForwardIterator first, - ForwardIterator last, - InputIterator values_first, - InputIterator values_last, - OutputIterator result, - StrictWeakOrdering comp); - - -/*! \} // end vectorized_binary_search - */ - - -/*! \} // end binary_search - */ - - -/*! 
\} // end searching - */ - - -} // end namespace thrust - -#include - diff --git a/spaces/ClearLove443/Robby-chatbot/modules/utils.py b/spaces/ClearLove443/Robby-chatbot/modules/utils.py deleted file mode 100644 index d0b0288d3b65ea88bd4b6067be5c5af8804ee321..0000000000000000000000000000000000000000 --- a/spaces/ClearLove443/Robby-chatbot/modules/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -import pandas as pd -import streamlit as st -import pdfplumber - -from modules.chatbot import Chatbot -from modules.embedder import Embedder - -class Utilities: - - @staticmethod - def load_api_key(): - """ - Loads the OpenAI API key from the .env file or - from the user's input and returns it - """ - if not hasattr(st.session_state, "api_key"): - st.session_state.api_key = None - #you can define your API key in .env directly - if os.path.exists(".env") and os.environ.get("OPENAI_API_KEY") is not None: - user_api_key = os.environ["OPENAI_API_KEY"] - st.sidebar.success("API key loaded from .env", icon="🚀") - else: - if st.session_state.api_key is not None: - user_api_key = st.session_state.api_key - st.sidebar.success("API key loaded from previous input", icon="🚀") - else: - user_api_key = st.sidebar.text_input( - label="#### Your OpenAI API key 👇", placeholder="sk-...", type="password" - ) - if user_api_key: - st.session_state.api_key = user_api_key - - return user_api_key - - - @staticmethod - def handle_upload(file_types): - """ - Handles and display uploaded_file - :param file_types: List of accepted file types, e.g., ["csv", "pdf", "txt"] - """ - uploaded_file = st.sidebar.file_uploader("upload", type=file_types, label_visibility="collapsed") - if uploaded_file is not None: - - def show_csv_file(uploaded_file): - file_container = st.expander("Your CSV file :") - uploaded_file.seek(0) - shows = pd.read_csv(uploaded_file) - file_container.write(shows) - - def show_pdf_file(uploaded_file): - file_container = st.expander("Your PDF file :") - with pdfplumber.open(uploaded_file) as pdf: - pdf_text = "" - for page in pdf.pages: - pdf_text += page.extract_text() + "\n\n" - file_container.write(pdf_text) - - def show_txt_file(uploaded_file): - file_container = st.expander("Your TXT file:") - uploaded_file.seek(0) - content = uploaded_file.read().decode("utf-8") - file_container.write(content) - - def get_file_extension(uploaded_file): - return os.path.splitext(uploaded_file)[1].lower() - - file_extension = get_file_extension(uploaded_file.name) - - # Show the contents of the file based on its extension - #if file_extension == ".csv" : - # show_csv_file(uploaded_file) - if file_extension== ".pdf" : - show_pdf_file(uploaded_file) - elif file_extension== ".txt" : - show_txt_file(uploaded_file) - - else: - st.session_state["reset_chat"] = True - - #print(uploaded_file) - return uploaded_file - - @staticmethod - def setup_chatbot(uploaded_file, model, temperature): - """ - Sets up the chatbot with the uploaded file, model, and temperature - """ - embeds = Embedder() - - with st.spinner("Processing..."): - uploaded_file.seek(0) - file = uploaded_file.read() - # Get the document embeddings for the uploaded file - vectors = embeds.getDocEmbeds(file, uploaded_file.name) - - # Create a Chatbot instance with the specified model and temperature - chatbot = Chatbot(model, temperature,vectors) - st.session_state["ready"] = True - - return chatbot - - - diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" 
"b/spaces/Cong723/gpt-academic-public/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index f1fe20171cc54aec0c79f4961e71b57845f252d5..0000000000000000000000000000000000000000 --- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - 
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/Curranj/FlowerDiffusion/app.py b/spaces/Curranj/FlowerDiffusion/app.py deleted file mode 100644 index 7b0b10379b315d319921da1edbe887397c29ff5a..0000000000000000000000000000000000000000 --- a/spaces/Curranj/FlowerDiffusion/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import io -import os -import warnings - -from PIL import Image -from stability_sdk import client -import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation - -import gradio as gr -stability_api = client.StabilityInference( - key=os.environ["Secret"], - verbose=True, -) - - -def infer(prompt): - # the object returned is a python generator - answers = stability_api.generate( - prompt=f"Beautiful Portait of a {prompt} made out of flowers 💐 🌺 🌸 , artstation winner by Victo Ngai, Kilian Eng, vibrant colors, winning-award masterpiece, aesthetic octane render, 8K HD", - height =640 - ) - - # iterating over the generator produces the api response - for resp in answers: - for artifact in resp.artifacts: - if artifact.finish_reason == generation.FILTER: - warnings.warn( - "Your request activated the API's safety filters and could not be processed." - "Please modify the prompt and try again.") - if artifact.type == generation.ARTIFACT_IMAGE: - img = Image.open(io.BytesIO(artifact.binary)) - return img - - -block = gr.Blocks(css=".container { max-width: 600px; margin: auto; }") - -num_samples = 1 - - - -with block as demo: - gr.Markdown("
Flower Diffusion
") - gr.Markdown( - "Get a pretty flowery image from any prompt - keep it simple!" - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - - text = gr.Textbox( - value = "Kitty cat", - label="Enter your prompt", show_label=False, max_lines=1 - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Run").style( - margin=False, - rounded=(False, True, True, False), - ) - - - gallery = gr.Image() - text.submit(infer, inputs=[text], outputs=gallery) - btn.click(infer, inputs=[text], outputs=gallery) - - - - - -demo.launch(debug=True, enable_queue = True) \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BufrStubImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BufrStubImagePlugin.py deleted file mode 100644 index 0425bbd750eacf884ca1fc0ba8aa893a71ccdfc6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BufrStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# BUFR stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific BUFR image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"BUFR" or prefix[:4] == b"ZCZC" - - -class BufrStubImageFile(ImageFile.StubImageFile): - format = "BUFR" - format_description = "BUFR" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(4)): - msg = "Not a BUFR file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "BUFR save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(BufrStubImageFile.format, BufrStubImageFile, _accept) -Image.register_save(BufrStubImageFile.format, _save) - -Image.register_extension(BufrStubImageFile.format, ".bufr") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/__main__.py deleted file mode 100644 index decf9ee6e50a612c65a87ebeaa8be115f1d25242..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/__main__.py +++ /dev/null @@ -1,6 +0,0 @@ -import sys -from fontTools.subset import main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Model3D-98fc2b2c.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Model3D-98fc2b2c.css deleted file mode 100644 index cee82ea831d77ca0e001baf10a07f84e176679f0..0000000000000000000000000000000000000000 --- 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Model3D-98fc2b2c.css +++ /dev/null @@ -1 +0,0 @@ -.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)} diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/web_playwright.py b/spaces/DaleChen/AutoGPT/autogpt/commands/web_playwright.py deleted file mode 100644 index 4e388ded203cefb5e24f9116f7fe5b8a94893413..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/commands/web_playwright.py +++ /dev/null @@ -1,80 +0,0 @@ -"""Web scraping commands using Playwright""" -from __future__ import annotations - -try: - from playwright.sync_api import sync_playwright -except ImportError: - print( - "Playwright not installed. Please install it with 'pip install playwright' to use." - ) -from bs4 import BeautifulSoup - -from autogpt.processing.html import extract_hyperlinks, format_hyperlinks - - -def scrape_text(url: str) -> str: - """Scrape text from a webpage - - Args: - url (str): The URL to scrape text from - - Returns: - str: The scraped text - """ - with sync_playwright() as p: - browser = p.chromium.launch() - page = browser.new_page() - - try: - page.goto(url) - html_content = page.content() - soup = BeautifulSoup(html_content, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - - except Exception as e: - text = f"Error: {str(e)}" - - finally: - browser.close() - - return text - - -def scrape_links(url: str) -> str | list[str]: - """Scrape links from a webpage - - Args: - url (str): The URL to scrape links from - - Returns: - Union[str, List[str]]: The scraped links - """ - with sync_playwright() as p: - browser = p.chromium.launch() - page = browser.new_page() - - try: - page.goto(url) - html_content = page.content() - soup = BeautifulSoup(html_content, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - hyperlinks = extract_hyperlinks(soup, url) - formatted_links = format_hyperlinks(hyperlinks) - - except Exception as e: - formatted_links = f"Error: {str(e)}" - - finally: - browser.close() - - return formatted_links diff --git a/spaces/DamianMH/Mlove/Dockerfile b/spaces/DamianMH/Mlove/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/DamianMH/Mlove/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/d2_deformable_detr.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/d2_deformable_detr.py deleted file mode 100644 index 47ff220fc3946d1bf68fad87076589e46b274ef3..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/d2_deformable_detr.py +++ /dev/null @@ -1,308 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
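Returning briefly to the BufrStubImagePlugin removed above: the stub only recognises the BUFR signature and defers all decoding to whatever object is installed via register_handler, calling handler.open(image_file) when a file is opened and handler.save(im, fp, filename) when one is saved. A minimal, hypothetical handler sketch consistent with those calls (the handler body and the input filename are assumptions, not part of Pillow):

```python
# Minimal sketch of an application-specific BUFR handler for the stub plugin above.
from PIL import Image, BufrStubImagePlugin


class DummyBufrHandler:
    # Matches the calls made by the stub: open(im) on load, save(im, fp, filename) on save.
    def open(self, im):
        # A real handler would decode the BUFR payload and set the image mode/size here.
        pass

    def save(self, im, fp, filename):
        raise OSError("saving BUFR is not supported by this sketch")


BufrStubImagePlugin.register_handler(DummyBufrHandler())
img = Image.open("observations.bufr")  # hypothetical file beginning with b"BUFR"
```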
-import torch -import torch.nn.functional as F -from torch import nn -import math - -from detectron2.modeling import META_ARCH_REGISTRY, build_backbone -from detectron2.structures import Boxes, Instances -from ..utils import load_class_freq, get_fed_loss_inds - -from models.backbone import Joiner -from models.deformable_detr import DeformableDETR, SetCriterion, MLP -from models.deformable_detr import _get_clones -from models.matcher import HungarianMatcher -from models.position_encoding import PositionEmbeddingSine -from models.deformable_transformer import DeformableTransformer -from models.segmentation import sigmoid_focal_loss -from util.box_ops import box_cxcywh_to_xyxy, box_xyxy_to_cxcywh -from util.misc import NestedTensor, accuracy - - -__all__ = ["DeformableDetr"] - -class CustomSetCriterion(SetCriterion): - def __init__(self, num_classes, matcher, weight_dict, losses, \ - focal_alpha=0.25, use_fed_loss=False): - super().__init__(num_classes, matcher, weight_dict, losses, focal_alpha) - self.use_fed_loss = use_fed_loss - if self.use_fed_loss: - self.register_buffer( - 'fed_loss_weight', load_class_freq(freq_weight=0.5)) - - def loss_labels(self, outputs, targets, indices, num_boxes, log=True): - """Classification loss (NLL) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - assert 'pred_logits' in outputs - src_logits = outputs['pred_logits'] - - idx = self._get_src_permutation_idx(indices) - target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]) - target_classes = torch.full(src_logits.shape[:2], self.num_classes, - dtype=torch.int64, device=src_logits.device) - target_classes[idx] = target_classes_o - - target_classes_onehot = torch.zeros( - [src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1], - dtype=src_logits.dtype, layout=src_logits.layout, - device=src_logits.device) - target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1) - - target_classes_onehot = target_classes_onehot[:,:,:-1] # B x N x C - if self.use_fed_loss: - inds = get_fed_loss_inds( - gt_classes=target_classes_o, - num_sample_cats=50, - weight=self.fed_loss_weight, - C=target_classes_onehot.shape[2]) - loss_ce = sigmoid_focal_loss( - src_logits[:, :, inds], - target_classes_onehot[:, :, inds], - num_boxes, - alpha=self.focal_alpha, - gamma=2) * src_logits.shape[1] - else: - loss_ce = sigmoid_focal_loss( - src_logits, target_classes_onehot, num_boxes, - alpha=self.focal_alpha, - gamma=2) * src_logits.shape[1] - losses = {'loss_ce': loss_ce} - - if log: - # TODO this should probably be a separate loss, not hacked in this one here - losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0] - return losses - - -class MaskedBackbone(nn.Module): - """ This is a thin wrapper around D2's backbone to provide padding masking""" - - def __init__(self, cfg): - super().__init__() - self.backbone = build_backbone(cfg) - backbone_shape = self.backbone.output_shape() - self.feature_strides = [backbone_shape[f].stride for f in backbone_shape.keys()] - self.strides = [backbone_shape[f].stride for f in backbone_shape.keys()] - self.num_channels = [backbone_shape[x].channels for x in backbone_shape.keys()] - - def forward(self, tensor_list: NestedTensor): - xs = self.backbone(tensor_list.tensors) - out = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - return out - 
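MaskedBackbone above pairs every backbone feature level with a padding mask derived from the batch-level mask carried in the incoming NestedTensor; the only non-trivial step is resizing that boolean mask to each level's spatial resolution. A standalone PyTorch sketch of that step (shapes here are illustrative assumptions, not values from the model):

```python
# Sketch of the padding-mask downsampling performed in MaskedBackbone.forward:
# the batch-level boolean mask (True = padded pixel) is resized to the feature
# map's resolution with nearest interpolation.
import torch
import torch.nn.functional as F

batch_mask = torch.zeros(2, 640, 640, dtype=torch.bool)  # no padding in image 0
batch_mask[1, :, 400:] = True                            # image 1 padded on the right

feature = torch.randn(2, 256, 80, 80)                    # one backbone level (stride 8)

level_mask = F.interpolate(batch_mask[None].float(),
                           size=feature.shape[-2:]).to(torch.bool)[0]
print(level_mask.shape)      # torch.Size([2, 80, 80])
print(level_mask[1, 0, -1])  # tensor(True) -- padded columns stay masked after resizing
```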
-@META_ARCH_REGISTRY.register() -class DeformableDetr(nn.Module): - """ - Implement Deformable Detr - """ - - def __init__(self, cfg): - super().__init__() - self.with_image_labels = cfg.WITH_IMAGE_LABELS - self.weak_weight = cfg.MODEL.DETR.WEAK_WEIGHT - - self.device = torch.device(cfg.MODEL.DEVICE) - self.test_topk = cfg.TEST.DETECTIONS_PER_IMAGE - self.num_classes = cfg.MODEL.DETR.NUM_CLASSES - self.mask_on = cfg.MODEL.MASK_ON - hidden_dim = cfg.MODEL.DETR.HIDDEN_DIM - num_queries = cfg.MODEL.DETR.NUM_OBJECT_QUERIES - - # Transformer parameters: - nheads = cfg.MODEL.DETR.NHEADS - dropout = cfg.MODEL.DETR.DROPOUT - dim_feedforward = cfg.MODEL.DETR.DIM_FEEDFORWARD - enc_layers = cfg.MODEL.DETR.ENC_LAYERS - dec_layers = cfg.MODEL.DETR.DEC_LAYERS - num_feature_levels = cfg.MODEL.DETR.NUM_FEATURE_LEVELS - two_stage = cfg.MODEL.DETR.TWO_STAGE - with_box_refine = cfg.MODEL.DETR.WITH_BOX_REFINE - - # Loss parameters: - giou_weight = cfg.MODEL.DETR.GIOU_WEIGHT - l1_weight = cfg.MODEL.DETR.L1_WEIGHT - deep_supervision = cfg.MODEL.DETR.DEEP_SUPERVISION - cls_weight = cfg.MODEL.DETR.CLS_WEIGHT - focal_alpha = cfg.MODEL.DETR.FOCAL_ALPHA - - N_steps = hidden_dim // 2 - d2_backbone = MaskedBackbone(cfg) - backbone = Joiner(d2_backbone, PositionEmbeddingSine(N_steps, normalize=True)) - - transformer = DeformableTransformer( - d_model=hidden_dim, - nhead=nheads, - num_encoder_layers=enc_layers, - num_decoder_layers=dec_layers, - dim_feedforward=dim_feedforward, - dropout=dropout, - activation="relu", - return_intermediate_dec=True, - num_feature_levels=num_feature_levels, - dec_n_points=4, - enc_n_points=4, - two_stage=two_stage, - two_stage_num_proposals=num_queries) - - self.detr = DeformableDETR( - backbone, transformer, num_classes=self.num_classes, - num_queries=num_queries, - num_feature_levels=num_feature_levels, - aux_loss=deep_supervision, - with_box_refine=with_box_refine, - two_stage=two_stage, - ) - - if self.mask_on: - assert 0, 'Mask is not supported yet :(' - - matcher = HungarianMatcher( - cost_class=cls_weight, cost_bbox=l1_weight, cost_giou=giou_weight) - weight_dict = {"loss_ce": cls_weight, "loss_bbox": l1_weight} - weight_dict["loss_giou"] = giou_weight - if deep_supervision: - aux_weight_dict = {} - for i in range(dec_layers - 1): - aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()}) - weight_dict.update(aux_weight_dict) - print('weight_dict', weight_dict) - losses = ["labels", "boxes", "cardinality"] - if self.mask_on: - losses += ["masks"] - self.criterion = CustomSetCriterion( - self.num_classes, matcher=matcher, weight_dict=weight_dict, - focal_alpha=focal_alpha, - losses=losses, - use_fed_loss=cfg.MODEL.DETR.USE_FED_LOSS - ) - pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(3, 1, 1) - pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(self.device).view(3, 1, 1) - self.normalizer = lambda x: (x - pixel_mean) / pixel_std - - - def forward(self, batched_inputs): - """ - Args: - Returns: - dict[str: Tensor]: - mapping from a named loss to a tensor storing the loss. Used during training only. 
- """ - images = self.preprocess_image(batched_inputs) - output = self.detr(images) - if self.training: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - targets = self.prepare_targets(gt_instances) - loss_dict = self.criterion(output, targets) - weight_dict = self.criterion.weight_dict - for k in loss_dict.keys(): - if k in weight_dict: - loss_dict[k] *= weight_dict[k] - if self.with_image_labels: - if batched_inputs[0]['ann_type'] in ['image', 'captiontag']: - loss_dict['loss_image'] = self.weak_weight * self._weak_loss( - output, batched_inputs) - else: - loss_dict['loss_image'] = images[0].new_zeros( - [1], dtype=torch.float32)[0] - # import pdb; pdb.set_trace() - return loss_dict - else: - image_sizes = output["pred_boxes"].new_tensor( - [(t["height"], t["width"]) for t in batched_inputs]) - results = self.post_process(output, image_sizes) - return results - - - def prepare_targets(self, targets): - new_targets = [] - for targets_per_image in targets: - h, w = targets_per_image.image_size - image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float, device=self.device) - gt_classes = targets_per_image.gt_classes - gt_boxes = targets_per_image.gt_boxes.tensor / image_size_xyxy - gt_boxes = box_xyxy_to_cxcywh(gt_boxes) - new_targets.append({"labels": gt_classes, "boxes": gt_boxes}) - if self.mask_on and hasattr(targets_per_image, 'gt_masks'): - assert 0, 'Mask is not supported yet :(' - gt_masks = targets_per_image.gt_masks - gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w) - new_targets[-1].update({'masks': gt_masks}) - return new_targets - - - def post_process(self, outputs, target_sizes): - """ - """ - out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes'] - assert len(out_logits) == len(target_sizes) - assert target_sizes.shape[1] == 2 - - prob = out_logits.sigmoid() - topk_values, topk_indexes = torch.topk( - prob.view(out_logits.shape[0], -1), self.test_topk, dim=1) - scores = topk_values - topk_boxes = topk_indexes // out_logits.shape[2] - labels = topk_indexes % out_logits.shape[2] - boxes = box_cxcywh_to_xyxy(out_bbox) - boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1,1,4)) - - # and from relative [0, 1] to absolute [0, height] coordinates - img_h, img_w = target_sizes.unbind(1) - scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1) - boxes = boxes * scale_fct[:, None, :] - - results = [] - for s, l, b, size in zip(scores, labels, boxes, target_sizes): - r = Instances((size[0], size[1])) - r.pred_boxes = Boxes(b) - r.scores = s - r.pred_classes = l - results.append({'instances': r}) - return results - - - def preprocess_image(self, batched_inputs): - """ - Normalize, pad and batch the input images. 
- """ - images = [self.normalizer(x["image"].to(self.device)) for x in batched_inputs] - return images - - - def _weak_loss(self, outputs, batched_inputs): - loss = 0 - for b, x in enumerate(batched_inputs): - labels = x['pos_category_ids'] - pred_logits = [outputs['pred_logits'][b]] - pred_boxes = [outputs['pred_boxes'][b]] - for xx in outputs['aux_outputs']: - pred_logits.append(xx['pred_logits'][b]) - pred_boxes.append(xx['pred_boxes'][b]) - pred_logits = torch.stack(pred_logits, dim=0) # L x N x C - pred_boxes = torch.stack(pred_boxes, dim=0) # L x N x 4 - for label in labels: - loss += self._max_size_loss( - pred_logits, pred_boxes, label) / len(labels) - loss = loss / len(batched_inputs) - return loss - - - def _max_size_loss(self, logits, boxes, label): - ''' - Inputs: - logits: L x N x C - boxes: L x N x 4 - ''' - target = logits.new_zeros((logits.shape[0], logits.shape[2])) - target[:, label] = 1. - sizes = boxes[..., 2] * boxes[..., 3] # L x N - ind = sizes.argmax(dim=1) # L - loss = F.binary_cross_entropy_with_logits( - logits[range(len(ind)), ind], target, reduction='sum') - return loss \ No newline at end of file diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/options/__init__.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/options/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Dinoking/Guccio-AI-Designer/decomposition.py b/spaces/Dinoking/Guccio-AI-Designer/decomposition.py deleted file mode 100644 index 4819e3324707f15c33fba6f35ab6abdc66dea919..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/decomposition.py +++ /dev/null @@ -1,402 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. 
- -# Patch for broken CTRL+C handler -# https://github.com/ContinuumIO/anaconda-issues/issues/905 -import os -os.environ['FOR_DISABLE_CONSOLE_CTRL_HANDLER'] = '1' - -import numpy as np -import os -from pathlib import Path -import re -import sys -import datetime -import argparse -import torch -import json -from types import SimpleNamespace -import scipy -from scipy.cluster.vq import kmeans -from tqdm import trange -from netdissect.nethook import InstrumentedModel -from config import Config -from estimators import get_estimator -from models import get_instrumented_model - -SEED_SAMPLING = 1 -SEED_RANDOM_DIRS = 2 -SEED_LINREG = 3 -SEED_VISUALIZATION = 5 - -B = 20 -n_clusters = 500 - -def get_random_dirs(components, dimensions): - gen = np.random.RandomState(seed=SEED_RANDOM_DIRS) - dirs = gen.normal(size=(components, dimensions)) - dirs /= np.sqrt(np.sum(dirs**2, axis=1, keepdims=True)) - return dirs.astype(np.float32) - -# Compute maximum batch size for given VRAM and network -def get_max_batch_size(inst, device, layer_name=None): - inst.remove_edits() - - # Reset statistics - torch.cuda.reset_max_memory_cached(device) - torch.cuda.reset_max_memory_allocated(device) - total_mem = torch.cuda.get_device_properties(device).total_memory - - B_max = 20 - - # Measure actual usage - for i in range(2, B_max, 2): - z = inst.model.sample_latent(n_samples=i) - if layer_name: - inst.model.partial_forward(z, layer_name) - else: - inst.model.forward(z) - - maxmem = torch.cuda.max_memory_allocated(device) - del z - - if maxmem > 0.5*total_mem: - print('Batch size {:d}: memory usage {:.0f}MB'.format(i, maxmem / 1e6)) - return i - - return B_max - -# Solve for directions in latent space that match PCs in activaiton space -def linreg_lstsq(comp_np, mean_np, stdev_np, inst, config): - print('Performing least squares regression', flush=True) - - torch.manual_seed(SEED_LINREG) - np.random.seed(SEED_LINREG) - - comp = torch.from_numpy(comp_np).float().to(inst.model.device) - mean = torch.from_numpy(mean_np).float().to(inst.model.device) - stdev = torch.from_numpy(stdev_np).float().to(inst.model.device) - - n_samp = max(10_000, config.n) // B * B # make divisible - n_comp = comp.shape[0] - latent_dims = inst.model.get_latent_dims() - - # We're looking for M s.t. M*P*G'(Z) = Z => M*A = Z - # Z = batch of latent vectors (n_samples x latent_dims) - # G'(Z) = batch of activations at intermediate layer - # A = P*G'(Z) = projected activations (n_samples x pca_coords) - # M = linear mapping (pca_coords x latent_dims) - - # Minimization min_M ||MA - Z||_l2 rewritten as min_M.T ||A.T*M.T - Z.T||_l2 - # to match format expected by pytorch.lstsq - - # TODO: regression on pixel-space outputs? 
(using nonlinear optimizer) - # min_M lpips(G_full(MA), G_full(Z)) - - # Tensors to fill with data - # Dimensions other way around, so these are actually the transposes - A = np.zeros((n_samp, n_comp), dtype=np.float32) - Z = np.zeros((n_samp, latent_dims), dtype=np.float32) - - # Project tensor X onto PCs, return coordinates - def project(X, comp): - N = X.shape[0] - K = comp.shape[0] - coords = torch.bmm(comp.expand([N]+[-1]*comp.ndim), X.view(N, -1, 1)) - return coords.reshape(N, K) - - for i in trange(n_samp // B, desc='Collecting samples', ascii=True): - z = inst.model.sample_latent(B) - inst.model.partial_forward(z, config.layer) - act = inst.retained_features()[config.layer].reshape(B, -1) - - # Project onto basis - act = act - mean - coords = project(act, comp) - coords_scaled = coords / stdev - - A[i*B:(i+1)*B] = coords_scaled.detach().cpu().numpy() - Z[i*B:(i+1)*B] = z.detach().cpu().numpy().reshape(B, -1) - - # Solve least squares fit - - # gelsd = divide-and-conquer SVD; good default - # gelsy = complete orthogonal factorization; sometimes faster - # gelss = SVD; slow but less memory hungry - M_t = scipy.linalg.lstsq(A, Z, lapack_driver='gelsd')[0] # torch.lstsq(Z, A)[0][:n_comp, :] - - # Solution given by rows of M_t - Z_comp = M_t[:n_comp, :] - Z_mean = np.mean(Z, axis=0, keepdims=True) - - return Z_comp, Z_mean - -def regression(comp, mean, stdev, inst, config): - # Sanity check: verify orthonormality - M = np.dot(comp, comp.T) - if not np.allclose(M, np.identity(M.shape[0])): - det = np.linalg.det(M) - print(f'WARNING: Computed basis is not orthonormal (determinant={det})') - - return linreg_lstsq(comp, mean, stdev, inst, config) - -def compute(config, dump_name, instrumented_model): - global B - - timestamp = lambda : datetime.datetime.now().strftime("%d.%m %H:%M") - print(f'[{timestamp()}] Computing', dump_name.name) - - # Ensure reproducibility - torch.manual_seed(0) # also sets cuda seeds - np.random.seed(0) - - # Speed up backend - torch.backends.cudnn.benchmark = True - - has_gpu = torch.cuda.is_available() - device = torch.device('cuda' if has_gpu else 'cpu') - layer_key = config.layer - - if instrumented_model is None: - inst = get_instrumented_model(config.model, config.output_class, layer_key, device) - model = inst.model - else: - print('Reusing InstrumentedModel instance') - inst = instrumented_model - model = inst.model - inst.remove_edits() - model.set_output_class(config.output_class) - - # Regress back to w space - if config.use_w: - print('Using W latent space') - model.use_w() - - inst.retain_layer(layer_key) - model.partial_forward(model.sample_latent(1), layer_key) - sample_shape = inst.retained_features()[layer_key].shape - sample_dims = np.prod(sample_shape) - print('Feature shape:', sample_shape) - - input_shape = inst.model.get_latent_shape() - input_dims = inst.model.get_latent_dims() - - config.components = min(config.components, sample_dims) - transformer = get_estimator(config.estimator, config.components, config.sparsity) - - X = None - X_global_mean = None - - # Figure out batch size if not provided - B = config.batch_size or get_max_batch_size(inst, device, layer_key) - - # Divisible by B (ignored in output name) - N = config.n // B * B - - # Compute maximum batch size based on RAM + pagefile budget - target_bytes = 20 * 1_000_000_000 # GB - feat_size_bytes = sample_dims * np.dtype('float64').itemsize - N_limit_RAM = np.floor_divide(target_bytes, feat_size_bytes) - if not transformer.batch_support and N > N_limit_RAM: - print('WARNING: 
estimator does not support batching, ' \ - 'given config will use {:.1f} GB memory.'.format(feat_size_bytes / 1_000_000_000 * N)) - - # 32-bit LAPACK gets very unhappy about huge matrices (in linalg.svd) - if config.estimator == 'ica': - lapack_max_N = np.floor_divide(np.iinfo(np.int32).max // 4, sample_dims) # 4x extra buffer - if N > lapack_max_N: - raise RuntimeError(f'Matrices too large for ICA, please use N <= {lapack_max_N}') - - print('B={}, N={}, dims={}, N/dims={:.1f}'.format(B, N, sample_dims, N/sample_dims), flush=True) - - # Must not depend on chosen batch size (reproducibility) - NB = max(B, max(2_000, 3*config.components)) # ipca: as large as possible! - - samples = None - if not transformer.batch_support: - samples = np.zeros((N + NB, sample_dims), dtype=np.float32) - - torch.manual_seed(config.seed or SEED_SAMPLING) - np.random.seed(config.seed or SEED_SAMPLING) - - # Use exactly the same latents regardless of batch size - # Store in main memory, since N might be huge (1M+) - # Run in batches, since sample_latent() might perform Z -> W mapping - n_lat = ((N + NB - 1) // B + 1) * B - latents = np.zeros((n_lat, *input_shape[1:]), dtype=np.float32) - with torch.no_grad(): - for i in trange(n_lat // B, desc='Sampling latents'): - latents[i*B:(i+1)*B] = model.sample_latent(n_samples=B).cpu().numpy() - - # Decomposition on non-Gaussian latent space - samples_are_latents = layer_key in ['g_mapping', 'style'] and inst.model.latent_space_name() == 'W' - - canceled = False - try: - X = np.ones((NB, sample_dims), dtype=np.float32) - action = 'Fitting' if transformer.batch_support else 'Collecting' - for gi in trange(0, N, NB, desc=f'{action} batches (NB={NB})', ascii=True): - for mb in range(0, NB, B): - z = torch.from_numpy(latents[gi+mb:gi+mb+B]).to(device) - - if samples_are_latents: - # Decomposition on latents directly (e.g. StyleGAN W) - batch = z.reshape((B, -1)) - else: - # Decomposition on intermediate layer - with torch.no_grad(): - model.partial_forward(z, layer_key) - - # Permuted to place PCA dimensions last - batch = inst.retained_features()[layer_key].reshape((B, -1)) - - space_left = min(B, NB - mb) - X[mb:mb+space_left] = batch.cpu().numpy()[:space_left] - - if transformer.batch_support: - if not transformer.fit_partial(X.reshape(-1, sample_dims)): - break - else: - samples[gi:gi+NB, :] = X.copy() - except KeyboardInterrupt: - if not transformer.batch_support: - sys.exit(1) # no progress yet - - dump_name = dump_name.parent / dump_name.name.replace(f'n{N}', f'n{gi}') - print(f'Saving current state to "{dump_name.name}" before exiting') - canceled = True - - if not transformer.batch_support: - X = samples # Use all samples - X_global_mean = X.mean(axis=0, keepdims=True, dtype=np.float32) # TODO: activations surely multi-modal...! 
- X -= X_global_mean - - print(f'[{timestamp()}] Fitting whole batch') - t_start_fit = datetime.datetime.now() - - transformer.fit(X) - - print(f'[{timestamp()}] Done in {datetime.datetime.now() - t_start_fit}') - assert np.all(transformer.transformer.mean_ < 1e-3), 'Mean of normalized data should be zero' - else: - X_global_mean = transformer.transformer.mean_.reshape((1, sample_dims)) - X = X.reshape(-1, sample_dims) - X -= X_global_mean - - X_comp, X_stdev, X_var_ratio = transformer.get_components() - - assert X_comp.shape[1] == sample_dims \ - and X_comp.shape[0] == config.components \ - and X_global_mean.shape[1] == sample_dims \ - and X_stdev.shape[0] == config.components, 'Invalid shape' - - # 'Activations' are really latents in a secondary latent space - if samples_are_latents: - Z_comp = X_comp - Z_global_mean = X_global_mean - else: - Z_comp, Z_global_mean = regression(X_comp, X_global_mean, X_stdev, inst, config) - - # Normalize - Z_comp /= np.linalg.norm(Z_comp, axis=-1, keepdims=True) - - # Random projections - # We expect these to explain much less of the variance - random_dirs = get_random_dirs(config.components, np.prod(sample_shape)) - n_rand_samples = min(5000, X.shape[0]) - X_view = X[:n_rand_samples, :].T - assert np.shares_memory(X_view, X), "Error: slice produced copy" - X_stdev_random = np.dot(random_dirs, X_view).std(axis=1) - - # Inflate back to proper shapes (for easier broadcasting) - X_comp = X_comp.reshape(-1, *sample_shape) - X_global_mean = X_global_mean.reshape(sample_shape) - Z_comp = Z_comp.reshape(-1, *input_shape) - Z_global_mean = Z_global_mean.reshape(input_shape) - - # Compute stdev in latent space if non-Gaussian - lat_stdev = np.ones_like(X_stdev) - if config.use_w: - samples = model.sample_latent(5000).reshape(5000, input_dims).detach().cpu().numpy() - coords = np.dot(Z_comp.reshape(-1, input_dims), samples.T) - lat_stdev = coords.std(axis=1) - - os.makedirs(dump_name.parent, exist_ok=True) - np.savez_compressed(dump_name, **{ - 'act_comp': X_comp.astype(np.float32), - 'act_mean': X_global_mean.astype(np.float32), - 'act_stdev': X_stdev.astype(np.float32), - 'lat_comp': Z_comp.astype(np.float32), - 'lat_mean': Z_global_mean.astype(np.float32), - 'lat_stdev': lat_stdev.astype(np.float32), - 'var_ratio': X_var_ratio.astype(np.float32), - 'random_stdevs': X_stdev_random.astype(np.float32), - }) - - if canceled: - sys.exit(1) - - # Don't shutdown if passed as param - if instrumented_model is None: - inst.close() - del inst - del model - - del X - del X_comp - del random_dirs - del batch - del samples - del latents - torch.cuda.empty_cache() - -# Return cached results or commpute if needed -# Pass existing InstrumentedModel instance to reuse it -def get_or_compute(config, model=None, submit_config=None, force_recompute=False): - if submit_config is None: - wrkdir = str(Path(__file__).parent.resolve()) - submit_config = SimpleNamespace(run_dir_root = wrkdir, run_dir = wrkdir) - - # Called directly by run.py - return _compute(submit_config, config, model, force_recompute) - -def _compute(submit_config, config, model=None, force_recompute=False): - basedir = Path(submit_config.run_dir) - outdir = basedir / 'out' - - if config.n is None: - raise RuntimeError('Must specify number of samples with -n=XXX') - - if model and not isinstance(model, InstrumentedModel): - raise RuntimeError('Passed model has to be wrapped in "InstrumentedModel"') - - if config.use_w and not 'StyleGAN' in config.model: - raise RuntimeError(f'Cannot change latent space of non-StyleGAN 
model {config.model}') - - transformer = get_estimator(config.estimator, config.components, config.sparsity) - dump_name = "{}-{}_{}_{}_n{}{}{}.npz".format( - config.model.lower(), - config.output_class.replace(' ', '_'), - config.layer.lower(), - transformer.get_param_str(), - config.n, - '_w' if config.use_w else '', - f'_seed{config.seed}' if config.seed else '' - ) - - dump_path = basedir / 'cache' / 'components' / dump_name - - if not dump_path.is_file() or force_recompute: - print('Not cached') - t_start = datetime.datetime.now() - compute(config, dump_path, model) - print('Total time:', datetime.datetime.now() - t_start) - - return dump_path \ No newline at end of file diff --git a/spaces/Disguised/anime_character_recognizer/app.py b/spaces/Disguised/anime_character_recognizer/app.py deleted file mode 100644 index 662b79156ca568397324ce9f05f54fd0284c47e7..0000000000000000000000000000000000000000 --- a/spaces/Disguised/anime_character_recognizer/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import gradio as gr -import re -from glob import glob - -learn = load_learner('model_ft15(extra).pkl') - -categories = learn.dls.vocab - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() - - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples='./examples') -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/config.py b/spaces/Dorado607/ChuanhuChatGPT/modules/config.py deleted file mode 100644 index 115312dd2ec4e0bd99eb8b5869b2f0aeed649039..0000000000000000000000000000000000000000 --- a/spaces/Dorado607/ChuanhuChatGPT/modules/config.py +++ /dev/null @@ -1,269 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import commentjson as json - -from . import shared -from . 
import presets - - -__all__ = [ - "my_api_key", - "sensitive_id", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "usage_limit", - "multi_api_key", - "server_name", - "server_port", - "share", - "check_update", - "latex_delimiters_set", - "hide_history_when_not_logged_in", - "default_chuanhu_assistant_model", - "show_api_billing" -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - - -def load_config_to_environ(key_list): - global config - for key in key_list: - if key in config: - os.environ[key.upper()] = os.environ.get(key.upper(), config[key]) - - -lang_config = config.get("language", "auto") -language = os.environ.get("LANGUAGE", lang_config) - -hide_history_when_not_logged_in = config.get( - "hide_history_when_not_logged_in", False) -check_update = config.get("check_update", True) -show_api_billing = config.get("show_api_billing", False) -show_api_billing = bool(os.environ.get("SHOW_API_BILLING", show_api_billing)) - -if os.path.exists("api_key.txt"): - logging.info("检测到api_key.txt文件,正在进行迁移...") - with open("api_key.txt", "r", encoding="utf-8") as f: - config["openai_api_key"] = f.read().strip() - os.rename("api_key.txt", "api_key(deprecated).txt") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4, ensure_ascii=False) - -if os.path.exists("auth.json"): - logging.info("检测到auth.json文件,正在进行迁移...") - auth_list = [] - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - config["users"] = auth_list - os.rename("auth.json", "auth(deprecated).json") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4, ensure_ascii=False) - -# 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -# 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "") -my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key) -os.environ["OPENAI_API_KEY"] = my_api_key -os.environ["OPENAI_EMBEDDING_API_KEY"] = my_api_key - -if config.get("legacy_api_usage", False): - sensitive_id = config.get("sensitive_id", "") - sensitive_id = os.environ.get("SENSITIVE_ID", sensitive_id) -else: - sensitive_id = my_api_key - -google_palm_api_key = config.get("google_palm_api_key", "") -google_palm_api_key = os.environ.get( - "GOOGLE_PALM_API_KEY", google_palm_api_key) -os.environ["GOOGLE_PALM_API_KEY"] = google_palm_api_key - -xmchat_api_key = config.get("xmchat_api_key", "") -os.environ["XMCHAT_API_KEY"] = xmchat_api_key - -minimax_api_key = config.get("minimax_api_key", "") -os.environ["MINIMAX_API_KEY"] = minimax_api_key -minimax_group_id = config.get("minimax_group_id", "") -os.environ["MINIMAX_GROUP_ID"] = minimax_group_id - -load_config_to_environ(["openai_api_type", "azure_openai_api_key", "azure_openai_api_base_url", - "azure_openai_api_version", "azure_deployment_name", "azure_embedding_deployment_name", "azure_embedding_model_name"]) - - -usage_limit = os.environ.get("USAGE_LIMIT", config.get("usage_limit", 120)) - -# 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - 
api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get( - "OPENAI_API_BASE", config.get("openai_api_base", None)) -if api_host is not None: - shared.state.set_api_host(api_host) - os.environ["OPENAI_API_BASE"] = f"{api_host}/v1" - logging.info(f"OpenAI API Base set to: {os.environ['OPENAI_API_BASE']}") - -default_chuanhu_assistant_model = config.get( - "default_chuanhu_assistant_model", "gpt-3.5-turbo") -for x in ["GOOGLE_CSE_ID", "GOOGLE_API_KEY", "WOLFRAM_ALPHA_APPID", "SERPAPI_API_KEY"]: - if config.get(x, None) is not None: - os.environ[x] = config[x] - - -@contextmanager -def retrieve_openai_api(api_key=None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - - -# 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -# 处理代理: -http_proxy = os.environ.get("HTTP_PROXY", "") -https_proxy = os.environ.get("HTTPS_PROXY", "") -http_proxy = config.get("http_proxy", http_proxy) -https_proxy = config.get("https_proxy", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -local_embedding = config.get("local_embedding", False) # 是否使用本地embedding - - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - -# 处理latex options -user_latex_option = config.get("latex_option", "default") -if user_latex_option == "default": - latex_delimiters_set = [ - {"left": "$$", "right": "$$", "display": True}, - {"left": "$", "right": "$", "display": False}, - {"left": "\\(", "right": "\\)", "display": False}, - {"left": "\\[", "right": "\\]", "display": True}, - ] -elif user_latex_option == "strict": - latex_delimiters_set = [ - {"left": "$$", "right": "$$", "display": True}, - {"left": "\\(", "right": "\\)", "display": False}, - {"left": "\\[", "right": "\\]", "display": True}, - ] -elif user_latex_option == "all": - latex_delimiters_set = [ - {"left": "$$", "right": "$$", "display": True}, - {"left": "$", "right": "$", "display": False}, - {"left": "\\(", "right": "\\)", "display": False}, - {"left": "\\[", "right": "\\]", "display": True}, - {"left": "\\begin{equation}", "right": "\\end{equation}", "display": True}, - {"left": "\\begin{align}", "right": "\\end{align}", "display": True}, - {"left": "\\begin{alignat}", "right": "\\end{alignat}", "display": True}, - {"left": "\\begin{gather}", "right": "\\end{gather}", "display": True}, - {"left": "\\begin{CD}", "right": "\\end{CD}", "display": True}, - ] -elif 
user_latex_option == "disabled": - latex_delimiters_set = [] -else: - latex_delimiters_set = [ - {"left": "$$", "right": "$$", "display": True}, - {"left": "$", "right": "$", "display": False}, - {"left": "\\(", "right": "\\)", "display": False}, - {"left": "\\[", "right": "\\]", "display": True}, - ] - -# 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) - - -def update_doc_config(two_column_pdf): - global advance_docs - advance_docs["pdf"]["two_column"] = two_column_pdf - - logging.info(f"更新后的文件参数为:{advance_docs}") - - -# 处理gradio.launch参数 -server_name = config.get("server_name", None) -server_port = config.get("server_port", None) -if server_name is None: - if dockerflag: - server_name = "0.0.0.0" - else: - server_name = "127.0.0.1" -if server_port is None: - if dockerflag: - server_port = 7860 - -assert server_port is None or type(server_port) == int, "要求port设置为int类型" - -# 设置默认model -default_model = config.get("default_model", "") -try: - presets.DEFAULT_MODEL = presets.MODELS.index(default_model) -except ValueError: - pass - -share = config.get("share", False) diff --git a/spaces/Drac77/hakurei-waifu-diffusion/app.py b/spaces/Drac77/hakurei-waifu-diffusion/app.py deleted file mode 100644 index ccef706bf3035fe470bf6a4f5bd701b18bf59133..0000000000000000000000000000000000000000 --- a/spaces/Drac77/hakurei-waifu-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/hakurei/waifu-diffusion").launch() \ No newline at end of file diff --git a/spaces/EDGAhab/Aatrox-Talking/models.py b/spaces/EDGAhab/Aatrox-Talking/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/EDGAhab/Aatrox-Talking/text/cleaners.py b/spaces/EDGAhab/Aatrox-Talking/text/cleaners.py deleted file mode 100644 index 759db477e3deb72a03ff65957419c3694781b5ef..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/text/cleaners.py +++ /dev/null @@ -1,138 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. 
You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from phonemizer import phonemize -from pypinyin import Style, pinyin -from pypinyin.style._utils import get_finals, get_initials -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_cleaners2(text): - '''Pipeline for English text, including abbreviation expansion. 
+ punctuation + stress''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True, preserve_punctuation=True, with_stress=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - - - -def chinese_cleaners1(text): - from pypinyin import Style, pinyin - - phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)] - return ' '.join(phones) - - -def chinese_cleaners2(text): - phones = [ - p - for phone in pinyin(text, style=Style.TONE3) - for p in [ - get_initials(phone[0], strict=True), - get_finals(phone[0][:-1], strict=True) + phone[0][-1] - if phone[0][-1].isdigit() - else get_finals(phone[0], strict=True) - if phone[0][-1].isalnum() - else phone[0], - ] - # Remove the case of individual tones as a phoneme - if len(p) != 0 and not p.isdigit() - ] - return phones - # return phonemes - -if __name__ == '__main__': - res = chinese_cleaners2('这是语音测试!') - print(res) - res = chinese_cleaners1('"第一,南京不是发展的不行,是大家对他期望很高,') - print(res) - - - res = english_cleaners2('this is a club test for one train.GDP') - print(res) \ No newline at end of file diff --git a/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/onnx_inference.py b/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = 
ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/README.md b/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/README.md deleted file mode 100644 index 2f2beb1b757ccbf2dd2e41a70769d963b098264d..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/drrg/README.md +++ /dev/null @@ -1,37 +0,0 @@ -# DRRG - -> [Deep relational reasoning graph network for arbitrary shape text detection](https://arxiv.org/abs/2003.07493) - - - -## Abstract - -Arbitrary shape text detection is a challenging task due to the high variety and complexity of scenes texts. In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection. In our method, an innovative local graph bridges a text proposal model via Convolutional Neural Network (CNN) and a deep relational reasoning network via Graph Convolutional Network (GCN), making our network end-to-end trainable. 
To be concrete, every text instance will be divided into a series of small rectangular components, and the geometry attributes (e.g., height, width, and orientation) of the small components will be estimated by our text proposal model. Given the geometry attributes, the local graph construction model can roughly establish linkages between different text components. For further reasoning and deducing the likelihood of linkages between the component and its neighbors, we adopt a graph-based network to perform deep relational reasoning on local graphs. Experiments on public available datasets demonstrate the state-of-the-art performance of our method. - -
- -
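
As a reading aid (not part of the original DRRG code or this config), the snippet below sketches the idea described in the abstract: per-component features from the text proposal stage are aggregated over a local graph with one graph-convolution step, and a small head scores the linkage likelihood between each component and its neighbors. The layer sizes, the single-layer design, and all names here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical illustration of GCN-based link reasoning on a local graph (not the DRRG implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalGraphLinkScorer(nn.Module):
    def __init__(self, feat_dim=120, hidden_dim=64):
        super().__init__()
        self.gcn = nn.Linear(feat_dim, hidden_dim)        # one mean-aggregation graph-conv layer
        self.link_head = nn.Linear(2 * hidden_dim, 1)     # scores a (component, neighbor) pair

    def forward(self, x, adj):
        # x:   (N, feat_dim) features of N text components (geometry attributes + CNN features)
        # adj: (N, N) 0/1 adjacency of the local graph
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.gcn(adj @ x / deg))               # aggregate neighbours, then transform
        src, dst = adj.nonzero(as_tuple=True)             # every linked (i, j) pair in the local graph
        pair = torch.cat([h[src], h[dst]], dim=-1)
        return torch.sigmoid(self.link_head(pair)).squeeze(-1)   # linkage likelihood per edge


# Example usage with random data:
# scores = LocalGraphLinkScorer()(torch.randn(8, 120), (torch.rand(8, 8) > 0.5).float())
```

In DRRG itself the relational reasoning network is a deeper GCN and the local graph is built from the proposal model's geometry attributes; this sketch only mirrors the link-scoring structure that the graph module adds on top of the CNN proposals.
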
- -## Results and models - -### CTW1500 - -| Method | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download | -| :-------------------------------------------------: | :--------------: | :-----------: | :----------: | :-----: | :-------: | :-----------: | :-----------: | :-----------: | :---------------------------------------------------: | -| [DRRG](configs/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500.py) | ImageNet | CTW1500 Train | CTW1500 Test | 1200 | 640 | 0.822 (0.791) | 0.858 (0.862) | 0.840 (0.825) | [model](https://download.openmmlab.com/mmocr/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500_20211022-fb30b001.pth) \\ [log](https://download.openmmlab.com/mmocr/textdet/drrg/20210511_234719.log) | - -```{note} -We've upgraded our IoU backend from `Polygon3` to `shapely`. There are some performance differences for some models due to the backends' different logics to handle invalid polygons (more info [here](https://github.com/open-mmlab/mmocr/issues/465)). **New evaluation result is presented in brackets** and new logs will be uploaded soon. -``` - -## Citation - -```bibtex -@article{zhang2020drrg, - title={Deep relational reasoning graph network for arbitrary shape text detection}, - author={Zhang, Shi-Xue and Zhu, Xiaobin and Hou, Jie-Bo and Liu, Chang and Yang, Chun and Wang, Hongfa and Yin, Xu-Cheng}, - booktitle={CVPR}, - pages={9699-9708}, - year={2020} -} -``` diff --git a/spaces/Ezi/Licences_check/read_extract.py b/spaces/Ezi/Licences_check/read_extract.py deleted file mode 100644 index 7917e89b6d7d929b65a64c06d1bb71877b6ead30..0000000000000000000000000000000000000000 --- a/spaces/Ezi/Licences_check/read_extract.py +++ /dev/null @@ -1,271 +0,0 @@ -import os -import re - -import nltk -nltk.download('stopwords') -nltk.download('punkt') - - -from nltk.corpus import stopwords -from nltk.tokenize import word_tokenize -from nltk.util import ngrams -import spacy -# from gensim.summarization.summarizer import summarize -# from gensim.summarization import keywords - -# Abstractive Summarisation -from transformers import BartForConditionalGeneration -from transformers import AutoTokenizer -import torch - -# Keyword/Keyphrase Extraction -from keybert import _highlight -from keybert import KeyBERT -from keyphrase_vectorizers import KeyphraseCountVectorizer, KeyphraseTfidfVectorizer -from sklearn.feature_extraction.text import CountVectorizer - -import time -import threading -from collections import defaultdict - -class AbstractiveSummarizer: - - def __init__(self): - self.nlp = spacy.load('en_core_web_lg') - self.summary = "" - - def generate_batch(self, text, tokenizer): - """ - Convert the text into multiple sentence parts of appropriate input size to feed to the model - - Arguments: - text: The License text to summarise - tokenizer: The tokenizer corresponding to the model used to convert the text into separate words(tokens) - - Returns: - The text formatted into List of sentences to feed to the model - """ - parsed = self.nlp(text) - sents = [sent.text for sent in parsed.sents] - max_size = tokenizer.model_max_length - - batch = tokenizer(sents, return_tensors='pt', return_length=True, padding='longest') - - inp_batch = [] - cur_batch = torch.empty((0,), dtype=torch.int64) - for enc_sent, length in zip(batch['input_ids'], batch['length']): - cur_size = cur_batch.shape[0] - if (cur_size + length.item()) <= max_size: - cur_batch = torch.cat((cur_batch,enc_sent[:length.item()])) - else: - inp_batch.append(torch.unsqueeze(cur_batch,0)) - 
cur_batch = enc_sent[:length.item()] - inp_batch.append(torch.unsqueeze(cur_batch,0)) - - return inp_batch - - def summarize(self, src, tokenizer, model): - """ - Function to use the pre-trained model to generate the summary - Arguments: - src: License text to summarise - tokenizer: The tokenizer corresponding to the model used to convert the text into separate words(tokens) - model: The pre-trained Model object used to perform the summarization - - Returns: - summary: The summarised texts - """ - batch_texts = self.generate_batch(src, tokenizer) - - enc_summary_list = [model.generate(batch, max_length=512) for batch in batch_texts] - - summary_list = [tokenizer.batch_decode(enc_summ, skip_special_tokens=True) for enc_summ in enc_summary_list] - # orig_list = [tokenizer.batch_decode(batch, skip_special_tokens=True) for batch in batch_texts] - - summary_texts = [summ[0] for summ in summary_list] - summary = " ".join(summary_texts) - - self.summary = summary - - - def bart(self, src): - """ - Initialize the facebook BART pre-trained model and call necessary functions to summarize - Arguments: - src: The text to summarise - - Returns/Set as instance variable: - The summarized text - """ - - start_time = time.time() - model_name = 'facebook/bart-large-cnn' - device = 'cuda' if torch.cuda.is_available() else 'cpu' - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = BartForConditionalGeneration.from_pretrained(model_name).to(device) - - self.summarize(src, tokenizer, model) - - - -def get_summary(lic_txt): - """ - Summarize the license and return it - Arguments: - spdx - Id of License to summarise - - Returns: - pos_text: The part of the License containing information for permitted use - neg_text: The part of the License containing information about usage restrictions - lic_txt: The full license text - summary - The generated summary of the license - """ - print('Summarising...') - absSum = AbstractiveSummarizer() - - # Generate summary - thread = absSum.bart(lic_txt) - - return thread - - -def extract_ngrams(phrase): - phrase = re.sub('[^a-zA-Z0-9]',' ', phrase) - tokens = word_tokenize(phrase) - res = [] - for num in range(len(tokens)+1): - temp = ngrams(tokens, num) - res += [' '.join(grams) for grams in temp] - - return res - - -def get_highlight_text(text, keywords): - """ - Custom function to find exact position of keywords for highlighting - """ - - text = re.sub('[-/]',' ', text) - # text = re.sub('(\n *){2,}','\n',text) - text = re.sub(' {2,}', ' ', text) - - # Group keywords by length - kw_len = defaultdict(list) - for kw in keywords: - kw_len[len(kw)].append(kw) - - # Use sliding window technique to check equal strings - spans = [] - for length in kw_len: - w_start, w_end = 0, length - - while w_end <= len(text): - - for kw in kw_len[length]: - j = w_start - eq = True - for i in range(len(kw)): - if text[j] != kw[i]: - eq = False - break - j += 1 - if eq: - spans.append([w_start, w_end]) - break - - w_start += 1 - w_end += 1 - - if not spans: - return text - - # merge spans - spans.sort(key=lambda x: x[0]) - merged = [] - - st, end = spans[0][0], spans[0][1] - - for i in range(1, len(spans)): - s,e = spans[i] - - if st <= s <= end: - end = max(e, end) - else: - merged.append([st, end]) - st, end = s,e - merged.append([st,end]) - - res = [] - sub_start = 0 - for s,e in merged: - res.append(text[sub_start:s]) - res.append((text[s:e], "", "#f66")) - sub_start = e - res.append(text[sub_start:]) - - return res - - - -def get_keywords(datatype, task, field, pos_text, 
neg_text): - """ - Summarize the license and generate the good and bad use tags - Arguments: - datafield - Type of 'data' used under the license: Eg. Model, Data, Model Derivatives, Source Code - task - The type of task the model is designed to do - field - Which 'field' to use the data in: Eg. Medical, Commercial, Non-Commercial, Research - pos_text: The part of the License containing information for permitted use - neg_text: The part of the License containing information about usage restrictions - - Returns: - p_keywords - List of Positive(Permitted use) keywords extracted from summary - n_keywords - List of Negative(Restriction) keywords extracted from summary - contrd - boolean flag to show if there is any contradiction or not - hl_text - the license text formatted to display in a highlighted manner - """ - print('Extracting keywords...') - - #[e.lower() for e in list_strings] - datatype, task, field = datatype.lower(), task.lower(), field.lower() - #datatype = [e.lower() for e in datatype] - #task = [e.lower() for e in task] - #field = [e.lower() for e in field] - #datatype, task, field = datatype, task, str(field) - - - stop_words = set(stopwords.words('english')) - #stops = nltk.corpus.stopwords.words('english') - #stop_words = set(stops) - stop_words = stop_words.union({'license', 'licensing', 'licensor', 'copyright', 'copyrights', 'patent'}) - - pos_kw_model = KeyBERT() - neg_kw_model = KeyBERT() - - candidates = [] - for term in [datatype, task, field]: - candidates += extract_ngrams(term) - - p_kw = pos_kw_model.extract_keywords(docs=pos_text, top_n=40, vectorizer=KeyphraseCountVectorizer(stop_words=stop_words))#, pos_pattern='+')) - n_kw = neg_kw_model.extract_keywords(docs=neg_text, top_n=40, vectorizer=KeyphraseCountVectorizer(stop_words=stop_words))#, pos_pattern='+')) - - ngram_max = max([len(word_tokenize(x)) for x in [datatype, task, field]]) - - pc_kw = pos_kw_model.extract_keywords(docs=pos_text ,candidates=candidates, keyphrase_ngram_range=(1,ngram_max)) - nc_kw = neg_kw_model.extract_keywords(docs=neg_text ,candidates=candidates, keyphrase_ngram_range=(1,ngram_max)) - - # Check contradiction - all_cont = [kw for (kw,_) in nc_kw] - cont_terms = set(all_cont).intersection(set(extract_ngrams(field))) - contrd = True if len(cont_terms) > 0 else False - hl_text = "" if not contrd else get_highlight_text(neg_text, all_cont) - - p_kw += pc_kw - n_kw += nc_kw - - p_kw.sort(key=lambda x: x[1], reverse=True) - n_kw.sort(key=lambda x: x[1], reverse=True) - - p_keywords = [kw for (kw,score) in p_kw if score < 0.5] - n_keywords = [kw for (kw,score) in n_kw if score < 0.5] - - return p_keywords, n_keywords, contrd, hl_text \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/modules.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/modules.py deleted file mode 100644 index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/modules.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, sys -import traceback -import logging -now_dir = os.getcwd() -sys.path.append(now_dir) -logger = logging.getLogger(__name__) -import lib.globals.globals as rvc_globals -import numpy as np -import soundfile as sf -import torch -from io import BytesIO -from infer.lib.audio import load_audio -from infer.lib.audio import wav2 -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - 
SynthesizerTrnMs768NSFsid_nono, -) -from infer.modules.vc.pipeline import Pipeline -from infer.modules.vc.utils import * -import time -import scipy.io.wavfile as wavfile - -def note_to_hz(note_name): - SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2} - pitch_class, octave = note_name[:-1], int(note_name[-1]) - semitone = SEMITONES[pitch_class] - note_number = 12 * (octave - 4) + semitone - frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number - return frequency - -class VC: - def __init__(self, config): - self.n_spk = None - self.tgt_sr = None - self.net_g = None - self.pipeline = None - self.cpt = None - self.version = None - self.if_f0 = None - self.version = None - self.hubert_model = None - - self.config = config - - def get_vc(self, sid, *to_return_protect): - logger.info("Get sid: " + sid) - - to_return_protect0 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[0] - if self.if_f0 != 0 and to_return_protect - else 0.5, - "__type__": "update", - } - to_return_protect1 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[1] - if self.if_f0 != 0 and to_return_protect - else 0.33, - "__type__": "update", - } - - if not sid: - if self.hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - logger.info("Clean model cache") - del ( - self.net_g, - self.n_spk, - self.vc, - self.hubert_model, - self.tgt_sr, - ) # ,cpt - self.hubert_model = ( - self.net_g - ) = self.n_spk = self.vc = self.hubert_model = self.tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"]) - del self.net_g, self.cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return ( - {"visible": False, "__type__": "update"}, - { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - }, - { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - }, - "", - "", - ) - #person = f'{os.getenv("weight_root")}/{sid}' - person = f'{sid}' - #logger.info(f"Loading: {person}") - logger.info(f"Loading...") - self.cpt = torch.load(person, map_location="cpu") - self.tgt_sr = self.cpt["config"][-1] - self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - - synthesizer_class = { - ("v1", 1): SynthesizerTrnMs256NSFsid, - ("v1", 0): SynthesizerTrnMs256NSFsid_nono, - ("v2", 1): SynthesizerTrnMs768NSFsid, - ("v2", 0): SynthesizerTrnMs768NSFsid_nono, - } - - self.net_g = synthesizer_class.get( - (self.version, self.if_f0), SynthesizerTrnMs256NSFsid - )(*self.cpt["config"], is_half=self.config.is_half) - - del self.net_g.enc_q - - self.net_g.load_state_dict(self.cpt["weight"], strict=False) - self.net_g.eval().to(self.config.device) - if self.config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - - self.pipeline = Pipeline(self.tgt_sr, self.config) - n_spk = self.cpt["config"][-3] - index = {"value": 
get_index_path_from_model(sid), "__type__": "update"} - logger.info("Select index: " + index["value"]) - - return ( - ( - {"visible": False, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1 - ) - if to_return_protect - else {"visible": False, "maximum": n_spk, "__type__": "update"} - ) - - - def vc_single( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." 
- ) - end_time = time.time() - total_time = end_time - start_time - - output_folder = "audio-outputs" - os.makedirs(output_folder, exist_ok=True) - output_filename = "generated_audio_{}.wav" - output_count = 1 - while True: - current_output_path = os.path.join(output_folder, output_filename.format(output_count)) - if not os.path.exists(current_output_path): - break - output_count += 1 - - wavfile.write(current_output_path, self.tgt_sr, audio_opt) - print(f"Generated audio saved to: {current_output_path}") - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - def vc_single_dont_save( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." 
% file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - - def vc_multi( - self, - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [ - os.path.join(dir_path, name) for name in os.listdir(dir_path) - ] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = self.vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" - % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1) - with BytesIO() as wavf: - sf.write( - wavf, - audio_opt, - tgt_sr, - format="wav" - ) - wavf.seek(0, 0) - with open(path, "wb") as outf: - wav2(wavf, outf, format1) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.cpp b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.cpp deleted file mode 100644 index 612ccba6544ff111a2da0dce9adc4019858ebded..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/pyinterface.cpp +++ /dev/null @@ -1,107 +0,0 @@ -#include "pyinterface.h" -#include "inpaint.h" - -static unsigned int PM_seed = 1212; -static bool PM_verbose = false; - -int _dtype_py_to_cv(int dtype_py); -int _dtype_cv_to_py(int dtype_cv); -cv::Mat _py_to_cv2(PM_mat_t pymat); -PM_mat_t _cv2_to_py(cv::Mat cvmat); - -void PM_set_random_seed(unsigned int seed) { - PM_seed = seed; -} - -void PM_set_verbose(int value) { - PM_verbose = static_cast(value); -} - -void PM_free_pymat(PM_mat_t pymat) { - free(pymat.data_ptr); -} - -PM_mat_t PM_inpaint(PM_mat_t source_py, PM_mat_t mask_py, int patch_size) { - cv::Mat source = _py_to_cv2(source_py); - cv::Mat mask = _py_to_cv2(mask_py); - auto metric = PatchSSDDistanceMetric(patch_size); - cv::Mat result = Inpainting(source, mask, &metric).run(PM_verbose, false, PM_seed); - return _cv2_to_py(result); -} - -PM_mat_t PM_inpaint_regularity(PM_mat_t 
source_py, PM_mat_t mask_py, PM_mat_t ijmap_py, int patch_size, float guide_weight) { - cv::Mat source = _py_to_cv2(source_py); - cv::Mat mask = _py_to_cv2(mask_py); - cv::Mat ijmap = _py_to_cv2(ijmap_py); - - auto metric = RegularityGuidedPatchDistanceMetricV2(patch_size, ijmap, guide_weight); - cv::Mat result = Inpainting(source, mask, &metric).run(PM_verbose, false, PM_seed); - return _cv2_to_py(result); -} - -PM_mat_t PM_inpaint2(PM_mat_t source_py, PM_mat_t mask_py, PM_mat_t global_mask_py, int patch_size) { - cv::Mat source = _py_to_cv2(source_py); - cv::Mat mask = _py_to_cv2(mask_py); - cv::Mat global_mask = _py_to_cv2(global_mask_py); - - auto metric = PatchSSDDistanceMetric(patch_size); - cv::Mat result = Inpainting(source, mask, global_mask, &metric).run(PM_verbose, false, PM_seed); - return _cv2_to_py(result); -} - -PM_mat_t PM_inpaint2_regularity(PM_mat_t source_py, PM_mat_t mask_py, PM_mat_t global_mask_py, PM_mat_t ijmap_py, int patch_size, float guide_weight) { - cv::Mat source = _py_to_cv2(source_py); - cv::Mat mask = _py_to_cv2(mask_py); - cv::Mat global_mask = _py_to_cv2(global_mask_py); - cv::Mat ijmap = _py_to_cv2(ijmap_py); - - auto metric = RegularityGuidedPatchDistanceMetricV2(patch_size, ijmap, guide_weight); - cv::Mat result = Inpainting(source, mask, global_mask, &metric).run(PM_verbose, false, PM_seed); - return _cv2_to_py(result); -} - -int _dtype_py_to_cv(int dtype_py) { - switch (dtype_py) { - case PM_UINT8: return CV_8U; - case PM_INT8: return CV_8S; - case PM_UINT16: return CV_16U; - case PM_INT16: return CV_16S; - case PM_INT32: return CV_32S; - case PM_FLOAT32: return CV_32F; - case PM_FLOAT64: return CV_64F; - } - - return CV_8U; -} - -int _dtype_cv_to_py(int dtype_cv) { - switch (dtype_cv) { - case CV_8U: return PM_UINT8; - case CV_8S: return PM_INT8; - case CV_16U: return PM_UINT16; - case CV_16S: return PM_INT16; - case CV_32S: return PM_INT32; - case CV_32F: return PM_FLOAT32; - case CV_64F: return PM_FLOAT64; - } - - return PM_UINT8; -} - -cv::Mat _py_to_cv2(PM_mat_t pymat) { - int dtype = _dtype_py_to_cv(pymat.dtype); - dtype = CV_MAKETYPE(pymat.dtype, pymat.shape.channels); - return cv::Mat(cv::Size(pymat.shape.width, pymat.shape.height), dtype, pymat.data_ptr).clone(); -} - -PM_mat_t _cv2_to_py(cv::Mat cvmat) { - PM_shape_t shape = {cvmat.size().width, cvmat.size().height, cvmat.channels()}; - int dtype = _dtype_cv_to_py(cvmat.depth()); - size_t dsize = cvmat.total() * cvmat.elemSize(); - - void *data_ptr = reinterpret_cast(malloc(dsize)); - memcpy(data_ptr, reinterpret_cast(cvmat.data), dsize); - - return PM_mat_t {data_ptr, shape, dtype}; -} - diff --git a/spaces/GlimmeringStars/Testing/README.md b/spaces/GlimmeringStars/Testing/README.md deleted file mode 100644 index 3fe16448835a66b015adc61b6dc7170c0eafc66b..0000000000000000000000000000000000000000 --- a/spaces/GlimmeringStars/Testing/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Testing -emoji: 🏃 -colorFrom: yellow -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/model_irse.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/model_irse.py deleted file mode 100644 index 6a94d67542f961ff6533f0335cf4cb0fa54024fb..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, 
BatchNorm2d, PReLU, Dropout, Sequential, Module -from e4e.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/common/__init__.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/common/__init__.py deleted file mode 100644 index d3c65d69d5f61b7b9547153c47d84e7f545e2636..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/common/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Common data types and constants used within Alphafold.""" diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/utils.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/utils.py deleted file mode 100644 index fc40a2ceb2de1c2d56c17697393713804d7da350..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/utils.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Shared utilities for various components.""" -import tensorflow.compat.v1 as tf - - -def tf_combine_mask(*masks): - """Take the intersection of float-valued masks.""" - ret = 1 - for m in masks: - ret *= m - return ret - - -class SeedMaker(object): - """Return unique seeds.""" - - def __init__(self, initial_seed=0): - self.next_seed = initial_seed - - def __call__(self): - i = self.next_seed - self.next_seed += 1 - return i - -seed_maker = SeedMaker() - - -def make_random_seed(): - return tf.random.uniform([2], - tf.int32.min, - tf.int32.max, - tf.int32, - seed=seed_maker()) - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py deleted file mode 100644 index 9c32a55ddaa88812c8020872c33502122c409041..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py +++ /dev/null @@ -1,56 +0,0 @@ -# model settings -model = dict( - type='RPN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - neck=None, - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py 
b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py deleted file mode 100644 index fc576f6a674ee61b7332dc2085c488bebf972030..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py +++ /dev/null @@ -1,59 +0,0 @@ -_base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch')) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - lr=0.01, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/evaluation/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/evaluation/__init__.py deleted file mode 100644 index d11ef15b9db95166b4427ad4d08debbd0630a741..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/evaluation/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .class_names import (cityscapes_classes, coco_classes, dataset_aliases, - get_classes, imagenet_det_classes, - imagenet_vid_classes, voc_classes) -from .eval_hooks import DistEvalHook, EvalHook -from .mean_ap import average_precision, eval_map, print_map_summary -from .recall import (eval_recalls, plot_iou_recall, plot_num_recall, - print_recall_summary) - -__all__ = [ - 'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes', - 'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes', - 'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map', - 'print_map_summary', 'eval_recalls', 'print_recall_summary', - 'plot_num_recall', 'plot_iou_recall' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 69bef7238345cf6aabb126012af992602f910287..0000000000000000000000000000000000000000 --- 
a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict( - type='ResNeSt', - stem_channels=128, - radix=2, - reduction_factor=4, - avg_down_stride=True)) diff --git a/spaces/HaloMaster/chinesesummary/fengshen/README.md b/spaces/HaloMaster/chinesesummary/fengshen/README.md deleted file mode 100644 index 45f7b3579c36a68f899a9a02cfcfbe1330d413d8..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/README.md +++ /dev/null @@ -1,105 +0,0 @@ -## 最新发布 - -* \[2022.09.13\] [更新ErLangShen系列DeBERTa预训练代码](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese) -* \[2022.09.13\] [更新RanDeng系列Bart预训练代码](https://huggingface.co/IDEA-CCNL/Randeng-BART-139M) -* \[2022.09.13\] [更新ErLangShen系列Bert预训练代码](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B) -* \[2022.05.11\] [更新TaiYi系列VIT多模态模型及下游任务示例](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/太乙系列/Taiyi-vit-87M-D.html) -* \[2022.05.11\] [更新BiGan系列Transformer-XL去噪模型及下游任务示例](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/比干系列/Bigan-Transformer-XL-denoise-1.1B.html) -* \[2022.05.11\] [更新ErLangShen系列下游任务示例](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/二郎神系列/Erlangshen-Roberta-110M-NLI.html) - -# 导航 - -- [导航](#导航) - - [框架简介](#框架简介) - - [依赖环境](#依赖环境) - - [项目结构](#项目结构) - - [设计思路](#设计思路) - - [分类下游任务](#分类下游任务) - -## 框架简介 - -FengShen训练框架是封神榜大模型开源计划的重要一环,在大模型的生产和应用中起到至关重要的作用。FengShen可以应用在基于海量数据的预训练以及各种下游任务的finetune中。封神榜专注于NLP大模型开源,然而模型的增大带来不仅仅是训练的问题,在使用上也存在诸多不便。为了解决训练和使用的问题,FengShen参考了目前开源的优秀方案并且重新设计了Pipeline,用户可以根据自己的需求,从封神榜中选取丰富的预训练模型,同时利用FengShen快速微调下游任务。 - -目前所有实例以及文档可以查看我们的[Wiki](https://fengshenbang-doc.readthedocs.io/zh/latest/index.html) -所有的模型可以在[Huggingface主页](https://huggingface.co/IDEA-CCNL)找到 - -通过我们的框架,你可以快速享受到: - -1. 比原生torch更强的性能,训练速度提升**300%** -2. 支持更大的模型,支持**百亿级别**内模型训练及微调 -3. 支持**TB级以上**的数据集,在家用主机上即可享受预训练模型带来的效果提升 -3. 丰富的预训练、下游任务示例,一键开始训练 -4. 适应各种设备环境,支持在CPU、GPU、TPU等不同设备上运行 -5. 集成主流的分布式训练逻辑,无需修改代码即可支持DDP、Zero Optimizer等分布式优化技术 - -![avartar](../pics/fengshen_pic.png) - -## 依赖环境 - -* Python >= 3.8 -* torch >= 1.8 -* transformers >= 3.2.0 -* pytorch-lightning >= 1.5.10 - -在Fengshenbang-LM根目录下 -pip install --editable ./ - -## 项目结构 - -``` -├── data # 支持多种数据处理方式以及数据集 -│   ├── cbart_dataloader -| ├── fs_datasets # 基于transformers datasets的封装,新增中文数据集(开源计划中) -| ├── universal_datamodule # 打通fs_datasets与lightning datamodule,减少重复开发工作量 -│   ├── megatron_dataloader # 支持基于Megatron实现的TB级别数据集处理、训练 -│   ├── mmap_dataloader # 通用的Memmap形式的数据加载 -│   └── task_dataloader # 支持多种下游任务 -├── examples # 丰富的示例,从预训练到下游任务,应有尽有。 -├── metric # 提供各种metric计算,支持用户自定义metric -├── losses # 同样支持loss自定义,满足定制化需求 -├── tokenizer # 支持自定义tokenizer,比如我们使用的SentencePiece训练代码等 -├── models # 模型库 -│   ├── auto # 支持自动导入对应的模型 -│   ├── bart -│   ├── longformer -│   ├── megatron_t5 -│   └── roformer -└── utils # 实用函数 -``` - -## 设计思路 - -FengShen框架目前整体基于Pytorch-Lightning & Transformer进行开发,在底层框架上不断开源基于中文的预训练模型,同时提供丰富的examples,每一个封神榜的模型都能找到对应的预训练、下游任务代码。 - -在FengShen上开发,整体可以按照下面的三个步骤进行: - -1. 封装数据处理流程 -> pytorch_lightning.LightningDataModule -2. 封装模型结构 -> pytorch_lightning.LightningModule -3. 
配置一些插件,比如log_monitor,checkpoint_callback等等。 - -一个完整的DEMO可以看Randeng-BART系列实例 -> [文档](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/燃灯系列/BART-139M.html) [代码](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/hf-ds/fengshen/examples/pretrain_bart) - -## 分类下游任务 - - 在examples/classification目录下,我们提供丰富的分类任务的示例,其中我们提供三个一键式运行的示例。 - -* demo_classification_afqmc_roberta.sh 使用DDP微调roberta -* demo_classification_afqmc_roberta_deepspeed.sh 结合deepspeed微调roberta,获得更快的运算速度 -* demo_classification_afqmc_erlangshen_offload.sh 仅需7G显存即可微调我们效果最好的二郎神系列模型 - - 上述示例均采用AFQMC的数据集,关于数据集的介绍可以在[这里](https://www.cluebenchmarks.com/introduce.html)找到。 - 同时我们处理过的数据文件已经放在Huggingface上,点击[这里](https://huggingface.co/datasets/IDEA-CCNL/AFQMC)直达源文件。 - 仅需要按我们的格式稍微处理一下数据集,即可适配下游不同的分类任务。 - 在脚本示例中,仅需要修改如下参数即可适配本地文件 - - ``` - --dataset_name IDEA-CCNL/AFQMC \ - - -------> 修改为 - - --data_dir $DATA_DIR \ # 数据目录 - --train_data train.json \ # 数据文件 - --valid_data dev.json \ - --test_data test.json \ - - ``` diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh deleted file mode 100644 index 46f27f142891c62587f6c7184c372f4883215bbf..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_base_cmeee # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_cmeee/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_base - -TASK=cmeee - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CMeEE/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bio \ - --valid_data dev.char.bio \ - --test_data dev.char.bio \ - --train_batchsize 32 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name cmeee \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bio \ - --middle_prefix I- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/megatron_11b/detok.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/megatron_11b/detok.py deleted file mode 100644 index 49921b28a1f35c6216b5ed85729453524e7a049d..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/megatron_11b/detok.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import fileinput - -import sacremoses - - -def main(): - parser = argparse.ArgumentParser(description="") - parser.add_argument("files", nargs="*", help="input files") - args = parser.parse_args() - - detok = sacremoses.MosesDetokenizer() - - for line in fileinput.input(args.files, openhook=fileinput.hook_compressed): - print( - detok.detokenize(line.strip().split(" ")) - .replace(" @", "") - .replace("@ ", "") - .replace(" =", "=") - .replace("= ", "=") - .replace(" – ", "–") - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py deleted file mode 100644 index 66a426d2223ce75ffae6cee2131770556c5949bc..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py +++ /dev/null @@ -1,167 +0,0 @@ -import collections -import io -import json -import librosa -import numpy as np -import soundfile as sf -import time -import torch -from scipy.io.wavfile import read -from .text import SOS_TOK, EOS_TOK - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len, out=torch.cuda.LongTensor(max_len)) - mask = (ids < lengths.unsqueeze(1)) - return mask - - -def load_wav_to_torch(full_path, sr=None): - data, sr = librosa.load(full_path, sr=sr) - data = np.clip(data, -1, 1) # potentially out of [-1, 1] due to resampling - data = data * 32768.0 # match values loaded by scipy - return torch.FloatTensor(data.astype(np.float32)), sr - - -def read_binary_audio(bin_data, tar_sr=None): - """ - read binary audio (`bytes` or `uint8` `numpy.ndarray`) to `float32` - `numpy.ndarray` - - RETURNS: - data (np.ndarray) : audio of shape (n,) or (2, n) - tar_sr (int) : sample rate - """ - data, ori_sr = sf.read(io.BytesIO(bin_data), dtype='float32') - data = data.T - if (tar_sr is not None) and (ori_sr != tar_sr): - data = librosa.resample(data, ori_sr, tar_sr) - else: - tar_sr = ori_sr - data = np.clip(data, -1, 1) - data = data * 32768.0 - return torch.FloatTensor(data.astype(np.float32)), tar_sr - - -def load_filepaths_and_text(filename): - with open(filename, encoding='utf-8') as f: - data = [json.loads(line.rstrip()) for line in f] - return data - - -def to_gpu(x): - x = x.contiguous() - - if torch.cuda.is_available(): - x = x.cuda(non_blocking=True) - return torch.autograd.Variable(x) - - -def load_code_dict(path, add_sos=False, add_eos=False): - if not path: - return {} - - with open(path, 'r') as f: - codes = ['_'] + [line.rstrip() for line in f] # '_' for pad - code_dict = {c: i for i, c in enumerate(codes)} - - if add_sos: - code_dict[SOS_TOK] = len(code_dict) - if add_eos: - code_dict[EOS_TOK] = len(code_dict) - assert(set(code_dict.values()) == set(range(len(code_dict)))) - - return code_dict - - -def load_obs_label_dict(path): - if not path: - return {} - with open(path, 'r') as f: - obs_labels = [line.rstrip() for line in f] - return {c: i for i, c in enumerate(obs_labels)} - - -# A simple timer class inspired from `tnt.TimeMeter` -class CudaTimer: - def __init__(self, keys): - self.keys = keys - self.reset() - - def start(self, key): - s = torch.cuda.Event(enable_timing=True) - s.record() - self.start_events[key].append(s) - return self - - def stop(self, key): - e = torch.cuda.Event(enable_timing=True) - e.record() - 
self.end_events[key].append(e) - return self - - def reset(self): - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - self.running_times = collections.defaultdict(float) - self.n = collections.defaultdict(int) - return self - - def value(self): - self._synchronize() - return {k: self.running_times[k] / self.n[k] for k in self.keys} - - def _synchronize(self): - torch.cuda.synchronize() - for k in self.keys: - starts = self.start_events[k] - ends = self.end_events[k] - if len(starts) == 0: - raise ValueError("Trying to divide by zero in TimeMeter") - if len(ends) != len(starts): - raise ValueError("Call stop before checking value!") - time = 0 - for start, end in zip(starts, ends): - time += start.elapsed_time(end) - self.running_times[k] += time * 1e-3 - self.n[k] += len(starts) - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - - -# Used to measure the time taken for multiple events -class Timer: - def __init__(self, keys): - self.keys = keys - self.n = {} - self.running_time = {} - self.total_time = {} - self.reset() - - def start(self, key): - self.running_time[key] = time.time() - return self - - def stop(self, key): - self.total_time[key] = time.time() - self.running_time[key] - self.n[key] += 1 - self.running_time[key] = None - return self - - def reset(self): - for k in self.keys: - self.total_time[k] = 0 - self.running_time[k] = None - self.n[k] = 0 - return self - - def value(self): - vals = {} - for k in self.keys: - if self.n[k] == 0: - raise ValueError("Trying to divide by zero in TimeMeter") - else: - vals[k] = self.total_time[k] / self.n[k] - return vals - diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/glow/prepare_data.sh b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/glow/prepare_data.sh deleted file mode 100644 index 2357eeebd0fb7e6fba858242af44e8b8aa87fdf9..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/glow/prepare_data.sh +++ /dev/null @@ -1,12 +0,0 @@ -input_text_path='/home/harveen/en/iitm_data/english/txt.done.data' -input_wav_path='/home/harveen/en/iitm_data/english/wav_22k' -gender='male' - - -output_data_path='../../data/glow/'$gender - -valid_samples=100 -test_samples=10 - -mkdir -p $output_data_path -python ../../utils/glow/prepare_iitm_data_glow_en.py -i $input_text_path -o $output_data_path -w $input_wav_path -v $valid_samples -t $test_samples diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.custom_classification.md b/spaces/ICML2022/OFA/fairseq/examples/roberta/README.custom_classification.md deleted file mode 100644 index 7254bb7d178760ef5b847901bbcac3711af33ca2..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/roberta/README.custom_classification.md +++ /dev/null @@ -1,168 +0,0 @@ -# Finetuning RoBERTa on a custom classification task - -This example shows how to finetune RoBERTa on the IMDB dataset, but should illustrate the process for most classification tasks. - -### 1) Get the data - -```bash -wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz -tar zxvf aclImdb_v1.tar.gz -``` - - -### 2) Format data - -`IMDB` data has one data-sample in each file, below python code-snippet converts it one file for train and valid each for ease of processing. 
-```python -import argparse -import os -import random -from glob import glob - -random.seed(0) - -def main(args): - for split in ['train', 'test']: - samples = [] - for class_label in ['pos', 'neg']: - fnames = glob(os.path.join(args.datadir, split, class_label) + '/*.txt') - for fname in fnames: - with open(fname) as fin: - line = fin.readline() - samples.append((line, 1 if class_label == 'pos' else 0)) - random.shuffle(samples) - out_fname = 'train' if split == 'train' else 'dev' - f1 = open(os.path.join(args.datadir, out_fname + '.input0'), 'w') - f2 = open(os.path.join(args.datadir, out_fname + '.label'), 'w') - for sample in samples: - f1.write(sample[0] + '\n') - f2.write(str(sample[1]) + '\n') - f1.close() - f2.close() - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--datadir', default='aclImdb') - args = parser.parse_args() - main(args) -``` - - -### 3) BPE encode - -Run `multiprocessing_bpe_encoder`, you can also do this in previous step for each sample but that might be slower. -```bash -# Download encoder.json and vocab.bpe -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' - -for SPLIT in train dev; do - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "aclImdb/$SPLIT.input0" \ - --outputs "aclImdb/$SPLIT.input0.bpe" \ - --workers 60 \ - --keep-empty -done -``` - - -### 4) Preprocess data - -```bash -# Download fairseq dictionary. -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -fairseq-preprocess \ - --only-source \ - --trainpref "aclImdb/train.input0.bpe" \ - --validpref "aclImdb/dev.input0.bpe" \ - --destdir "IMDB-bin/input0" \ - --workers 60 \ - --srcdict dict.txt - -fairseq-preprocess \ - --only-source \ - --trainpref "aclImdb/train.label" \ - --validpref "aclImdb/dev.label" \ - --destdir "IMDB-bin/label" \ - --workers 60 - -``` - - -### 5) Run training - -```bash -TOTAL_NUM_UPDATES=7812 # 10 epochs through IMDB for bsz 32 -WARMUP_UPDATES=469 # 6 percent of the number of updates -LR=1e-05 # Peak LR for polynomial LR scheduler. -HEAD_NAME=imdb_head # Custom name for the classification head. -NUM_CLASSES=2 # Number of classes for the classification task. -MAX_SENTENCES=8 # Batch size. -ROBERTA_PATH=/path/to/roberta.large/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train IMDB-bin/ \ - --restore-file $ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --classification-head-name $HEAD_NAME \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --shorten-method "truncate" \ - --find-unused-parameters \ - --update-freq 4 -``` - -The above command will finetune RoBERTa-large with an effective batch-size of 32 -sentences (`--batch-size=8 --update-freq=4`). 
The expected -`best-validation-accuracy` after 10 epochs is ~96.5%. - -If you run out of GPU memory, try decreasing `--batch-size` and increase -`--update-freq` to compensate. - - -### 6) Load model using hub interface - -Now we can load the trained model checkpoint using the RoBERTa hub interface. - -Assuming your checkpoints are stored in `checkpoints/`: -```python -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained( - 'checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='IMDB-bin' -) -roberta.eval() # disable dropout -``` - -Finally you can make predictions using the `imdb_head` (or whatever you set -`--classification-head-name` to during training): -```python -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.label_dictionary.nspecial] -) - -tokens = roberta.encode('Best movie this year') -pred = label_fn(roberta.predict('imdb_head', tokens).argmax().item()) -assert pred == '1' # positive - -tokens = roberta.encode('Worst movie ever') -pred = label_fn(roberta.predict('imdb_head', tokens).argmax().item()) -assert pred == '0' # negative -``` diff --git a/spaces/IISRFactCheck/claim_detection/code/prediction.py b/spaces/IISRFactCheck/claim_detection/code/prediction.py deleted file mode 100644 index 7cbe68fe78c2e77d60371195d6cd175362bd0f64..0000000000000000000000000000000000000000 --- a/spaces/IISRFactCheck/claim_detection/code/prediction.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch -from args import args, config -from tqdm import tqdm -from items_dataset import items_dataset - -def test_predict(test_loader, device, model, min_label=1, max_label=3): - model.eval() - result = [] - - for i, test_batch in enumerate(tqdm(test_loader)): - batch_text = test_batch['batch_text'] - input_ids = test_batch['input_ids'].to(device) - token_type_ids = test_batch['token_type_ids'].to(device) - attention_mask = test_batch['attention_mask'].to(device) - #labels = test_batch['labels'].to(device) - crf_mask = test_batch["crf_mask"].to(device) - sample_mapping = test_batch["overflow_to_sample_mapping"] - output = model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, labels=None, crf_mask=crf_mask) - if args.use_crf: - prediction = model.crf.decode(output[0], crf_mask) - else: - prediction = torch.max(output[0], -1).indices - - #make result of every sample - sample_id = -1 - sample_result= {"text_a" : test_batch['batch_text'][0]} - for batch_id in range(len(sample_mapping)): - change_sample = False - if sample_id != sample_mapping[batch_id]: change_sample = True - #print(i, id) - if change_sample: - sample_id = sample_mapping[batch_id] - sample_result= {"text_a" : test_batch['batch_text'][sample_id]} - decode_span_table = torch.zeros(len(test_batch['batch_text'][sample_id])) - - spans = items_dataset.cal_agreement_span(None, agreement_table=prediction[batch_id], min_agree=min_label, max_agree=max_label) - #decode spans - for span in spans: - #print(span) - if span[0]==0: span[0]+=1 - if span[1]==1: span[1]+=1 - - while(True): - start = test_batch[batch_id].token_to_chars(span[0]) - if start != None or span[0]>=span[1]: - break - span[0]+=1 - - while(True): - end = test_batch[batch_id].token_to_chars(span[1]) - if end != None or span[0]>=span[1]: - break - span[1]-=1 - - if span[0]512): print(de_start, de_end) - decode_span_table[de_start:de_end]=2 #insite - decode_span_table[de_start]=1 #begin - if change_sample: - sample_result["predict_span_table"] = decode_span_table - 
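-                # decode_span_table is a per-character map of the predicted spans for this
-                # sample: 1 marks the first character of a span, 2 marks characters inside a
-                # span, and 0 (the initial value) marks characters outside any span.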
#sample_result["boundary"] = test_batch["boundary"][id] - result.append(sample_result) - model.train() - return result - -def add_sentence_table(result, pattern =":;。,,?!~!: ", threshold_num=5, threshold_rate=0.5): - for sample in result: - boundary_list = [] - for i, char in enumerate(sample['text_a']): - if char in pattern: - boundary_list.append(i) - boundary_list.append(len(sample['text_a'])+1) - start=0 - end =0 - fist_sentence = True - sample["predict_sentence_table"] = torch.zeros(len(sample["predict_span_table"])) - for boundary in boundary_list: - end = boundary - predict_num = sum(sample["predict_span_table"][start:end]>0) - sentence_num = len(sample["predict_span_table"][start:end]) - if(predict_num > threshold_num) or (predict_num > sentence_num*threshold_rate): - if fist_sentence: - sample["predict_sentence_table"][start:end] = 2 - sample["predict_sentence_table"][start] = 1 - fist_sentence=False - else: - sample["predict_sentence_table"][start-1:end] = 2 - else: fist_sentence =True - start = end+1 - -def add_doc_id(result, test_data): - #make dict {'text_a':"docid"} - text_to_id = dict() - for sample in test_data: - text_to_id[sample["text_a"]] = sample["docid"] - - #add doc_id - for sample in result: - sample["docid"] = text_to_id[sample["text_a"]] \ No newline at end of file diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/data/vimeo90k_dataset.py b/spaces/Iceclear/StableSR/StableSR/basicsr/data/vimeo90k_dataset.py deleted file mode 100644 index e5e33e1082667aeee61fecf2436fb287e82e0936..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/data/vimeo90k_dataset.py +++ /dev/null @@ -1,199 +0,0 @@ -import random -import torch -from pathlib import Path -from torch.utils import data as data - -from basicsr.data.transforms import augment, paired_random_crop -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY - - -@DATASET_REGISTRY.register() -class Vimeo90KDataset(data.Dataset): - """Vimeo90K dataset for training. - - The keys are generated from a meta info txt file. - basicsr/data/meta_info/meta_info_Vimeo90K_train_GT.txt - - Each line contains the following items, separated by a white space. - - 1. clip name; - 2. frame number; - 3. image shape - - Examples: - - :: - - 00001/0001 7 (256,448,3) - 00001/0002 7 (256,448,3) - - - Key examples: "00001/0001" - - GT (gt): Ground-Truth; - - LQ (lq): Low-Quality, e.g., low-resolution/blurry/noisy/compressed frames. - - The neighboring frame list for different num_frame: - - :: - - num_frame | frame list - 1 | 4 - 3 | 3,4,5 - 5 | 2,3,4,5,6 - 7 | 1,2,3,4,5,6,7 - - Args: - opt (dict): Config for train dataset. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - meta_info_file (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - num_frame (int): Window size for input frames. - gt_size (int): Cropped patched size for gt patches. - random_reverse (bool): Random reverse input frames. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - scale (bool): Scale, which will be added automatically. 
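-
-    A minimal illustrative ``opt`` dict (paths and values below are placeholders,
-    not defaults; ``scale`` is added automatically as noted above):
-
-    ::
-
-        opt = dict(
-            dataroot_gt='datasets/vimeo90k/GT',
-            dataroot_lq='datasets/vimeo90k/LQ',
-            meta_info_file='basicsr/data/meta_info/meta_info_Vimeo90K_train_GT.txt',
-            io_backend=dict(type='disk'),
-            num_frame=7,
-            gt_size=256,
-            random_reverse=False,
-            use_hflip=True,
-            use_rot=True)
-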
- """ - - def __init__(self, opt): - super(Vimeo90KDataset, self).__init__() - self.opt = opt - self.gt_root, self.lq_root = Path(opt['dataroot_gt']), Path(opt['dataroot_lq']) - - with open(opt['meta_info_file'], 'r') as fin: - self.keys = [line.split(' ')[0] for line in fin] - - # file client (io backend) - self.file_client = None - self.io_backend_opt = opt['io_backend'] - self.is_lmdb = False - if self.io_backend_opt['type'] == 'lmdb': - self.is_lmdb = True - self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root] - self.io_backend_opt['client_keys'] = ['lq', 'gt'] - - # indices of input images - self.neighbor_list = [i + (9 - opt['num_frame']) // 2 for i in range(opt['num_frame'])] - - # temporal augmentation configs - self.random_reverse = opt['random_reverse'] - logger = get_root_logger() - logger.info(f'Random reverse is {self.random_reverse}.') - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # random reverse - if self.random_reverse and random.random() < 0.5: - self.neighbor_list.reverse() - - scale = self.opt['scale'] - gt_size = self.opt['gt_size'] - key = self.keys[index] - clip, seq = key.split('/') # key example: 00001/0001 - - # get the GT frame (im4.png) - if self.is_lmdb: - img_gt_path = f'{key}/im4' - else: - img_gt_path = self.gt_root / clip / seq / 'im4.png' - img_bytes = self.file_client.get(img_gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - - # get the neighboring LQ frames - img_lqs = [] - for neighbor in self.neighbor_list: - if self.is_lmdb: - img_lq_path = f'{clip}/{seq}/im{neighbor}' - else: - img_lq_path = self.lq_root / clip / seq / f'im{neighbor}.png' - img_bytes = self.file_client.get(img_lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - img_lqs.append(img_lq) - - # randomly crop - img_gt, img_lqs = paired_random_crop(img_gt, img_lqs, gt_size, scale, img_gt_path) - - # augmentation - flip, rotate - img_lqs.append(img_gt) - img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot']) - - img_results = img2tensor(img_results) - img_lqs = torch.stack(img_results[0:-1], dim=0) - img_gt = img_results[-1] - - # img_lqs: (t, c, h, w) - # img_gt: (c, h, w) - # key: str - return {'lq': img_lqs, 'gt': img_gt, 'key': key} - - def __len__(self): - return len(self.keys) - - -@DATASET_REGISTRY.register() -class Vimeo90KRecurrentDataset(Vimeo90KDataset): - - def __init__(self, opt): - super(Vimeo90KRecurrentDataset, self).__init__(opt) - - self.flip_sequence = opt['flip_sequence'] - self.neighbor_list = [1, 2, 3, 4, 5, 6, 7] - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # random reverse - if self.random_reverse and random.random() < 0.5: - self.neighbor_list.reverse() - - scale = self.opt['scale'] - gt_size = self.opt['gt_size'] - key = self.keys[index] - clip, seq = key.split('/') # key example: 00001/0001 - - # get the neighboring LQ and GT frames - img_lqs = [] - img_gts = [] - for neighbor in self.neighbor_list: - if self.is_lmdb: - img_lq_path = f'{clip}/{seq}/im{neighbor}' - img_gt_path = f'{clip}/{seq}/im{neighbor}' - else: - img_lq_path = self.lq_root / clip / seq / f'im{neighbor}.png' - img_gt_path = self.gt_root / clip / seq / f'im{neighbor}.png' - # LQ - img_bytes = self.file_client.get(img_lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - # GT - img_bytes = 
self.file_client.get(img_gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - - img_lqs.append(img_lq) - img_gts.append(img_gt) - - # randomly crop - img_gts, img_lqs = paired_random_crop(img_gts, img_lqs, gt_size, scale, img_gt_path) - - # augmentation - flip, rotate - img_lqs.extend(img_gts) - img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot']) - - img_results = img2tensor(img_results) - img_lqs = torch.stack(img_results[:7], dim=0) - img_gts = torch.stack(img_results[7:], dim=0) - - if self.flip_sequence: # flip the sequence: 7 frames to 14 frames - img_lqs = torch.cat([img_lqs, img_lqs.flip(0)], dim=0) - img_gts = torch.cat([img_gts, img_gts.flip(0)], dim=0) - - # img_lqs: (t, c, h, w) - # img_gt: (c, h, w) - # key: str - return {'lq': img_lqs, 'gt': img_gts, 'key': key} - - def __len__(self): - return len(self.keys) diff --git a/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py b/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py deleted file mode 100644 index d8d0671f9c059edb00a32773d6a5fe9deb1014d9..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py +++ /dev/null @@ -1,422 +0,0 @@ -"""make variations of input image""" - -import argparse, os, sys, glob -import PIL -import torch -import numpy as np -import torchvision -from omegaconf import OmegaConf -from PIL import Image -from tqdm import tqdm, trange -from itertools import islice -from einops import rearrange, repeat -from torchvision.utils import make_grid -from torch import autocast -from contextlib import nullcontext -import time -from pytorch_lightning import seed_everything - -from ldm.util import instantiate_from_config -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler -import math -import copy -import torch.nn.functional as F -import cv2 -from util_image import ImageSpliterTh -from pathlib import Path -from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. 
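-    For example, following the description above, ``space_timesteps(300, [10, 15, 20])``
-    returns 45 steps in total (10 strided steps from the first 100 timesteps, 15 from
-    the second 100, and 20 from the last 100), while ``space_timesteps(1000, "ddim250")``
-    returns 250 evenly spaced steps using the fixed DDIM stride.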
- """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim"):]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] #[250,] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - -def load_img(path): - image = Image.open(path).convert("RGB") - w, h = image.size - print(f"loaded input image of size ({w}, {h}) from {path}") - w, h = map(lambda x: x - x % 8, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.*image - 1. - -def read_image(im_path): - im = np.array(Image.open(im_path).convert("RGB")) - im = im.astype(np.float32)/255.0 - im = im[None].transpose(0,3,1,2) - im = (torch.from_numpy(im) - 0.5) / 0.5 - - return im.cuda() - -def main(): - parser = argparse.ArgumentParser() - - parser.add_argument( - "--init-img", - type=str, - nargs="?", - help="path to the input image", - default="inputs/user_upload" - ) - parser.add_argument( - "--outdir", - type=str, - nargs="?", - help="dir to write results to", - default="outputs/user_upload" - ) - parser.add_argument( - "--ddpm_steps", - type=int, - default=1000, - help="number of ddpm sampling steps", - ) - parser.add_argument( - "--n_iter", - type=int, - default=1, - help="sample this often", - ) - parser.add_argument( - "--C", - type=int, - default=4, - help="latent channels", - ) - parser.add_argument( - "--f", - type=int, - default=8, - help="downsampling factor, most often 8 or 16", - ) - parser.add_argument( - "--n_samples", - type=int, - default=1, - help="how many samples to produce for each given prompt. 
A.k.a batch size", - ) - parser.add_argument( - "--config", - type=str, - default="configs/stable-diffusion/v1-inference.yaml", - help="path to config which constructs model", - ) - parser.add_argument( - "--ckpt", - type=str, - default="./stablesr_000117.ckpt", - help="path to checkpoint of model", - ) - parser.add_argument( - "--vqgan_ckpt", - type=str, - default="./vqgan_cfw_00011.ckpt", - help="path to checkpoint of VQGAN model", - ) - parser.add_argument( - "--seed", - type=int, - default=42, - help="the seed (for reproducible sampling)", - ) - parser.add_argument( - "--precision", - type=str, - help="evaluate at this precision", - choices=["full", "autocast"], - default="autocast" - ) - parser.add_argument( - "--dec_w", - type=float, - default=0.5, - help="weight for combining VQGAN and Diffusion", - ) - parser.add_argument( - "--tile_overlap", - type=int, - default=32, - help="tile overlap size (in latent)", - ) - parser.add_argument( - "--upscale", - type=float, - default=4.0, - help="upsample scale", - ) - parser.add_argument( - "--colorfix_type", - type=str, - default="nofix", - help="Color fix type to adjust the color of HR result according to LR input: adain (used in paper); wavelet; nofix", - ) - parser.add_argument( - "--vqgantile_stride", - type=int, - default=1000, - help="the stride for tile operation before VQGAN decoder (in pixel)", - ) - parser.add_argument( - "--vqgantile_size", - type=int, - default=1280, - help="the size for tile operation before VQGAN decoder (in pixel)", - ) - parser.add_argument( - "--input_size", - type=int, - default=512, - help="input size", - ) - - opt = parser.parse_args() - seed_everything(opt.seed) - - print('>>>>>>>>>>color correction>>>>>>>>>>>') - if opt.colorfix_type == 'adain': - print('Use adain color correction') - elif opt.colorfix_type == 'wavelet': - print('Use wavelet color correction') - else: - print('No color correction') - print('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>') - - config = OmegaConf.load(f"{opt.config}") - model = load_model_from_config(config, f"{opt.ckpt}") - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model = model.to(device) - - model.configs = config - - vqgan_config = OmegaConf.load("configs/autoencoder/autoencoder_kl_64x64x4_resi.yaml") - vq_model = load_model_from_config(vqgan_config, opt.vqgan_ckpt) - vq_model = vq_model.to(device) - vq_model.decoder.fusion_w = opt.dec_w - - os.makedirs(opt.outdir, exist_ok=True) - outpath = opt.outdir - - batch_size = opt.n_samples - - images_path_ori = sorted(glob.glob(os.path.join(opt.init_img, "*"))) - images_path = copy.deepcopy(images_path_ori) - for item in images_path_ori: - img_name = item.split('/')[-1] - if os.path.exists(os.path.join(outpath, img_name)): - images_path.remove(item) - print(f"Found {len(images_path)} inputs.") - - model.register_schedule(given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=0.00085, linear_end=0.0120, cosine_s=8e-3) - model.num_timesteps = 1000 - - sqrt_alphas_cumprod = copy.deepcopy(model.sqrt_alphas_cumprod) - sqrt_one_minus_alphas_cumprod = copy.deepcopy(model.sqrt_one_minus_alphas_cumprod) - - use_timesteps = set(space_timesteps(1000, [opt.ddpm_steps])) - last_alpha_cumprod = 1.0 - new_betas = [] - timestep_map = [] - for i, alpha_cumprod in enumerate(model.alphas_cumprod): - if i in use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - timestep_map.append(i) - new_betas = [beta.data.cpu().numpy() for beta in 
new_betas] - model.register_schedule(given_betas=np.array(new_betas), timesteps=len(new_betas)) - model.num_timesteps = 1000 - model.ori_timesteps = list(use_timesteps) - model.ori_timesteps.sort() - model = model.to(device) - - precision_scope = autocast if opt.precision == "autocast" else nullcontext - niqe_list = [] - with torch.no_grad(): - with precision_scope("cuda"): - with model.ema_scope(): - tic = time.time() - all_samples = list() - for n in trange(len(images_path), desc="Sampling"): - if (n + 1) % opt.n_samples == 1 or opt.n_samples == 1: - cur_image = read_image(images_path[n]) - size_min = min(cur_image.size(-1), cur_image.size(-2)) - upsample_scale = max(opt.input_size/size_min, opt.upscale) - cur_image = F.interpolate( - cur_image, - size=(int(cur_image.size(-2)*upsample_scale), - int(cur_image.size(-1)*upsample_scale)), - mode='bicubic', - ) - cur_image = cur_image.clamp(-1, 1) - im_lq_bs = [cur_image, ] # 1 x c x h x w, [-1, 1] - im_path_bs = [images_path[n], ] - else: - cur_image = read_image(images_path[n]) - size_min = min(cur_image.size(-1), cur_image.size(-2)) - upsample_scale = max(opt.input_size/size_min, opt.upscale) - cur_image = F.interpolate( - cur_image, - size=(int(cur_image.size(-2)*upsample_scale), - int(cur_image.size(-1)*upsample_scale)), - mode='bicubic', - ) - cur_image = cur_image.clamp(-1, 1) - im_lq_bs.append(cur_image) # 1 x c x h x w, [-1, 1] - im_path_bs.append(images_path[n]) # 1 x c x h x w, [-1, 1] - - if (n + 1) % opt.n_samples == 0 or (n+1) == len(images_path): - im_lq_bs = torch.cat(im_lq_bs, dim=0) - ori_h, ori_w = im_lq_bs.shape[2:] - ref_patch=None - if not (ori_h % 32 == 0 and ori_w % 32 == 0): - flag_pad = True - pad_h = ((ori_h // 32) + 1) * 32 - ori_h - pad_w = ((ori_w // 32) + 1) * 32 - ori_w - im_lq_bs = F.pad(im_lq_bs, pad=(0, pad_w, 0, pad_h), mode='reflect') - else: - flag_pad = False - - if im_lq_bs.shape[2] > opt.vqgantile_size or im_lq_bs.shape[3] > opt.vqgantile_size: - im_spliter = ImageSpliterTh(im_lq_bs, opt.vqgantile_size, opt.vqgantile_stride, sf=1) - for im_lq_pch, index_infos in im_spliter: - seed_everything(opt.seed) - init_latent = model.get_first_stage_encoding(model.encode_first_stage(im_lq_pch)) # move to latent space - text_init = ['']*opt.n_samples - semantic_c = model.cond_stage_model(text_init) - noise = torch.randn_like(init_latent) - # If you would like to start from the intermediate steps, you can add noise to LR to the specific steps. - t = repeat(torch.tensor([999]), '1 -> b', b=im_lq_bs.size(0)) - t = t.to(device).long() - x_T = model.q_sample_respace(x_start=init_latent, t=t, sqrt_alphas_cumprod=sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod=sqrt_one_minus_alphas_cumprod, noise=noise) - # x_T = noise - samples, _ = model.sample_canvas(cond=semantic_c, struct_cond=init_latent, batch_size=im_lq_pch.size(0), timesteps=opt.ddpm_steps, time_replace=opt.ddpm_steps, x_T=x_T, return_intermediates=True, tile_size=int(opt.input_size/8), tile_overlap=opt.tile_overlap, batch_size_sample=opt.n_samples) - _, enc_fea_lq = vq_model.encode(im_lq_pch) - x_samples = vq_model.decode(samples * 1. 
/ model.scale_factor, enc_fea_lq) - if opt.colorfix_type == 'adain': - x_samples = adaptive_instance_normalization(x_samples, im_lq_pch) - elif opt.colorfix_type == 'wavelet': - x_samples = wavelet_reconstruction(x_samples, im_lq_pch) - im_spliter.update(x_samples, index_infos) - im_sr = im_spliter.gather() - im_sr = torch.clamp((im_sr+1.0)/2.0, min=0.0, max=1.0) - else: - init_latent = model.get_first_stage_encoding(model.encode_first_stage(im_lq_bs)) # move to latent space - text_init = ['']*opt.n_samples - semantic_c = model.cond_stage_model(text_init) - noise = torch.randn_like(init_latent) - # If you would like to start from the intermediate steps, you can add noise to LR to the specific steps. - t = repeat(torch.tensor([999]), '1 -> b', b=im_lq_bs.size(0)) - t = t.to(device).long() - x_T = model.q_sample_respace(x_start=init_latent, t=t, sqrt_alphas_cumprod=sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod=sqrt_one_minus_alphas_cumprod, noise=noise) - # x_T = noise - samples, _ = model.sample_canvas(cond=semantic_c, struct_cond=init_latent, batch_size=im_lq_bs.size(0), timesteps=opt.ddpm_steps, time_replace=opt.ddpm_steps, x_T=x_T, return_intermediates=True, tile_size=int(opt.input_size/8), tile_overlap=opt.tile_overlap, batch_size_sample=opt.n_samples) - _, enc_fea_lq = vq_model.encode(im_lq_bs) - x_samples = vq_model.decode(samples * 1. / model.scale_factor, enc_fea_lq) - if opt.colorfix_type == 'adain': - x_samples = adaptive_instance_normalization(x_samples, im_lq_bs) - elif opt.colorfix_type == 'wavelet': - x_samples = wavelet_reconstruction(x_samples, im_lq_bs) - im_sr = torch.clamp((x_samples+1.0)/2.0, min=0.0, max=1.0) - - if upsample_scale > opt.upscale: - im_sr = F.interpolate( - im_sr, - size=(int(im_lq_bs.size(-2)*opt.upscale/upsample_scale), - int(im_lq_bs.size(-1)*opt.upscale/upsample_scale)), - mode='bicubic', - ) - im_sr = torch.clamp(im_sr, min=0.0, max=1.0) - - im_sr = im_sr.cpu().numpy().transpose(0,2,3,1)*255 # b x h x w x c - - if flag_pad: - im_sr = im_sr[:, :ori_h, :ori_w, ] - - for jj in range(im_lq_bs.shape[0]): - img_name = str(Path(im_path_bs[jj]).name) - basename = os.path.splitext(os.path.basename(img_name))[0] - outpath = str(Path(opt.outdir)) + '/' + basename + '.png' - Image.fromarray(im_sr[jj, ].astype(np.uint8)).save(outpath) - - toc = time.time() - - print(f"Your samples are ready and waiting for you here: \n{outpath} \n" - f" \nEnjoy.") - - -if __name__ == "__main__": - main() diff --git a/spaces/Illumotion/Koboldcpp/include/CL/Utils/OpenCLUtilsCpp_Export.h b/spaces/Illumotion/Koboldcpp/include/CL/Utils/OpenCLUtilsCpp_Export.h deleted file mode 100644 index b063c9fe11a1ecd5959feb5a30562f052012f8a2..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/Utils/OpenCLUtilsCpp_Export.h +++ /dev/null @@ -1,42 +0,0 @@ - -#ifndef UTILSCPP_EXPORT_H -#define UTILSCPP_EXPORT_H - -#ifdef OPENCLUTILSCPP_STATIC_DEFINE -# define UTILSCPP_EXPORT -# define OPENCLUTILSCPP_NO_EXPORT -#else -# ifndef UTILSCPP_EXPORT -# ifdef OpenCLUtilsCpp_EXPORTS - /* We are building this library */ -# define UTILSCPP_EXPORT -# else - /* We are using this library */ -# define UTILSCPP_EXPORT -# endif -# endif - -# ifndef OPENCLUTILSCPP_NO_EXPORT -# define OPENCLUTILSCPP_NO_EXPORT -# endif -#endif - -#ifndef OPENCLUTILSCPP_DEPRECATED -# define OPENCLUTILSCPP_DEPRECATED __declspec(deprecated) -#endif - -#ifndef OPENCLUTILSCPP_DEPRECATED_EXPORT -# define OPENCLUTILSCPP_DEPRECATED_EXPORT UTILSCPP_EXPORT OPENCLUTILSCPP_DEPRECATED -#endif - -#ifndef 
OPENCLUTILSCPP_DEPRECATED_NO_EXPORT -# define OPENCLUTILSCPP_DEPRECATED_NO_EXPORT OPENCLUTILSCPP_NO_EXPORT OPENCLUTILSCPP_DEPRECATED -#endif - -#if 0 /* DEFINE_NO_DEPRECATED */ -# ifndef OPENCLUTILSCPP_NO_DEPRECATED -# define OPENCLUTILSCPP_NO_DEPRECATED -# endif -#endif - -#endif /* UTILSCPP_EXPORT_H */ diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/table.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/table.tsx deleted file mode 100644 index 953fb3c003bc0cd9d93059c373bc23e6aecbded8..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/table.tsx +++ /dev/null @@ -1,114 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -const Table = React.forwardRef< - HTMLTableElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
- - -)) -Table.displayName = "Table" - -const TableHeader = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableHeader.displayName = "TableHeader" - -const TableBody = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableBody.displayName = "TableBody" - -const TableFooter = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableFooter.displayName = "TableFooter" - -const TableRow = React.forwardRef< - HTMLTableRowElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableRow.displayName = "TableRow" - -const TableHead = React.forwardRef< - HTMLTableCellElement, - React.ThHTMLAttributes ->(({ className, ...props }, ref) => ( -
-)) -TableHead.displayName = "TableHead" - -const TableCell = React.forwardRef< - HTMLTableCellElement, - React.TdHTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableCell.displayName = "TableCell" - -const TableCaption = React.forwardRef< - HTMLTableCaptionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
-)) -TableCaption.displayName = "TableCaption" - -export { - Table, - TableHeader, - TableBody, - TableFooter, - TableHead, - TableRow, - TableCell, - TableCaption, -} diff --git a/spaces/Jikiwi/sovits-models/vdecoder/__init__.py b/spaces/Jikiwi/sovits-models/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Jipski/Flos_gpt-2/app.py b/spaces/Jipski/Flos_gpt-2/app.py deleted file mode 100644 index c433cc8043cfd146395b9251d405b1ca0559b4c7..0000000000000000000000000000000000000000 --- a/spaces/Jipski/Flos_gpt-2/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import transformers -import streamlit as st -from transformers import AutoTokenizer, AutoModelWithLMHead - -tokenizer = AutoTokenizer.from_pretrained("anonymous-german-nlp/german-gpt2") - - -@st.cache -def load_model(model_name): - model = AutoModelWithLMHead.from_pretrained("Jipski/Flos_gpt-2_erw-02") - return model -model = load_model("Jipski/Flos_gpt-2_erw") -def infer(input_ids, max_length, temperature, top_k, top_p, num_return_sequences): - output_sequences = model.generate( - input_ids=input_ids, - max_length=max_length, - temperature=temperature, - top_k=top_k, - top_p=top_p, - do_sample=True, - num_return_sequences=num_return_sequences, - ) - return output_sequences - -def update_showing(): - st.session_state.showing = st.session_state.gen - -default_value = "Jetzt tippen!" -#prompts -st.title("Flos gpt-2") -#st.write("The almighty king of text generation, GPT-2 comes in four available sizes, only three of which have been publicly made available. Feared for its fake news generation capabilities, it currently stands as the most syntactically coherent model. A direct successor to the original GPT, it reinforces the already established pre-training/fine-tuning killer duo. From the paper: Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.") -sent = st.text_area("Text", default_value, key='showing', height = 275) -max_length = st.sidebar.slider("Max Length", min_value = 50, max_value=500) -temperature = st.sidebar.slider("Temperature", value = 1.0, min_value = 0.0, max_value=1.0, step=0.05) -top_k = st.sidebar.slider("Top-k", min_value = 0, max_value=5, value = 0) -top_p = st.sidebar.slider("Top-p", min_value = 0.0, max_value=1.0, step = 0.05, value = 0.9) -num_return_sequences = st.sidebar.number_input('Number of Return Sequences', min_value=1, max_value=5, value=1, step=1) -encoded_prompt = tokenizer.encode(sent, add_special_tokens=False, return_tensors="pt") -if encoded_prompt.size()[-1] == 0: - input_ids = None -else: - input_ids = encoded_prompt -output_sequences = infer(input_ids, max_length, temperature, top_k, top_p, num_return_sequences) - -for generated_sequence_idx, generated_sequence in enumerate(output_sequences): - - print(f"=== GENERATED SEQUENCE {generated_sequence_idx + 1} ===") - generated_sequences = generated_sequence.tolist() - # Decode text - text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True) - # Remove all text after the stop token - #text = text[: text.find(args.stop_token) if args.stop_token else None] - # Add the prompt at the beginning of the sequence. 
Remove the excess text that was used for pre-processing - total_sequence = ( - sent + text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :] - ) - generated_sequences.append(total_sequence) - print(total_sequence) - -st.write(generated_sequences[-1]) -#st.text_area("Output", generated_sequences[-1], key='gen', height=275, on_change=update_showing) - -#st.session_state.catch_rand = generated_sequences[-1] -#st.write(st.session_state.catch_rand) - diff --git a/spaces/Jo0xFF/4xArText/utils/architecture/RRDB.py b/spaces/Jo0xFF/4xArText/utils/architecture/RRDB.py deleted file mode 100644 index 476ca5073b293810451ca792eccb46935bd61457..0000000000000000000000000000000000000000 --- a/spaces/Jo0xFF/4xArText/utils/architecture/RRDB.py +++ /dev/null @@ -1,260 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -import functools -import math -import re -from collections import OrderedDict - -import torch -import torch.nn as nn -import utils.architecture.block as B - - -# Borrowed from https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/ESRGAN.py -# Which enhanced stuff that was already here -class RRDBNet(nn.Module): - def __init__( - self, - state_dict, - norm=None, - act: str = "leakyrelu", - upsampler: str = "upconv", - mode: str = "CNA", - ) -> None: - """ - ESRGAN - Enhanced Super-Resolution Generative Adversarial Networks. - By Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, - and Chen Change Loy. - This is old-arch Residual in Residual Dense Block Network and is not - the newest revision that's available at github.com/xinntao/ESRGAN. - This is on purpose, the newest Network has severely limited the - potential use of the Network with no benefits. - This network supports model files from both new and old-arch. - Args: - norm: Normalization layer - act: Activation layer - upsampler: Upsample layer. 
upconv, pixel_shuffle - mode: Convolution mode - """ - super(RRDBNet, self).__init__() - - self.state = state_dict - self.norm = norm - self.act = act - self.upsampler = upsampler - self.mode = mode - - self.state_map = { - # currently supports old, new, and newer RRDBNet arch models - # ESRGAN, BSRGAN/RealSR, Real-ESRGAN - "model.0.weight": ("conv_first.weight",), - "model.0.bias": ("conv_first.bias",), - "model.1.sub./NB/.weight": ("trunk_conv.weight", "conv_body.weight"), - "model.1.sub./NB/.bias": ("trunk_conv.bias", "conv_body.bias"), - "model.3.weight": ("upconv1.weight", "conv_up1.weight"), - "model.3.bias": ("upconv1.bias", "conv_up1.bias"), - "model.6.weight": ("upconv2.weight", "conv_up2.weight"), - "model.6.bias": ("upconv2.bias", "conv_up2.bias"), - "model.8.weight": ("HRconv.weight", "conv_hr.weight"), - "model.8.bias": ("HRconv.bias", "conv_hr.bias"), - "model.10.weight": ("conv_last.weight",), - "model.10.bias": ("conv_last.bias",), - r"model.1.sub.\1.RDB\2.conv\3.0.\4": ( - r"RRDB_trunk\.(\d+)\.RDB(\d)\.conv(\d+)\.(weight|bias)", - r"body\.(\d+)\.rdb(\d)\.conv(\d+)\.(weight|bias)", - ), - } - if "params_ema" in self.state: - self.state = self.state["params_ema"] - self.num_blocks = self.get_num_blocks() - - self.plus = any("conv1x1" in k for k in self.state.keys()) - - self.state = self.new_to_old_arch(self.state) - - self.key_arr = list(self.state.keys()) - # print(self.key_arr) - - self.in_nc = self.state[self.key_arr[0]].shape[1] - self.out_nc = self.state[self.key_arr[-1]].shape[0] - - self.scale = self.get_scale() - - self.num_filters = self.state[self.key_arr[0]].shape[0] - - c2x2 = False - if self.state["model.0.weight"].shape[-2] == 2: - c2x2 = True - self.scale = math.ceil(self.scale ** (1.0 / 3)) - - # Detect if pixelunshuffle was used (Real-ESRGAN) - if self.in_nc in (self.out_nc * 4, self.out_nc * 16) and self.out_nc in ( - self.in_nc / 4, - self.in_nc / 16, - ): - self.shuffle_factor = int(math.sqrt(self.in_nc / self.out_nc)) - else: - self.shuffle_factor = None - - upsample_block = { - "upconv": B.upconv_block, - "pixel_shuffle": B.pixelshuffle_block, - }.get(self.upsampler) - if upsample_block is None: - raise NotImplementedError(f"Upsample mode [{self.upsampler}] is not found") - - if self.scale == 3: - upsample_blocks = upsample_block( - in_nc=self.num_filters, - out_nc=self.num_filters, - upscale_factor=3, - act_type=self.act, - c2x2=c2x2, - ) - else: - upsample_blocks = [ - upsample_block( - in_nc=self.num_filters, - out_nc=self.num_filters, - act_type=self.act, - c2x2=c2x2, - ) - for _ in range(int(math.log(self.scale, 2))) - ] - - self.model = B.sequential( - # fea conv - B.conv_block( - in_nc=self.in_nc, - out_nc=self.num_filters, - kernel_size=3, - norm_type=None, - act_type=None, - c2x2=c2x2, - ), - B.ShortcutBlock( - B.sequential( - # rrdb blocks - *[ - B.RRDB( - nf=self.num_filters, - kernel_size=3, - gc=32, - stride=1, - bias=True, - pad_type="zero", - norm_type=self.norm, - act_type=self.act, - mode="CNA", - plus=self.plus, - c2x2=c2x2, - ) - for _ in range(self.num_blocks) - ], - # lr conv - B.conv_block( - in_nc=self.num_filters, - out_nc=self.num_filters, - kernel_size=3, - norm_type=self.norm, - act_type=None, - mode=self.mode, - c2x2=c2x2, - ), - ) - ), - *upsample_blocks, - # hr_conv0 - B.conv_block( - in_nc=self.num_filters, - out_nc=self.num_filters, - kernel_size=3, - norm_type=None, - act_type=self.act, - c2x2=c2x2, - ), - # hr_conv1 - B.conv_block( - in_nc=self.num_filters, - out_nc=self.out_nc, - kernel_size=3, - norm_type=None, - 
act_type=None, - c2x2=c2x2, - ), - ) - - self.load_state_dict(self.state, strict=False) - - def new_to_old_arch(self, state): - """Convert a new-arch model state dictionary to an old-arch dictionary.""" - if "params_ema" in state: - state = state["params_ema"] - - if "conv_first.weight" not in state: - # model is already old arch, this is a loose check, but should be sufficient - return state - - # add nb to state keys - for kind in ("weight", "bias"): - self.state_map[f"model.1.sub.{self.num_blocks}.{kind}"] = self.state_map[ - f"model.1.sub./NB/.{kind}" - ] - del self.state_map[f"model.1.sub./NB/.{kind}"] - - old_state = OrderedDict() - for old_key, new_keys in self.state_map.items(): - for new_key in new_keys: - if r"\1" in old_key: - for k, v in state.items(): - sub = re.sub(new_key, old_key, k) - if sub != k: - old_state[sub] = v - else: - if new_key in state: - old_state[old_key] = state[new_key] - - # Sort by first numeric value of each layer - def compare(item1, item2): - parts1 = item1.split(".") - parts2 = item2.split(".") - int1 = int(parts1[1]) - int2 = int(parts2[1]) - return int1 - int2 - - sorted_keys = sorted(old_state.keys(), key=functools.cmp_to_key(compare)) - - # Rebuild the output dict in the right order - out_dict = OrderedDict((k, old_state[k]) for k in sorted_keys) - - return out_dict - - def get_scale(self, min_part: int = 6) -> int: - n = 0 - for part in list(self.state): - parts = part.split(".")[1:] - if len(parts) == 2: - part_num = int(parts[0]) - if part_num > min_part and parts[1] == "weight": - n += 1 - return 2**n - - def get_num_blocks(self) -> int: - nbs = [] - state_keys = self.state_map[r"model.1.sub.\1.RDB\2.conv\3.0.\4"] + ( - r"model\.\d+\.sub\.(\d+)\.RDB(\d+)\.conv(\d+)\.0\.(weight|bias)", - ) - for state_key in state_keys: - for k in self.state: - m = re.search(state_key, k) - if m: - nbs.append(int(m.group(1))) - if nbs: - break - return max(*nbs) + 1 - - def forward(self, x): - if self.shuffle_factor: - x = torch.pixel_unshuffle(x, downscale_factor=self.shuffle_factor) - return self.model(x) diff --git a/spaces/KPCGD/bingo/src/components/external-link.tsx b/spaces/KPCGD/bingo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/app.py b/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/app.py deleted file mode 100644 index ae64cc0b4c77538cac4b1c9a461b3dffa07a2529..0000000000000000000000000000000000000000 --- a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/app.py +++ /dev/null @@ -1,271 +0,0 @@ -import gradio as gr -from transformers import pipeline -from pydub import AudioSegment -import os -import speech_recognition as sr - - -html_seeker=''' - - - -
- - -''' - -model_name = "voidful/wav2vec2-xlsr-multilingual-56" -model0 = pipeline(task="automatic-speech-recognition", - model=model_name) - - -model_name = "SLPL/Sharif-wav2vec2" -model2 = pipeline(task="automatic-speech-recognition", - model=model_name) -model_name = "ghofrani/common8" -model1 = pipeline(task="automatic-speech-recognition", - model=model_name) - -import json -def predict_fa(speech,model): - if model== "SLPL/Sharif-wav2vec2": - text = model2(speech,return_timestamps="word" ) - elif model== "ghofrani/common8": - text = model1(speech,return_timestamps="word" ) - elif model== "voidful/wav2vec2-xlsr-multilingual-56": - text = model0(speech,return_timestamps="word" ) - - return [text['text'],json.dumps(text),html_seeker+json.dumps(text)+html_seeker2] - - -def convert_to_wav(filename): - filenameObj=os.path.splitext(filename) - audio = AudioSegment.from_file(filename,format=filenameObj[1].replace(".","")) - new_filename = filenameObj[0] + ".wav" - while os.path.exists(new_filename): - new_filename = os.path.splitext(new_filename)[0]+"(1)"+ ".wav" - audio.export(new_filename, format="wav") - print(f"Converting {filename} to {new_filename}...") - return new_filename -def g_rec(audio_File ,language): - r = sr.Recognizer() - print(audio_File) - - #if not os.path.splitext(audio_File)[1]==".wav": - # audio_File=convert_to_wav(audio_File) - hellow=sr.AudioFile(audio_File) - with hellow as source: - audio = r.record(source) - try: - s = r.recognize_google(audio,language =language) - res= "Text: "+s - except Exception as e: - res= "Exception: "+str(e) - return res - # Export file as .wav - -#predict(load_file_to_data('audio file path',sampling_rate=16_000)) # beware of the audio file sampling rate - -#predict_lang_specific(load_file_to_data('audio file path',sampling_rate=16_000),'en') # beware of the audio file sampling rate -with gr.Blocks() as demo: - gr.Markdown("multilingual Speech Recognition") - - with gr.Tab("Persian models"): - inputs_speech_fa =gr.Audio(source="upload", type="filepath", optional=True,label="Upload your audio:") - inputs_model_fa =gr.inputs.Radio(label="Language", choices=["ghofrani/common8","SLPL/Sharif-wav2vec2","voidful/wav2vec2-xlsr-multilingual-56"]) - output_transcribe1_fa = gr.Textbox(label="Transcribed text:") - output_transcribe1_fa1 = gr.Textbox(label="Transcribed text with timestamps:") - output_transcribe1_fa2 =gr.HTML(label="") - transcribe_audio1_fa= gr.Button("Submit") - with gr.Tab("google"): - gr.Markdown("set your speech language") - inputs_speech1 =[ - gr.Audio(source="upload", type="filepath"), - 
gr.Dropdown(choices=["af-ZA","am-ET","ar-AE","ar-BH","ar-DZ","ar-EG","ar-IL","ar-IQ","ar-JO","ar-KW","ar-LB","ar-MA","ar-MR","ar-OM","ar-PS","ar-QA","ar-SA","ar-TN","ar-YE","az-AZ","bg-BG","bn-BD","bn-IN","bs-BA","ca-ES","cs-CZ","da-DK","de-AT","de-CH","de-DE","el-GR","en-AU","en-CA","en-GB","en-GH","en-HK","en-IE","en-IN","en-KE","en-NG","en-NZ","en-PH","en-PK","en-SG","en-TZ","en-US","en-ZA","es-AR","es-BO","es-CL","es-CO","es-CR","es-DO","es-EC","es-ES","es-GT","es-HN","es-MX","es-NI","es-PA","es-PE","es-PR","es-PY","es-SV","es-US","es-UY","es-VE","et-EE","eu-ES","fa-IR","fi-FI","fil-PH","fr-BE","fr-CA","fr-CH","fr-FR","gl-ES","gu-IN","hi-IN","hr-HR","hu-HU","hy-AM","id-ID","is-IS","it-CH","it-IT","iw-IL","ja-JP","jv-ID","ka-GE","kk-KZ","km-KH","kn-IN","ko-KR","lo-LA","lt-LT","lv-LV","mk-MK","ml-IN","mn-MN","mr-IN","ms-MY","my-MM","ne-NP","nl-BE","nl-NL","no-NO","pa-Guru-IN","pl-PL","pt-BR","pt-PT","ro-RO","ru-RU","si-LK","sk-SK","sl-SI","sq-AL","sr-RS","su-ID","sv-SE","sw-KE","sw-TZ","ta-IN","ta-LK","ta-MY","ta-SG","te-IN","th-TH","tr-TR","uk-UA","ur-IN","ur-PK","uz-UZ","vi-VN","yue-Hant-HK","zh (cmn-Hans-CN)","zh-TW (cmn-Hant-TW)","zu-ZA"] -,value="fa-IR",label="language code") - ] - output_transcribe1 = gr.Textbox(label="output") - transcribe_audio1_go= gr.Button("Submit") - - transcribe_audio1_fa.click(fn=predict_fa, - inputs=[inputs_speech_fa ,inputs_model_fa ], - outputs=[output_transcribe1_fa ,output_transcribe1_fa1,output_transcribe1_fa2 ] ) - - transcribe_audio1_go.click(fn=g_rec, - inputs=inputs_speech1 , - outputs=output_transcribe1 ) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/Kedreamix/YoloGesture/app.py b/spaces/Kedreamix/YoloGesture/app.py deleted file mode 100644 index 20ae4e3baa55f56d62ddbfafe0bd273589afab2b..0000000000000000000000000000000000000000 --- a/spaces/Kedreamix/YoloGesture/app.py +++ /dev/null @@ -1,400 +0,0 @@ -"""Create an Object Detection Web App using PyTorch and Streamlit.""" -# import libraries -from PIL import Image -from torchvision import models, transforms -import torch -import streamlit as st -from yolo import YOLO -import os -import urllib -import numpy as np -from streamlit_webrtc import webrtc_streamer, WebRtcMode, RTCConfiguration -import av -# 设置网页的icon -st.set_page_config(page_title='Gesture Detector', page_icon='✌', - layout='centered', initial_sidebar_state='expanded') - -RTC_CONFIGURATION = RTCConfiguration( - { - "RTCIceServer": [{ - "urls": ["stun:stun.l.google.com:19302"], - "username": "pikachu", - "credential": "1234", - }] - } -) -def main(): - # Render the readme as markdown using st.markdown. - readme_text = st.markdown(open("instructions.md",encoding='utf-8').read()) - - - # Once we have the dependencies, add a selector for the app mode on the sidebar. - st.sidebar.title("What to do") - app_mode = st.sidebar.selectbox("Choose the app mode", - ["Show instructions", "Run the app", "Show the source code"]) - if app_mode == "Show instructions": - st.sidebar.success('To continue select "Run the app".') - elif app_mode == "Show the source code": - readme_text.empty() - st.code(open("app.py",encoding='utf-8').read()) - elif app_mode == "Run the app": - # Download external dependencies. - for filename in EXTERNAL_DEPENDENCIES.keys(): - download_file(filename) - - readme_text.empty() - run_the_app() - -# External files to download. 
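-# Each entry maps a checkpoint filename to its GitHub release URL and expected size
-# in bytes; download_file() uses the recorded size to decide whether an already
-# downloaded file is complete and can be skipped.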
-EXTERNAL_DEPENDENCIES = { - "yolov4_tiny.pth": { - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_tiny.pth", - "size": 23631189 - }, - "yolov4_SE.pth": { - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_SE.pth", - "size": 23806027 - }, - "yolov4_CBAM.pth":{ - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_CBAM.pth", - "size": 23981478 - }, - "yolov4_ECA.pth":{ - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_ECA.pth", - "size": 23632688 - }, - "yolov4_weights_ep150_608.pth":{ - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_weights_ep150_608.pth", - "size": 256423031 - }, - "yolov4_weights_ep150_416.pth":{ - "url": "https://github.com/Kedreamix/YoloGesture/releases/download/v1.0/yolov4_weights_ep150_416.pth", - "size": 256423031 - }, -} - - -# This file downloader demonstrates Streamlit animation. -def download_file(file_path): - # Don't download the file twice. (If possible, verify the download using the file length.) - if os.path.exists(file_path): - if "size" not in EXTERNAL_DEPENDENCIES[file_path]: - return - elif os.path.getsize(file_path) == EXTERNAL_DEPENDENCIES[file_path]["size"]: - return - # print(os.path.getsize(file_path)) - # These are handles to two visual elements to animate. - weights_warning, progress_bar = None, None - try: - weights_warning = st.warning("Downloading %s..." % file_path) - progress_bar = st.progress(0) - with open(file_path, "wb") as output_file: - with urllib.request.urlopen(EXTERNAL_DEPENDENCIES[file_path]["url"]) as response: - length = int(response.info()["Content-Length"]) - counter = 0.0 - MEGABYTES = 2.0 ** 20.0 - while True: - data = response.read(8192) - if not data: - break - counter += len(data) - output_file.write(data) - - # We perform animation by overwriting the elements. - weights_warning.warning("Downloading %s... (%6.2f/%6.2f MB)" % - (file_path, counter / MEGABYTES, length / MEGABYTES)) - progress_bar.progress(min(counter / length, 1.0)) - except Exception as e: - print(e) - # Finally, we remove these visual elements by calling .empty(). - finally: - if weights_warning is not None: - weights_warning.empty() - if progress_bar is not None: - progress_bar.empty() - -# This is the main app app itself, which appears when the user selects "Run the app". -def run_the_app(): - class Config(): - def __init__(self, weights = 'yolov4_tiny.pth', tiny = True, phi = 0, shape = 416,nms_iou = 0.3, confidence = 0.5): - self.weights = weights - self.tiny = tiny - self.phi = phi - self.cuda = False - self.shape = shape - self.confidence = confidence - self.nms_iou = nms_iou - # set title of app - st.markdown('

✌ Gesture Detection

', - unsafe_allow_html=True) - st.sidebar.markdown("# Gesture Detection on?") - activities = ["Example","Image", "Camera", "FPS", "Heatmap","Real Time", "Video"] - choice = st.sidebar.selectbox("Choose among the given options:", activities) - phi = st.sidebar.selectbox("yolov4-tiny 使用的自注意力模式:",('0tiny','1SE','2CABM','3ECA')) - print("") - - tiny = st.sidebar.checkbox('是否使用 yolov4 tiny 模型') - if not tiny: - shape = st.sidebar.selectbox("Choose shape to Input:", [416,608]) - conf,nms = object_detector_ui() - @st.cache_data - def get_yolo(tiny,phi,conf,nms,shape=416): - weights = 'yolov4_tiny.pth' - if tiny: - if phi == '0tiny': - weights = 'yolov4_tiny.pth' - elif phi == '1SE': - weights = 'yolov4_SE.pth' - elif phi == '2CABM': - weights = 'yolov4_CBAM.pth' - elif phi == '3ECA': - weights = 'yolov4_ECA.pth' - else: - if shape == 608: - weights = 'yolov4_weights_ep150_608.pth' - elif shape == 416: - weights = 'yolov4_weights_ep150_416.pth' - opt = Config(weights = weights, tiny = tiny , phi = int(phi[0]), shape = shape,nms_iou = nms, confidence = conf) - yolo = YOLO(opt) - return yolo - - if tiny: - yolo = get_yolo(tiny, phi, conf, nms) - st.write("YOLOV4 tiny 模型加载完毕") - else: - yolo = get_yolo(tiny, phi, conf, nms, shape) - st.write("YOLOV4 模型加载完毕") - - if choice == 'Image': - detect_image(yolo) - elif choice =='Camera': - detect_camera(yolo) - elif choice == 'FPS': - detect_fps(yolo) - elif choice == "Heatmap": - detect_heatmap(yolo) - elif choice == "Example": - detect_example(yolo) - elif choice == "Real Time": - detect_realtime(yolo) - elif choice == "Video": - detect_video(yolo) - - - -# This sidebar UI lets the user select parameters for the YOLO object detector. -def object_detector_ui(): - st.sidebar.markdown("# Model") - confidence_threshold = st.sidebar.slider("Confidence threshold", 0.0, 1.0, 0.5, 0.01) - overlap_threshold = st.sidebar.slider("Overlap threshold", 0.0, 1.0, 0.3, 0.01) - return confidence_threshold, overlap_threshold - -def predict(image,yolo): - """Return predictions. 
- - Parameters - ---------- - :param image: uploaded image - :type image: jpg - :rtype: list - :return: none - """ - crop = False - count = False - try: - # image = Image.open(image) - r_image = yolo.detect_image(image, crop = crop, count=count) - transform = transforms.Compose([transforms.ToTensor()]) - result = transform(r_image) - st.image(result.permute(1,2,0).numpy(), caption = 'Processed Image.', use_column_width = True) - except Exception as e: - print(e) - -def fps(image,yolo): - test_interval = 50 - tact_time = yolo.get_FPS(image, test_interval) - st.write(str(tact_time) + ' seconds, ', str(1/tact_time),'FPS, @batch_size 1') - return tact_time - # print(str(tact_time) + ' seconds, ' + str(1/tact_time) + 'FPS, @batch_size 1') - - -def detect_image(yolo): - # enable users to upload images for the model to make predictions - file_up = st.file_uploader("Upload an image", type = ["jpg","png","jpeg"]) - classes = ["up","down","left","right","front","back","clockwise","anticlockwise"] - class_to_idx = {cls: idx for (idx, cls) in enumerate(classes)} - st.sidebar.markdown("See the model preformance and play with it") - if file_up is not None: - with st.spinner(text='Preparing Image'): - # display image that user uploaded - image = Image.open(file_up) - st.image(image, caption = 'Uploaded Image.', use_column_width = True) - st.balloons() - detect = st.button("开始检测Image") - if detect: - st.write("") - st.write("Just a second ...") - predict(image,yolo) - st.balloons() - - - -def detect_camera(yolo): - picture = st.camera_input("Take a picture") - if picture: - filters_to_funcs = { - "No filter": predict, - "Heatmap": heatmap, - "FPS": fps, - } - filters = st.selectbox("...and now, apply a filter!", filters_to_funcs.keys()) - image = Image.open(picture) - with st.spinner(text='Preparing Image'): - filters_to_funcs[filters](image,yolo) - st.balloons() - -def detect_fps(yolo): - file_up = st.file_uploader("Upload an image", type = ["jpg","png","jpeg"]) - classes = ["up","down","left","right","front","back","clockwise","anticlockwise"] - class_to_idx = {cls: idx for (idx, cls) in enumerate(classes)} - st.sidebar.markdown("See the model preformance and play with it") - if file_up is not None: - # display image that user uploaded - image = Image.open(file_up) - st.image(image, caption = 'Uploaded Image.', use_column_width = True) - st.balloons() - detect = st.button("开始检测 FPS") - if detect: - with st.spinner(text='Preparing Image'): - st.write("") - st.write("Just a second ...") - tact_time = fps(image,yolo) - # st.write(str(tact_time) + ' seconds, ', str(1/tact_time),'FPS, @batch_size 1') - st.balloons() - -def heatmap(image,yolo): - heatmap_save_path = "heatmap_vision.png" - yolo.detect_heatmap(image, heatmap_save_path) - img = Image.open(heatmap_save_path) - transform = transforms.Compose([transforms.ToTensor()]) - result = transform(img) - st.image(result.permute(1,2,0).numpy(), caption = 'Processed Image.', use_column_width = True) - -def detect_heatmap(yolo): - file_up = st.file_uploader("Upload an image", type = ["jpg","png","jpeg"]) - classes = ["up","down","left","right","front","back","clockwise","anticlockwise"] - class_to_idx = {cls: idx for (idx, cls) in enumerate(classes)} - st.sidebar.markdown("See the model preformance and play with it") - if file_up is not None: - # display image that user uploaded - image = Image.open(file_up) - st.image(image, caption = 'Uploaded Image.', use_column_width = True) - st.balloons() - detect = st.button("开始检测 heatmap") - if detect: - with 
st.spinner(text='Preparing Heatmap'): - st.write("") - st.write("Just a second ...") - heatmap(image,yolo) - st.balloons() - -def detect_example(yolo): - st.sidebar.title("Choose an Image as a example") - images = os.listdir('./img') - images.sort() - image = st.sidebar.selectbox("Image Name", images) - st.sidebar.markdown("See the model preformance and play with it") - image = Image.open(os.path.join('img',image)) - st.image(image, caption = 'Choose Image.', use_column_width = True) - st.balloons() - detect = st.button("开始检测Image") - if detect: - st.write("") - st.write("Just a second ...") - predict(image,yolo) - st.balloons() - -def detect_realtime(yolo): - - class VideoProcessor: - def recv(self, frame): - img = frame.to_ndarray(format="bgr24") - img = Image.fromarray(img) - crop = False - count = False - r_image = yolo.detect_image(img, crop = crop, count=count) - transform = transforms.Compose([transforms.ToTensor()]) - result = transform(r_image) - result = result.permute(1,2,0).numpy() - result = (result * 255).astype(np.uint8) - return av.VideoFrame.from_ndarray(result, format="bgr24") - - webrtc_ctx = webrtc_streamer( - key="example", - mode=WebRtcMode.SENDRECV, - rtc_configuration=RTC_CONFIGURATION, - media_stream_constraints={"video": True, "audio": False}, - async_processing=False, - video_processor_factory=VideoProcessor - ) - -import cv2 -import time -def detect_video(yolo): - file_up = st.file_uploader("Upload a video", type = ["mp4"]) - print(file_up) - classes = ["up","down","left","right","front","back","clockwise","anticlockwise"] - - if file_up is not None: - video_path = 'video.mp4' - st.video(file_up) - with open(video_path, 'wb') as f: - f.write(file_up.read()) - detect = st.button("开始检测 Video") - - if detect: - video_save_path = 'video2.mp4' - # display image that user uploaded - capture = cv2.VideoCapture(video_path) - - video_fps = st.slider("Video FPS", 5, 30, int(capture.get(cv2.CAP_PROP_FPS)), 1) - fourcc = cv2.VideoWriter_fourcc(*'XVID') - size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))) - out = cv2.VideoWriter(video_save_path, fourcc, video_fps, size) - - - - while(True): - # 读取某一帧 - ref, frame = capture.read() - if not ref: - break - # 转变成Image - # frame = Image.fromarray(np.uint8(frame)) - # 格式转变,BGRtoRGB - frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) - # 转变成Image - frame = Image.fromarray(np.uint8(frame)) - # 进行检测 - frame = np.array(yolo.detect_image(frame)) - # RGBtoBGR满足opencv显示格式 - frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR) - - # print("fps= %.2f"%(fps)) - # frame = cv2.putText(frame, "fps= %.2f"%(fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) - out.write(frame) - - out.release() - capture.release() - print("Save processed video to the path :" + video_save_path) - - with open(video_save_path, "rb") as file: - btn = st.download_button( - label="Download Video", - data=file, - file_name="video.mp4", - ) - st.balloons() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmpl/models/pler/gpt_pler.py b/spaces/KyanChen/RSPrompter/mmpl/models/pler/gpt_pler.py deleted file mode 100644 index 66a07c36fab2f526803f895ca59079b3fc707e16..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/models/pler/gpt_pler.py +++ /dev/null @@ -1,34 +0,0 @@ -from typing import Any - -import torch -import torch.nn as nn -from mmpl.registry import MODELS -from ..builder import build_backbone, build_loss -from .base_pler import BasePLer 
-from mmpl.structures import ClsDataSample -from .base import BaseClassifier -import lightning.pytorch as pl -import torch.nn.functional as F - - -@MODELS.register_module() -class GPTPLer(BasePLer): - def __init__(self, - backbone, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), - *args, **kwargs): - super().__init__(*args, **kwargs) - self.backbone = build_backbone(backbone) - self.loss = build_loss(loss) - - def training_step(self, batch, batch_idx): - x, gt_label = batch['x'], batch['gt_label'] - outputs = self(input_ids=x, labels=gt_label) - loss, logits = outputs['loss'], outputs['logits'] - return loss - - def forward(self, *args: Any, **kwargs: Any) -> Any: - return self.backbone(*args, **kwargs) - - def validation_step(self, batch, batch_idx): - pass diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/sun397.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/sun397.py deleted file mode 100644 index ac7e9efcca0ad8bdfdec5fe90afa60ed4f20fc91..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/sun397.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -from mmengine import get_file_backend, list_from_file - -from mmpretrain.registry import DATASETS -from .base_dataset import BaseDataset -from .categories import SUN397_CATEGORIES - -# Note that some images are not a jpg file although the name ends -# with jpg and therefore cannot be read properly. So we provide -# a list to skip these files. -INVALID = [ - '/a/assembly_line/sun_ajckcfldgdrdjogj.jpg', - '/a/auto_factory/sun_apfsprenzdnzbhmt.jpg', - '/b/baggage_claim/sun_avittiqqaiibgcau.jpg', - '/b/batters_box/sun_alqlfpgtbgggezyr.jpg', - '/b/bow_window/indoor/sun_ahsholsagvlrsboa.jpg', - '/b/bow_window/indoor/sun_aioomcoujmmcxkkx.jpg', - '/b/bow_window/outdoor/sun_atgtjdpqikjmllth.jpg', - '/c/carrousel/sun_atsgphqympojgxnc.jpg', - '/c/carrousel/sun_auzitjuirwolazns.jpg', - '/c/church/outdoor/sun_boagasgfltequmal.jpg', - '/c/church/outdoor/sun_brhmnwzzbkphcvfo.jpg', - '/c/church/outdoor/sun_byjkqzybxpjnuofa.jpg', - '/c/corridor/sun_aznefxvocwpgimko.jpg', - '/d/dentists_office/sun_aaefsoauqlcsihou.jpg', - '/d/diner/indoor/sun_apswilaujhntrybg.jpg', - '/e/elevator/door/sun_aaudobqlphijkjdv.jpg', - '/f/fastfood_restaurant/sun_axeniwtesffxqedr.jpg', - '/f/fire_station/sun_bjyapttwilyyuxqm.jpg', - '/f/fountain/sun_axgmpbdyvqhtkhee.jpg', - '/h/hospital_room/sun_ahokhhxjiclpxqqa.jpg', - '/o/oast_house/sun_bqsrrygxyrutgjve.jpg', - '/r/restaurant_patio/sun_aurwypviprwycame.jpg', - '/s/ski_resort/sun_bplmntyzoiobcqhp.jpg', - '/w/wine_cellar/bottle_storage/sun_afmzwxkzmxkbamqi.jpg', - '/w/wine_cellar/bottle_storage/sun_ahyymswdjejrbhyb.jpg', - '/w/wine_cellar/bottle_storage/sun_avnttpxamufejbfe.jpg', - '/a/archive/sun_awgsrbljlsvhqjij.jpg', - '/a/art_school/sun_aabogqsjulyvmcse.jpg', - '/a/art_school/sun_apnzojafyvkariue.jpg', - '/b/ball_pit/sun_atjhwqngtoeuwhso.jpg', - '/b/bow_window/indoor/sun_asxvsqbexmmtqmht.jpg', - '/b/bow_window/indoor/sun_abeugxecxrwzmffp.jpg', - '/b/bow_window/outdoor/sun_auwcqhrtzkgihvlv.jpg', - '/b/bow_window/outdoor/sun_apnvdyecnjjmcuhi.jpg', - '/c/childs_room/sun_alggivksjwwiklmt.jpg', - '/c/control_tower/outdoor/sun_avbcxakrvpomqdgr.jpg', - '/d/diner/indoor/sun_ajmzozstvsxisvgx.jpg', - '/e/elevator/door/sun_aaqsyluqbluugqgy.jpg', - '/f/fastfood_restaurant/sun_aevchxlxoruhxgrb.jpg', - '/f/firing_range/indoor/sun_affrzvahwjorpalo.jpg', - '/f/formal_garden/sun_bjvrlaeatjufekft.jpg', - 
'/g/garage/indoor/sun_akbocuwclkxqlofx.jpg', - '/g/greenhouse/indoor/sun_addirvgtxfbndlwf.jpg', - '/k/kindergarden_classroom/sun_ajtpaahilrqzarri.jpg', - '/l/laundromat/sun_afrrjykuhhlwiwun.jpg', - '/m/music_studio/sun_bsntklkmwqgnjrjj.jpg', - '/t/track/outdoor/sun_aophkoiosslinihb.jpg', - '/a/archive/sun_aegmzltkiwyevpwa.jpg', - '/a/auto_factory/sun_aybymzvbxgvcrwgn.jpg', - '/b/baggage_claim/sun_atpmiqmnxjpgqsxi.jpg', - '/b/baggage_claim/sun_ajffcdpsvgqfzoxx.jpg', - '/b/bamboo_forest/sun_ausmxphosyahoyjo.jpg', - '/b/batters_box/sun_aaeheulsicxtxnbu.jpg', - '/c/carrousel/sun_arjrjcxemhttubqz.jpg', - '/c/chicken_coop/outdoor/sun_abcegmmdbizqkpgh.jpg', - '/c/control_tower/outdoor/sun_axhjfpkxdvqdfkyr.jpg', - '/d/diner/indoor/sun_apaotiublwqeowck.jpg', - '/f/fastfood_restaurant/sun_anexashcgmxdbmxq.jpg', - '/l/landing_deck/sun_aizahnjfkuurjibw.jpg', - '/n/nuclear_power_plant/outdoor/sun_aoblfvgyleweqanr.jpg', - '/w/waiting_room/sun_aicytusmthfvqcwc.jpg', - '/b/bow_window/indoor/sun_asmvdfnjlulewkpr.jpg', - '/b/bus_interior/sun_adhktvidwzmodeou.jpg', - '/c/catacomb/sun_algnawesgjzzmcqd.jpg', - '/c/church/outdoor/sun_baihxlseimcsdhdx.jpg', - '/d/diner/indoor/sun_agoyalzcawgxodbm.jpg', - '/e/elevator_shaft/sun_awaitimkinrjaybl.jpg', - '/f/fastfood_restaurant/sun_aplvzfbmtqtbsvbx.jpg', - '/g/greenhouse/indoor/sun_bkccvyfpwetwjuhk.jpg', - '/c/car_interior/backseat/sun_adexwfoqdyhowxpu.jpg', - '/c/church/outdoor/sun_blmmweiumednscuf.jpg', - '/f/fire_station/sun_bibntbsuunbsdrum.jpg', - '/g/game_room/sun_aopfaqlllpvzhrak.jpg', - '/u/underwater/coral_reef/sun_biiueajvszaxqopo.jpg', - '/a/airplane_cabin/sun_arqyikigkyfpegug.jpg', - '/b/badminton_court/indoor/sun_amppvxecgtjpfold.jpg', - '/c/carrousel/sun_anxtrtieimkpmhvk.jpg', - '/c/computer_room/sun_aebgvpgtwoqbfyvl.jpg', - '/f/fire_escape/sun_atbraxuwwlvdoolv.jpg', - '/k/kasbah/sun_abxkkoielpavsouu.jpg', - '/t/tower/sun_bccqnzcvqkiwicjt.jpg', - '/a/archive/sun_afngadshxudodkct.jpg', - '/b/bow_window/indoor/sun_awnrlipyxpgxxgxz.jpg', - '/c/control_tower/outdoor/sun_arohngcbtsvbthho.jpg', - '/f/fire_station/sun_brbskkfgghbfvgkk.jpg', - '/r/restaurant_patio/sun_amjfbqzfgxarrpec.jpg', - '/v/vineyard/sun_bdxhnbgbnolddswz.jpg', - '/b/baggage_claim/sun_axrtsmillrglugia.jpg', - '/d/diner/indoor/sun_alaqevbwpjaqqdqz.jpg', - '/l/landing_deck/sun_acodgoamhgnnbmvr.jpg', - '/c/carrousel/sun_adsafgyrinnekycc.jpg', - '/c/church/outdoor/sun_bzqhuwshtdgakkay.jpg', - '/c/closet/sun_absahzamlrylkxyn.jpg', - '/f/fire_escape/sun_acdthenaosuqcoqn.jpg', - '/b/butchers_shop/sun_asrdgbefoszenfex.jpg', - '/c/church/outdoor/sun_bzfyucfrdigaqneg.jpg', - '/c/church/outdoor/sun_byzxhknqrejdajxi.jpg', - '/c/cockpit/sun_ajkulpqauavrmxae.jpg', - '/l/living_room/sun_aefoqbeatyufobtx.jpg', - '/s/supermarket/sun_attvxbzocurnddbz.jpg', - '/c/closet/sun_aqnutmwfkypmrnfy.jpg', - '/f/fire_station/sun_bttrtzktpbymxkmf.jpg', - '/s/shopping_mall/indoor/sun_avwzjsijaxnwuzjx.jpg', - '/w/windmill/sun_blvczkyqbmabzeej.jpg', - '/c/chicken_coop/outdoor/sun_amaonsnnkskxwmrj.jpg', - '/s/swimming_pool/outdoor/sun_bslaihiqlhfewtzn.jpg', - '/u/underwater/coral_reef/sun_bhcrnmvbgnkvcvkr.jpg', - '/d/dining_room/sun_azlxdhiajwrhaivq.jpg', - '/c/church/outdoor/sun_bnunxbznqnvgeykx.jpg', - '/c/corridor/sun_aspwpqqlcwzfanvl.jpg', - '/r/restaurant_patio/sun_awcbpizjbudjvrhs.jpg', - '/b/ball_pit/sun_avdnmemjrgrbkwjm.jpg', -] - - -@DATASETS.register_module() -class SUN397(BaseDataset): - """The SUN397 Dataset. - - Support the `SUN397 Dataset `_ Dataset. 
- After downloading and decompression, the dataset directory structure is as follows. - - SUN397 dataset directory: :: - - SUN397 - ├── SUN397 - │ ├── a - │ │ ├── abbey - │ | | ├── sun_aaalbzqrimafwbiv.jpg - │ | | └── ... - │ │ ├── airplane_cabin - │ | | ├── sun_aadqdkqaslqqoblu.jpg - │ | | └── ... - │ | └── ... - │ ├── b - │ │ └── ... - │ ├── c - │ │ └── ... - │ └── ... - └── Partitions - ├── ClassName.txt - ├── Training_01.txt - ├── Testing_01.txt - └── ... - - Args: - data_root (str): The root directory for Stanford Cars dataset. - split (str, optional): The dataset split, supports "train" and "test". - Default to "train". - - Examples: - >>> from mmpretrain.datasets import SUN397 - >>> train_dataset = SUN397(data_root='data/SUN397', split='train') - >>> train_dataset - Dataset SUN397 - Number of samples: 19824 - Number of categories: 397 - Root of dataset: data/SUN397 - >>> test_dataset = SUN397(data_root='data/SUN397', split='test') - >>> test_dataset - Dataset SUN397 - Number of samples: 19829 - Number of categories: 397 - Root of dataset: data/SUN397 - """ # noqa: E501 - - METAINFO = {'classes': SUN397_CATEGORIES} - - def __init__(self, data_root: str, split: str = 'train', **kwargs): - - splits = ['train', 'test'] - assert split in splits, \ - f"The split must be one of {splits}, but get '{split}'" - self.split = split - - self.backend = get_file_backend(data_root, enable_singleton=True) - if split == 'train': - ann_file = self.backend.join_path('Partitions', 'Training_01.txt') - else: - ann_file = self.backend.join_path('Partitions', 'Testing_01.txt') - - data_prefix = 'SUN397' - test_mode = split == 'test' - - super(SUN397, self).__init__( - ann_file=ann_file, - data_root=data_root, - test_mode=test_mode, - data_prefix=data_prefix, - **kwargs) - - def load_data_list(self): - pairs = list_from_file(self.ann_file) - data_list = [] - for pair in pairs: - if pair in INVALID: - continue - img_path = self.backend.join_path(self.img_prefix, pair[1:]) - items = pair.split('/') - class_name = '_'.join(items[2:-1]) - gt_label = self.METAINFO['classes'].index(class_name) - info = dict(img_path=img_path, gt_label=gt_label) - data_list.append(info) - - return data_list - - def extra_repr(self) -> List[str]: - """The extra repr information of the dataset.""" - body = [ - f'Root of dataset: \t{self.data_root}', - ] - return body diff --git a/spaces/Lbin123/Lbingo/src/components/chat-image.tsx b/spaces/Lbin123/Lbingo/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage 
}: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? '' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
-
panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
-
-
-
-

添加图像

-
-
- paste -
- e.stopPropagation()} - /> -
-
-
- - -
-
- {panel === 'camera-mode' &&
-
-
-
-
-
-
-
} -
-
- ) -} diff --git a/spaces/Linkthat/IntentClassification/app.py b/spaces/Linkthat/IntentClassification/app.py deleted file mode 100644 index 7a1364f880715d47a34572bc742f31655c4e23ff..0000000000000000000000000000000000000000 --- a/spaces/Linkthat/IntentClassification/app.py +++ /dev/null @@ -1,16 +0,0 @@ -from transformers import pipeline -import gradio as gr -import os -token = os.environ["token"] - -classifier = pipeline("zero-shot-classification", - model="joeddav/xlm-roberta-large-xnli",use_auth_token=token) - -def classify(text, intent_labels): - res = classifier(text, intent_labels) - results = {k: v for k, v in zip(res["labels"], res["scores"])} - return results - -# input is two text boxes, one for the text and one for the labels - -gr.Interface(fn=classify, inputs=["textbox", "textbox"], outputs="label").launch() diff --git a/spaces/Logspace/LangflowView/README.md b/spaces/Logspace/LangflowView/README.md deleted file mode 100644 index 8193c867f90b958c59758a2526d632477a32a1c8..0000000000000000000000000000000000000000 --- a/spaces/Logspace/LangflowView/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: LangFlow -emoji: ⛓️ -colorFrom: indigo -colorTo: purple -sdk: docker -pinned: true -license: mit -duplicated_from: Logspace/Langflow ---- - -⛓️[LangFlow](https://github.com/logspace-ai/langflow) is an effortless way to experiment and prototype [LangChain](https://github.com/hwchase17/langchain) pipelines. Head over to our GitHub repo and join the Discord Server to follow what the community is building. -
-LangFlow Repo -LangFlow Repo -
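The Linkthat/IntentClassification app removed above wraps the Hugging Face `zero-shot-classification` pipeline (model `joeddav/xlm-roberta-large-xnli`) in a two-textbox Gradio interface. As a minimal sketch only (the intent labels below are hypothetical, and the app's `use_auth_token` handling is omitted), the same pipeline can be exercised directly:

```python
from transformers import pipeline

# Same model the deleted app.py loads; no auth token is assumed here.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

# Hypothetical intent labels, purely for illustration.
labels = ["billing question", "technical support", "cancellation request"]

result = classifier("My invoice shows a charge I don't recognise.", labels)

# Build the same label-to-score mapping the app's classify() returns.
print({label: round(score, 3)
       for label, score in zip(result["labels"], result["scores"])})
```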
\ No newline at end of file diff --git a/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/app.py b/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/app.py deleted file mode 100644 index 69dd457a9cb227ebd8204d05a55dd6aaac680109..0000000000000000000000000000000000000000 --- a/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import math -from api_rubika import Bot -import datetime -from requests import get,post -from json import loads -math -bot=Bot("ivebflbphjsslquuxvrigxyittwgbtck") -list_message_seened = [] -time_reset = round(datetime.datetime.today().timestamp()) + 350 -guid_of_channel = "c0B2EER0a0585fad51fb41b9f958af38" -caption = "" -print("mmd") -while(2 > 1): - try: - chats_list:list = bot.get_updates_all_chats() - if chats_list != []: - for chat in chats_list: - access = chat['access'] - admins = open('admins.txt','r').read().split('\n') - m_id = chat['object_guid'] + chat['last_message']['message_id'] - if not m_id in list_message_seened and chat['object_guid'] in admins: - msg_data = bot.getMessagesInfo(chat['object_guid'],[chat['last_message']['message_id']])[0] - text:str = chat['last_message']['text'] - if text == "شروع" : - bot.sendMessage(chat['object_guid'],'ایدی کانال مورد نظر را وارد کنید') - elif text.startswith("@"): - id = text.split("@")[1] - guid_of_channel = bot.getInfoByUsername(id)['data']['channel']['channel_guid'] - bot.sendMessage(chat['object_guid'],'کپشن را وارد کنید') - elif text.startswith("cp ["): - caption = id = text.split("[")[1][:-1] - bot.sendMessage(chat['object_guid'],'فرمت فایل را وارد کنید') - elif text.startswith("-"): - new_name = f" {text}" - bot.sendMessage(chat['object_guid'],'پست مورد نظر را فوروارد کنید') - elif chat['abs_object']['type'] == 'User' and 'SendMessages' in access and (msg_data['type'] == 'FileInline' or msg_data['type'] == 'FileInlineCaption'): - bot.sendMessage(chat['object_guid'],'آپلود شروع شد') - start_time = datetime.datetime.now().timestamp() - fileID = msg_data['file_inline']['file_id'] - accessHashRec = msg_data['file_inline']['access_hash_rec'] - dc_id = msg_data['file_inline']['dc_id'] - size = msg_data['file_inline']['size'] - - - file_upload_data = bot.requestFile(new_name,size,msg_data['file_inline']['mime']) - header = { - 'auth':bot.auth, - 'file-id':str(fileID), - 'access-hash-rec':accessHashRec - } - server = "https://messenger"+str(dc_id)+".iranlms.ir/GetFile.ashx" - if size <= 131072: - header["start-index"], header["last-index"] = "0",str(size) - while True: - try: - part_data = get(url=server,headers=header).content - h = { - 'auth':bot.auth, - 'chunk-size':str(len(part_data)), - 'file-id':str(file_upload_data['id']), - 'access-hash-send':file_upload_data['access_hash_send'], - 'total-part':str(1), - 'part-number':str(1) - } - j = post(data=part_data,url=file_upload_data['upload_url'],headers=h).text - j = loads(j)['data']['access_hash_rec'] - break - except Exception as e: - print (e) - continue - else: - a = 1 - is_tweny_five = False - is_fifty = False - is_seventy_five = False - for i in range(0,size,131072): - while True: - try: - header["start-index"], header["last-index"] =str(i) if i == 0 else str(i+1), str(i+131072 if i+131072 <= size else size) - part_data = get(url=server,headers=header).content - total = size / 131072 - total += 1 - total = math.floor(total) - h = { - 'auth':bot.auth, - 'chunk-size':str(len(part_data)), - 'file-id':str(file_upload_data['id']), - 'access-hash-send':file_upload_data['access_hash_send'], - 'total-part':str(total), - 'part-number':str(a) - } - a +=1 - - 
j = post(data=part_data,url=file_upload_data['upload_url'],headers=h).text - if loads(j)['data'] != None and 'access_hash_rec' in loads(j)['data']: - j = loads(j)['data']['access_hash_rec'] - tweny_five = round(total / 4) - fifty = round(total / 2 ) - seventy_five = round(total * .75) - if a > tweny_five and is_tweny_five == False: - bot.sendMessage(chat['object_guid'],'25 درصد کار انجام شده') - is_tweny_five = True - elif a > fifty and is_fifty == False: - bot.sendMessage(chat['object_guid'],'50 درصد کار انجام شده') - is_fifty = True - elif a > seventy_five and is_seventy_five == False: - bot.sendMessage(chat['object_guid'],'75 درصد کار انجام شده') - is_seventy_five = True - break - except Exception as e: - continue - if j != None and type(j) == str: - bot.sendFile(guid_of_channel,file_upload_data['id'],msg_data['file_inline']['mime'],file_upload_data['dc_id'],j,new_name,size,text=caption) - bot.sendMessage(chat['object_guid'],'آپلود موفقیت آمیز بود \nزمان : '+ str(datetime.datetime.now().timestamp() - start_time) + ' (s)') - else: - bot.sendMessage(chat['object_guid'],'آپلود نشد') - list_message_seened.append(m_id) - except Exception as e: - print(e) - time_reset2 = round(datetime.datetime.today().timestamp()) - if list_message_seened != [] and time_reset2 > time_reset: - list_message_seened = [] - time_reset = round(datetime.datetime.today().timestamp()) + 350 \ No newline at end of file diff --git a/spaces/MWilinski/bot/api/question_answering/mocks.py b/spaces/MWilinski/bot/api/question_answering/mocks.py deleted file mode 100644 index a13af1ceec5a3201f1f11719a69b3c099edaad91..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/api/question_answering/mocks.py +++ /dev/null @@ -1,37 +0,0 @@ -from typing import Mapping, Optional, List, Any -import os -from langchain.llms.base import LLM - -class MockLocalBinaryModel(LLM): - """ - Mock Local Binary Model class, used for generating the string "a". - - Args: - model_id (str): The ID of the model to be mocked. - - Attributes: - model_path (str): The path to the model to be mocked. - llm (str): The string "a". - - Raises: - ValueError: If the model_path does not exist. - """ - - model_path: str = None - llm: str = "READY TO MOCK" - - def __init__(self, model_id: str = None): - super().__init__() - self.model_path = f'bot/question_answering/{model_id}' - - - def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: - return self.llm - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {"name_of_model": self.model_path} - - @property - def _llm_type(self) -> str: - return self.model_path diff --git a/spaces/MWilinski/bot/bot/discord_client/client.py b/spaces/MWilinski/bot/bot/discord_client/client.py deleted file mode 100644 index 3ab201b50e2e04cecbcbd41f44962bae8fa6daba..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/bot/discord_client/client.py +++ /dev/null @@ -1,132 +0,0 @@ -import json -import requests -from urllib.parse import quote -import discord -from typing import List - -from bot.logger import logger -from bot.discord_client.utils import split_text_into_chunks - - -class DiscordClient(discord.Client): - """ - Discord Client class, used for interacting with a Discord server. - - Args: - qa_service_url (str): The URL of the question answering service. - num_last_messages (int, optional): The number of previous messages to use as context for generating answers. - Defaults to 5. 
- use_names_in_context (bool, optional): Whether to include user names in the message context. Defaults to True. - enable_commands (bool, optional): Whether to enable commands for the bot. Defaults to True. - - Attributes: - qa_service_url (str): The URL of the question answering service. - num_last_messages (int): The number of previous messages to use as context for generating answers. - use_names_in_context (bool): Whether to include user names in the message context. - enable_commands (bool): Whether to enable commands for the bot. - max_message_len (int): The maximum length of a message. - system_prompt (str): The system prompt to be used. - - """ - def __init__( - self, - qa_service_url: str, - num_last_messages: int = 5, - use_names_in_context: bool = True, - enable_commands: bool = True, - debug: bool = False - ): - logger.info('Initializing Discord client...') - intents = discord.Intents.all() - intents.message_content = True - super().__init__(intents=intents, command_prefix='!') - - assert num_last_messages >= 1, \ - 'The number of last messages in context should be at least 1' - - self.qa_service_url: str = qa_service_url - self.num_last_messages: int = num_last_messages - self.use_names_in_context: bool = use_names_in_context - self.enable_commands: bool = enable_commands - self.debug: bool = debug - self.min_messgae_len: int = 1800 - self.max_message_len: int = 2000 - - - async def on_ready(self): - """ - Callback function to be called when the client is ready. - """ - logger.info('Successfully logged in as: {0.user}'.format(self)) - await self.change_presence(activity=discord.Game(name='Chatting...')) - - - async def get_last_messages(self, message) -> List[str]: - """ - Method to fetch recent messages from a message's channel. - - Args: - message (Message): The discord Message object used to identify the channel. - - Returns: - List[str]: Reversed list of recent messages from the channel, - excluding the input message. Messages may be prefixed with the author's name - if `self.use_names_in_context` is True. - """ - last_messages: List[str] = [] - async for msg in message.channel.history( - limit=self.num_last_messages): - if self.use_names_in_context: - last_messages.append(f'{msg.author}: {msg.content}') - else: - last_messages.append(msg.content) - last_messages.reverse() - last_messages.pop() # remove last message from context - return last_messages - - - async def send_message(self, message, answer: str, sources: str): - chunks = split_text_into_chunks( - text=answer, - split_characters=[". ", ", ", "\n"], - min_size=self.min_messgae_len, - max_size=self.max_message_len - ) - for chunk in chunks: - await message.channel.send(chunk) - await message.channel.send(sources) - - - async def on_message(self, message): - """ - Callback function to be called when a message is received. - - Args: - message (discord.Message): The received message. 
- """ - if message.author == self.user: - return - if self.enable_commands and message.content.startswith('!'): - if message.content == '!clear': - await message.channel.purge() - return - - last_messages = await self.get_last_messages(message) - context = '\n'.join(last_messages) - - logger.info('Received message: {0.content}'.format(message)) - question_encoded = quote(message.content, safe='') - context_encoded = quote(context, safe='') - url = \ - f'{self.qa_service_url}/' \ - f'?question={question_encoded}' \ - f'?&messgages_context={context_encoded}' - response = requests.get(url) - response.raise_for_status() - response = json.loads(response.content) - - logger.info('Sending response: {0}'.format(response)) - try: - await self.send_message(message, response['answer'], response['sources']) - except Exception as e: - logger.error('Failed to send response: {0}'.format(e)) diff --git a/spaces/Manimaran/pokemon-classifier/app.py b/spaces/Manimaran/pokemon-classifier/app.py deleted file mode 100644 index c3a455ba11eb837b6c36e211e606247c2de9f998..0000000000000000000000000000000000000000 --- a/spaces/Manimaran/pokemon-classifier/app.py +++ /dev/null @@ -1,34 +0,0 @@ -from fastai.vision import Path, load_learner, open_image -from io import BytesIO -import gradio as gr -from pathlib import Path -from huggingface_hub import hf_hub_download - -# Download file - -Path('./model').mkdir(exist_ok=True) - -model_path = hf_hub_download(repo_id='Manimaran/pokemon_classifer', filename='pokemon_v7_resnet34_st2.pkl', cache_dir='./model') - -sample_img_paths = [str(p) for p in Path('./samples').glob('*.png')] - -path = Path(__file__).parent - -learn = load_learner(path=path, file=model_path) - -def analyze(img_bytes): - img = open_image(BytesIO(img_bytes.read())) - prediction = learn.predict(img)[0] - return str(prediction) - - -iface = gr.Interface( - fn=analyze, - inputs=gr.inputs.Image(type='file', label='Pokemon Image'), - outputs='text', - examples=sample_img_paths, - title='Pokemon Classifier', - description='This classifier can name pokemons upto 7th gen!, try one of the samples below or try one from http://gearoid.me/pokemon' -) - -iface.launch() diff --git a/spaces/MarcusSu1216/XingTong/inference_main.py b/spaces/MarcusSu1216/XingTong/inference_main.py deleted file mode 100644 index b6c9ff8fc771c1bada0b04d59f0af4c87a524089..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/inference_main.py +++ /dev/null @@ -1,137 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-cl', '--clip', type=float, default=0, help='音频强制切片,默认0为自动切片,单位为秒/s') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src.wav"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, 
nargs='+', default=['nen'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0, help='聚类方案占比,范围0-1,若没有训练聚类模型则默认0即可') - parser.add_argument('-lg', '--linear_gradient', type=float, default=0, help='两段音频切片的交叉淡入长度,如果强制切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,单位为秒') - parser.add_argument('-fmp', '--f0_mean_pooling', type=bool, default=False, help='是否对F0使用均值滤波器(池化),对部分哑音有改善。注意,启动该选项会导致推理速度下降,默认关闭') - parser.add_argument('-eh', '--enhance', type=bool, default=False, help='是否使用NSF_HIFIGAN增强器,该选项对部分训练集少的模型有一定的音质增强效果,但是对训练好的模型有反面效果,默认关闭') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式') - parser.add_argument('-lgr', '--linear_gradient_retain', type=float, default=0.75, help='自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭') - parser.add_argument('-eak', '--enhancer_adaptive_key', type=int, default=0, help='使增强器适应更高的音域(单位为半音数)|默认为0') - - args = parser.parse_args() - - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - clip = args.clip - lg = args.linear_gradient - lgr = args.linear_gradient_retain - F0_mean_pooling = args.f0_mean_pooling - enhance = args.enhance - enhancer_adaptive_key = args.enhancer_adaptive_key - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path,enhance) - infer_tool.mkdir(["raw", "results"]) - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip*audio_sr) - lg_size = int(lg*audio_sr) - lg_size_r = int(lg_size*lgr) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(infer_tool.pad_array(_audio, length))) - continue - if per_size != 0: - datas = infer_tool.split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * svc_model.target_sample)) if clip!=0 else length - if clip!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling, - enhancer_adaptive_key = enhancer_adaptive_key - ) - _audio = out_audio.cpu().numpy() - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = infer_tool.pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - svc_model.clear_empty() - -if __name__ == '__main__': - main() diff --git a/spaces/Marshalls/testmtd/app.py b/spaces/Marshalls/testmtd/app.py deleted file mode 100644 index 3ccc069db2e525351b05580d2fb67a17a7f684b6..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/app.py +++ /dev/null @@ -1,137 +0,0 @@ -import os -from pathlib import Path -from tempfile import mkstemp, mkdtemp -import gradio as gr -from moviepy.editor import VideoFileClip, AudioFileClip -import shlex -import datetime -import subprocess -import time -import tempfile -import shutil -import os -import wave -end_time=1000 -allowed_medias = [".mp3", ".wav", ".ogg"] -def is_used(file_name): - try: - vHandle = file.CreateFile(file_name, file.GENERIC_READ, 0, None, file.OPEN_EXISTING, file.FILE_ATTRIBUTE_NORMAL, None) - return int(vHandle) == file.INVALID_HANDLE_VALUE - except: - return True - finally: - try: file.CloseHandle(vHandle) - except: pass -def execute_command(cmdstring, cwd=None, 
timeout=None, shell=False): - """执行一个SHELL命令 - 封装了subprocess的Popen方法, 支持超时判断,支持读取stdout和stderr - 参数: - cwd: 运行命令时更改路径,如果被设定,子进程会直接先更改当前路径到cwd - timeout: 超时时间,秒,支持小数,精度0.1秒 - shell: 是否通过shell运行 - Returns: return_code - Raises: Exception: 执行超时 - """ - global end_time - if shell: - cmdstring_list = cmdstring - else: - cmdstring_list = shlex.split(cmdstring) - if timeout: - end_time = datetime.datetime.now() + datetime.timedelta(seconds=timeout) - - # 没有指定标准输出和错误输出的管道,因此会打印到屏幕上; - sub = subprocess.Popen(cmdstring_list, cwd=cwd, stdin=subprocess.PIPE, shell=shell, bufsize=4096) - - # subprocess.poll()方法:检查子进程是否结束了,如果结束了,设定并返回码,放在subprocess.returncode变量中 - while sub.poll() is None: - time.sleep(0.1) - if timeout: - if end_time <= datetime.datetime.now(): - raise Exception("Timeout:%s" % cmdstring) - return str(sub.returncode) -def update(file,dclass): - #if dclass == "": - # raise gr.Error("请输入舞蹈类型.") - #if dappear == "": - # raise gr.Error("请输入人物外观.") - file_path = Path(file.name) - info = {} - info["size"] = os.path.getsize(file_path) - info["name"] = file_path.name - print(file_path.name) - file_extension = file_path.suffix - info["type"] = "audio" - audio = AudioFileClip(file.name) - info["duration"] = audio.duration - info["audio_channels"] = audio.nchannels - filename = file_path.name - print('##########################') - print(filename) - - if info["size"] > 100000000: - raise gr.Error( - "Please make sure all files are less than 100MB in size." - ) - audio.close() - ### - ### - temp_dir = tempfile.mkdtemp() - video_temp_dir=tempfile.mkdtemp() - shutil.copy(file_path,temp_dir) - for root,dirs,files in os.walk("./songs"): - for file1 in files: - path=os.path.join(root,file1) - shutil.copy(Path(path),temp_dir) - filenewname=filename.replace(".mp3","").replace(".wav","") - newdir=video_temp_dir.replace("/tmp/","") - execute_command("chmod +x ./feature_extraction/audio_feature_extraction_test.sh") - execute_command("chmod +x ./feature_extraction/script_to_list_filenames") - execute_command("chmod +x ./script_generate.sh") - print(execute_command("./feature_extraction/audio_feature_extraction_test.sh "+temp_dir+"/")) - print(execute_command("./script_generate.sh transflower_expmap_old "+filenewname+" "+newdir+" --generate_bvh --generate_video --data_dir="+temp_dir+"/ --seed=seed1 --max_length=400")) - print("./script_generate.sh transflower_expmap_old "+filenewname+" "+newdir+" --generate_bvh --generate_video --data_dir="+temp_dir+"/ --seed=seed1 --max_length=400") - print("###################newfie") - print(filenewname) - videopath=video_temp_dir+"/transflower_expmap_old/videos/"+filenewname+".mp4_music.mp4" - print(videopath) - return videopath - -# coding=utf-8 - - - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # 🏞智能编舞系统 - 队名:Three Dancers - 成员:蒙贤辉 侯金言 毛忠昊 - 推荐上传WAV格式文件 - """, - elem_id="header", - ) - with gr.Row(): - with gr.Column(): - user_file = gr.File( - file_count="single", label="音频文件", keep_filename=True, - file_types=allowed_medias - ) - dclass=gr.Dropdown(["流行", "民族", "混合"], label="舞蹈类型", info="") - #dappear=gr.Dropdown(["舞蹈服", "休闲服", "混合"], label="人物外观", info="") - #number = gr.Slider(minimum=-0, maximum=20, value=1, step=1, - # interactive=True, label="人物数量") - btn = gr.Button("Run", label="Run") - with gr.Column(): - generated_video = gr.Video( - interactive=False, label="舞蹈视频", include_audio=True - ) - # generated_command = gr.Markdown() - btn.click( - fn=update, - inputs=[user_file, dclass], - outputs=[generated_video] - ) -if __name__ == "__main__": - 
demo.launch() diff --git a/spaces/Marshalls/testmtd/models/cdvae.py b/spaces/Marshalls/testmtd/models/cdvae.py deleted file mode 100644 index fc69ee3c4268e6a451ab4bbd843919fe9161cf4b..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/models/cdvae.py +++ /dev/null @@ -1,544 +0,0 @@ -from math import log2, sqrt -import torch -from torch import nn, einsum -import torch.nn.functional as F - -from models.transformer import BasicTransformerModel, EncDecTransformerModel, EncDecXTransformer - -from axial_positional_embedding import AxialPositionalEmbedding -from einops import rearrange - -# from dalle_pytorch import distributed_utils -# from dalle_pytorch.vae import OpenAIDiscreteVAE -# from dalle_pytorch.vae import VQGanVAE1024 -# from dalle_pytorch.transformer import Transformer - -# helpers - -def exists(val): - return val is not None - -def default(val, d): - return val if exists(val) else d - -def always(val): - def inner(*args, **kwargs): - return val - return inner - -def is_empty(t): - return t.nelement() == 0 - -def masked_mean(t, mask, dim = 1): - t = t.masked_fill(~mask[:, :, None], 0.) - return t.sum(dim = 1) / mask.sum(dim = 1)[..., None] - -def eval_decorator(fn): - def inner(model, *args, **kwargs): - was_training = model.training - model.eval() - out = fn(model, *args, **kwargs) - model.train(was_training) - return out - return inner - -# sampling helpers - -def top_k(logits, thres = 0.5): - num_logits = logits.shape[-1] - k = max(int((1 - thres) * num_logits), 1) - val, ind = torch.topk(logits, k) - probs = torch.full_like(logits, float('-inf')) - probs.scatter_(1, ind, val) - return probs - -# discrete vae class - -class ResBlock(nn.Module): - def __init__(self, chan): - super().__init__() - self.net = nn.Sequential( - nn.Conv2d(chan, chan, 3, padding = 1), - nn.ReLU(), - nn.Conv2d(chan, chan, 3, padding = 1), - nn.ReLU(), - nn.Conv2d(chan, chan, 1) - ) - - def forward(self, x): - return self.net(x) + x - -class ConditionalDiscreteVAEVision(nn.Module): - def __init__( - self, - image_shape = (256,256), - num_tokens = 512, - codebook_dim = 512, - num_layers = 3, - num_resnet_blocks = 0, - hidden_dim = 64, - conditioning_dim = 64, - channels = 3, - smooth_l1_loss = False, - temperature = 0.9, - straight_through = False, - kl_div_loss_weight = 0., - normalization = ((0.5,) * 3, (0.5,) * 3) - ): - super().__init__() - assert log2(image_shape[0]).is_integer(), 'image size must be a power of 2' - assert log2(image_shape[1]).is_integer(), 'image size must be a power of 2' - assert num_layers >= 1, 'number of layers must be greater than or equal to 1' - has_resblocks = num_resnet_blocks > 0 - - self.image_shape = image_shape - self.num_tokens = num_tokens - self.num_layers = num_layers - self.temperature = temperature - self.straight_through = straight_through - self.codebook = nn.Embedding(num_tokens, codebook_dim) - - hdim = hidden_dim - - enc_chans = [hidden_dim] * num_layers - dec_chans = list(reversed(enc_chans)) - - enc_chans = [channels, *enc_chans] - - if not has_resblocks: - dec_init_chan = codebook_dim - else: - dec_init_chan = dec_chans[0] - dec_chans = [dec_init_chan, *dec_chans] - - enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans)) - - enc_layers = [] - dec_layers = [] - - for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io): - enc_layers.append(nn.Sequential(nn.Conv2d(enc_in, enc_out, 4, stride = 2, padding = 1), nn.ReLU())) - dec_layers.append(nn.Sequential(nn.ConvTranspose2d(dec_in, 
dec_out, 4, stride = 2, padding = 1), nn.ReLU())) - - for _ in range(num_resnet_blocks): - dec_layers.insert(0, ResBlock(dec_chans[1])) - enc_layers.append(ResBlock(enc_chans[-1])) - - if num_resnet_blocks > 0: - dec_layers.insert(0, nn.Conv2d(codebook_dim, dec_chans[1], 1)) - - enc_layers.append(nn.Conv2d(enc_chans[-1], num_tokens, 1)) - dec_layers.append(nn.Conv2d(dec_chans[-1], channels, 1)) - - self.encoder = nn.Sequential(*enc_layers) - self.decoder = nn.Sequential(*dec_layers) - - self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss - self.kl_div_loss_weight = kl_div_loss_weight - - # take care of normalization within class - self.normalization = normalization - - # self._register_external_parameters() - - # def _register_external_parameters(self): - # """Register external parameters for DeepSpeed partitioning.""" - # if ( - # not distributed_utils.is_distributed - # or not distributed_utils.using_backend( - # distributed_utils.DeepSpeedBackend) - # ): - # return - # - # deepspeed = distributed_utils.backend.backend_module - # deepspeed.zero.register_external_parameters(self, self.codebook.weight) - - def norm(self, images): - if not exists(self.normalization): - return images - - means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization) - means, stds = map(lambda t: rearrange(t, 'c -> () c () ()'), (means, stds)) - images = images.clone() - images.sub_(means).div_(stds) - return images - - @torch.no_grad() - @eval_decorator - def get_codebook_indices(self, images): - logits = self(images, return_logits = True) - codebook_indices = logits.argmax(dim = 1).flatten(1) - return codebook_indices - - def decode( - self, - img_seq - ): - image_embeds = self.codebook(img_seq) - b, n, d = image_embeds.shape - h = w = int(sqrt(n)) - - image_embeds = rearrange(image_embeds, 'b (h w) d -> b d h w', h = h, w = w) - images = self.decoder(image_embeds) - return images - - def forward( - self, - img, - return_loss = False, - return_recons = False, - return_logits = False, - temp = None - ): - device, num_tokens, image_shape, kl_div_loss_weight = img.device, self.num_tokens, self.image_shape, self.kl_div_loss_weight - assert img.shape[-1] == image_shape[1] and img.shape[-2] == image_shape[0], f'input must have the correct image size {image_shape[0]}x{image_shape[1]}' - - img = self.norm(img) - - logits = self.encoder(img) - - if return_logits: - return logits # return logits for getting hard image indices for DALL-E training - - temp = default(temp, self.temperature) - soft_one_hot = F.gumbel_softmax(logits, tau = temp, dim = 1, hard = self.straight_through) - sampled = einsum('b n h w, n d -> b d h w', soft_one_hot, self.codebook.weight) - out = self.decoder(sampled) - - if not return_loss: - return out - - # reconstruction loss - - recon_loss = self.loss_fn(img, out) - - # kl divergence - - logits = rearrange(logits, 'b n h w -> b (h w) n') - log_qy = F.log_softmax(logits, dim = -1) - log_uniform = torch.log(torch.tensor([1. 
/ num_tokens], device = device)) - kl_div = F.kl_div(log_uniform, log_qy, None, None, 'batchmean', log_target = True) - - loss = recon_loss + (kl_div * kl_div_loss_weight) - - if not return_recons: - return loss - - return loss, out - -class ConditionalDiscreteVAE(nn.Module): - def __init__( - self, - input_shape = (256,256), - num_tokens = 512, - codebook_dim = 512, - num_layers = 3, - num_resnet_blocks = 0, - hidden_dim = 64, - cond_dim = 0, - channels = 3, - smooth_l1_loss = False, - temperature = 0.9, - straight_through = False, - kl_div_loss_weight = 0., - normalization = None, - prior_nhead = 8, - prior_dhid = 512, - prior_nlayers = 8, - prior_dropout = 0, - prior_use_pos_emb = True, - prior_use_x_transformers = False, - opt = None, - cond_vae = False - ): - super().__init__() - assert num_layers >= 1, 'number of layers must be greater than or equal to 1' - has_resblocks = num_resnet_blocks > 0 - - self.input_shape = input_shape - self.num_tokens = num_tokens - self.num_layers = num_layers - self.temperature = temperature - self.straight_through = straight_through - self.codebook = nn.Embedding(num_tokens, codebook_dim) - self.cond_dim = cond_dim - self.cond_vae = cond_vae - - hdim = hidden_dim - - enc_chans = [hidden_dim] * num_layers - dec_chans = list(reversed(enc_chans)) - - if cond_vae: - enc_chans = [channels + cond_dim, *enc_chans] - else: - enc_chans = [channels, *enc_chans] - - if not has_resblocks: - if cond_vae: - dec_init_chan = codebook_dim + cond_dim - else: - dec_init_chan = codebook_dim - else: - dec_init_chan = dec_chans[0] - dec_chans = [dec_init_chan, *dec_chans] - - enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans)) - - enc_layers = [] - dec_layers = [] - - - if input_shape[0] == 1: - kernel_size1 = 1 - padding_size1 = 0 - codebook_layer_shape1 = 1 - elif input_shape[0] in [2,3,4]: - kernel_size1 = 3 - padding_size1 = 1 - codebook_layer_shape1 = input_shape[0] - else: - #kernel_size1 = 4 - kernel_size1 = 3 - padding_size1 = 1 - #codebook_layer_shape1 = input_shape[0] - num_layers - codebook_layer_shape1 = input_shape[0] - - if input_shape[1] == 1: - kernel_size2 = 1 - padding_size2 = 0 - codebook_layer_shape2 = 1 - elif input_shape[1] in [2,3,4]: - kernel_size2 = 3 - padding_size2 = 1 - codebook_layer_shape2 = input_shape[1] - else: - #kernel_size2 = 4 - kernel_size2 = 3 - padding_size2 = 1 - #codebook_layer_shape2 = input_shape[1] - num_layers - codebook_layer_shape2 = input_shape[1] - - self.codebook_layer_shape = (codebook_layer_shape1,codebook_layer_shape2) - kernel_shape = (kernel_size1, kernel_size2) - padding_shape = (padding_size1, padding_size2) - for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io): - enc_layers.append(nn.Sequential(nn.Conv2d(enc_in, enc_out, kernel_shape, stride = 1, padding = padding_shape), nn.ReLU())) - dec_layers.append(nn.Sequential(nn.ConvTranspose2d(dec_in, dec_out, kernel_shape, stride = 1, padding = padding_shape), nn.ReLU())) - - for _ in range(num_resnet_blocks): - dec_layers.insert(0, ResBlock(dec_chans[1])) - enc_layers.append(ResBlock(enc_chans[-1])) - - if num_resnet_blocks > 0: - if cond_vae: - dec_layers.insert(0, nn.Conv2d(codebook_dim + cond_dim, dec_chans[1], 1)) - else: - dec_layers.insert(0, nn.Conv2d(codebook_dim, dec_chans[1], 1)) - - enc_layers.append(nn.Conv2d(enc_chans[-1], num_tokens, 1)) - dec_layers.append(nn.Conv2d(dec_chans[-1], channels, 1)) - - self.cond_upsampler = torch.nn.Upsample(size=input_shape) #upsampler to feed the conditioning to 
the input of the encoder - self.encoder = nn.Sequential(*enc_layers) - self.decoder = nn.Sequential(*dec_layers) - - self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss - self.kl_div_loss_weight = kl_div_loss_weight - - # take care of normalization within class - self.normalization = normalization - - latent_size = codebook_layer_shape1*codebook_layer_shape2 - self.latent_size = latent_size - if cond_dim > 0: - self.prior_transformer = ContDiscTransformer(cond_dim, num_tokens, codebook_dim, prior_nhead, prior_dhid, prior_nlayers, prior_dropout, - use_pos_emb=prior_use_pos_emb, - src_length=latent_size, - tgt_length=latent_size, - use_x_transformers=prior_use_x_transformers, - opt=opt) - - # self._register_external_parameters() - - # def _register_external_parameters(self): - # """Register external parameters for DeepSpeed partitioning.""" - # if ( - # not distributed_utils.is_distributed - # or not distributed_utils.using_backend( - # distributed_utils.DeepSpeedBackend) - # ): - # return - # - # deepspeed = distributed_utils.backend.backend_module - # deepspeed.zero.register_external_parameters(self, self.codebook.weight) - - def norm(self, images): - if not exists(self.normalization): - return images - - means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization) - means, stds = map(lambda t: rearrange(t, 'c -> () c () ()'), (means, stds)) - images = images.clone() - images.sub_(means).div_(stds) - return images - - @torch.no_grad() - @eval_decorator - def get_codebook_indices(self, inputs, cond=None): - logits = self(inputs, cond, return_logits = True) - codebook_indices = logits.argmax(dim = 1).flatten(1) - return codebook_indices - - def decode( - self, - img_seq, - cond = None - ): - image_embeds = self.codebook(img_seq) - b, n, d = image_embeds.shape - h = w = int(sqrt(n)) - - image_embeds = rearrange(image_embeds, 'b (h w) d -> b d h w', h = h, w = w) - if cond is not None: - image_embeds_cond = torch.cat([image_embeds, cond], dim = 1) - images = self.decoder(image_embeds_cond) - else: - images = self.decoder(image_embeds) - - return images - - def prior_logp( - self, - inputs, - cond = None, - return_accuracy = False, - detach_cond = False - ): - # import pdb;pdb.set_trace() - #if cond is None: raise NotImplementedError("Haven't implemented non-conditional DVAEs") - if len(inputs.shape) == 3: - inputs = inputs.reshape(inputs.shape[0], inputs.shape[1],*self.input_shape) - if len(cond.shape) == 3: - cond = cond.reshape(cond.shape[0], cond.shape[1],*self.codebook_layer_shape) - with torch.no_grad(): - if self.cond_vae: - labels = self.get_codebook_indices(inputs, cond) - else: - labels = self.get_codebook_indices(inputs) - if detach_cond: - cond = cond.detach() - logits = self.prior_transformer(cond.squeeze(-1).permute(2,0,1), labels.permute(1,0)).permute(1,2,0) - loss = F.cross_entropy(logits, labels) - if not return_accuracy: - return loss - # import pdb;pdb.set_trace() - predicted = logits.argmax(dim = 1).flatten(1) - accuracy = (predicted == labels).sum()/predicted.nelement() - return loss, accuracy - - def generate(self, cond, temp=1.0, filter_thres = 0.5): - #if cond is None: raise NotImplementedError("Haven't implemented non-conditional DVAEs") - if len(cond.shape) == 3: - cond = cond.reshape(cond.shape[0], cond.shape[1],*self.codebook_layer_shape) - dummy = torch.zeros(1,1).long().to(cond.device) - tokens = [] - for i in range(self.latent_size): - # print(i) - logits = self.prior_transformer(cond.squeeze(-1).permute(2,0,1), 
torch.cat(tokens+[dummy], 0)).permute(1,2,0)[:,-1,:] - filtered_logits = top_k(logits, thres = filter_thres) - probs = F.softmax(filtered_logits / temp, dim = -1) - sampled = torch.multinomial(probs, 1) - tokens.append(sampled) - print(tokens) - embs = self.codebook(torch.cat(tokens, 0)) - # import pdb;pdb.set_trace() - if self.cond_vae: - sampled_cond = torch.cat([embs.permute(2,0,1).unsqueeze(0),cond], dim=1) - else: - sampled_cond = embs.permute(2,0,1).unsqueeze(0) - out = self.decoder(sampled_cond) - return out - - def forward( - self, - inp, - cond = None, - return_loss = False, - return_recons = False, - return_logits = False, - temp = None - ): - if len(inp.shape) == 3: - inp = inp.reshape(inp.shape[0], inp.shape[1],*self.input_shape) - device, num_tokens, input_shape, kl_div_loss_weight = inp.device, self.num_tokens, self.input_shape, self.kl_div_loss_weight - assert inp.shape[-1] == input_shape[1] and inp.shape[-2] == input_shape[0], f'input must have the correct image size {input_shape[0]}x{input_shape[1]}. Instead got {inp.shape[0]}x{inp.shape[1]}' - - inp = self.norm(inp) - if cond is not None: - if len(cond.shape) == 3: - cond = cond.reshape(cond.shape[0], cond.shape[1],*self.codebook_layer_shape) - cond_upsampled = self.cond_upsampler(cond) - inp_cond = torch.cat([inp,cond_upsampled], dim=1) - inp_cond = self.norm(inp_cond) - else: - inp_cond = self.norm(inp) - - logits = self.encoder(inp_cond) - # codebook_indices = logits.argmax(dim = 1).flatten(1) - # print(codebook_indices.shape) - # print(codebook_indices) - # print(list(self.encoder.parameters())[1].data) - # for p in self.prior_transformer.parameters(): - # print(p.norm()) - - if return_logits: - return logits # return logits for getting hard image indices for DALL-E training - - temp = default(temp, self.temperature) - soft_one_hot = F.gumbel_softmax(logits, tau = temp, dim = 1, hard = self.straight_through) - sampled = einsum('b n h w, n d -> b d h w', soft_one_hot, self.codebook.weight) - if cond is not None: - sampled_cond = torch.cat([sampled,cond], dim=1) - out = self.decoder(sampled_cond) - else: - out = self.decoder(sampled) - - if not return_loss: - return out - - # reconstruction loss - - # import pdb;pdb.set_trace() - recon_loss = self.loss_fn(inp, out) - - # kl divergence - - logits = rearrange(logits, 'b n h w -> b (h w) n') - log_qy = F.log_softmax(logits, dim = -1) - log_uniform = torch.log(torch.tensor([1. 
/ num_tokens], device = device)) - kl_div = F.kl_div(log_uniform, log_qy, None, None, 'batchmean', log_target = True) - - loss = recon_loss + (kl_div * kl_div_loss_weight) - - if not return_recons: - return loss - - return loss, out - -class ContDiscTransformer(nn.Module): - - def __init__(self, src_d, tgt_num_tokens, tgt_emb_dim, nhead, dhid, nlayers, dropout=0.5,use_pos_emb=False,src_length=0,tgt_length=0,use_x_transformers=False,opt=None): - super(ContDiscTransformer, self).__init__() - self.transformer = EncDecTransformerModel(tgt_num_tokens, src_d, tgt_emb_dim, nhead, dhid, nlayers, dropout=dropout,use_pos_emb=use_pos_emb,src_length=src_length,tgt_length=tgt_length,use_x_transformers=use_x_transformers,opt=opt) - #self.transformer = EncDecTransformerModel(tgt_num_tokens, src_d, tgt_emb_dim, nhead, dhid, nlayers, dropout=dropout,use_pos_emb=False,src_length=src_length,tgt_length=tgt_length,use_x_transformers=use_x_transformers,opt=opt) - # self.transformer = EncDecXTransformer(dim=dhid, dec_dim_out=tgt_num_tokens, enc_dim_in=src_d, enc_dim_out=tgt_emb_dim, dec_din_in=tgt_emb_dim, enc_heads=nhead, dec_heads=nhead, enc_depth=nlayers, dec_depth=nlayers, enc_dropout=dropout, dec_dropout=dropout, enc_max_seq_len=1024, dec_max_seq_len=1024) - self.embedding = nn.Embedding(tgt_num_tokens, tgt_emb_dim) - self.first_input = nn.Parameter((torch.randn(1,1,tgt_emb_dim))) - - def forward(self, src, tgt): - tgt = tgt[:-1] - embs = self.embedding(tgt) - embs = torch.cat([torch.tile(self.first_input, (1,embs.shape[1],1)), embs], 0) - output = self.transformer(src,embs) - return output diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/__init__.py deleted file mode 100644 index 8c26f1b37dae854f51da938da2fa67a8ef48ce5a..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - -import torch -import numpy as np -from . 
import util -from .body import Body -from .hand import Hand -from annotator.util import annotator_ckpts_path - - -body_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/body_pose_model.pth" -hand_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/hand_pose_model.pth" - - -class OpenposeDetector: - def __init__(self): - body_modelpath = os.path.join(annotator_ckpts_path, "body_pose_model.pth") - hand_modelpath = os.path.join(annotator_ckpts_path, "hand_pose_model.pth") - - if not os.path.exists(hand_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(body_model_path, model_dir=annotator_ckpts_path) - load_file_from_url(hand_model_path, model_dir=annotator_ckpts_path) - - self.body_estimation = Body(body_modelpath) - self.hand_estimation = Hand(hand_modelpath) - - def __call__(self, oriImg, hand=False): - oriImg = oriImg[:, :, ::-1].copy() - with torch.no_grad(): - candidate, subset = self.body_estimation(oriImg) - canvas = np.zeros_like(oriImg) - canvas = util.draw_bodypose(canvas, candidate, subset) - if hand: - hands_list = util.handDetect(candidate, subset, oriImg) - all_hand_peaks = [] - for x, y, w, is_left in hands_list: - peaks = self.hand_estimation(oriImg[y:y+w, x:x+w, :]) - peaks[:, 0] = np.where(peaks[:, 0] == 0, peaks[:, 0], peaks[:, 0] + x) - peaks[:, 1] = np.where(peaks[:, 1] == 0, peaks[:, 1], peaks[:, 1] + y) - all_hand_peaks.append(peaks) - canvas = util.draw_handpose(canvas, all_hand_peaks) - return canvas, dict(candidate=candidate.tolist(), subset=subset.tolist()) diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/utils.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/utils.py deleted file mode 100644 index e9f0318e306fa04bff0ada70486b41aaa69b07c8..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/utils.py +++ /dev/null @@ -1,608 +0,0 @@ -import argparse -import json -import warnings -from collections import OrderedDict -from copy import deepcopy -from typing import Any, Dict, List - -import numpy as np -import torch -from transformers import AutoTokenizer - -from groundingdino.util.slconfig import SLConfig - - -def slprint(x, name="x"): - if isinstance(x, (torch.Tensor, np.ndarray)): - print(f"{name}.shape:", x.shape) - elif isinstance(x, (tuple, list)): - print("type x:", type(x)) - for i in range(min(10, len(x))): - slprint(x[i], f"{name}[{i}]") - elif isinstance(x, dict): - for k, v in x.items(): - slprint(v, f"{name}[{k}]") - else: - print(f"{name}.type:", type(x)) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". 
(%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class CocoClassMapper: - def __init__(self) -> None: - self.category_map_str = { - "1": 1, - "2": 2, - "3": 3, - "4": 4, - "5": 5, - "6": 6, - "7": 7, - "8": 8, - "9": 9, - "10": 10, - "11": 11, - "13": 12, - "14": 13, - "15": 14, - "16": 15, - "17": 16, - "18": 17, - "19": 18, - "20": 19, - "21": 20, - "22": 21, - "23": 22, - "24": 23, - "25": 24, - "27": 25, - "28": 26, - "31": 27, - "32": 28, - "33": 29, - "34": 30, - "35": 31, - "36": 32, - "37": 33, - "38": 34, - "39": 35, - "40": 36, - "41": 37, - "42": 38, - "43": 39, - "44": 40, - "46": 41, - "47": 42, - "48": 43, - "49": 44, - "50": 45, - "51": 46, - "52": 47, - "53": 48, - "54": 49, - "55": 50, - "56": 51, - "57": 52, - "58": 53, - "59": 54, - "60": 55, - "61": 56, - "62": 57, - "63": 58, - "64": 59, - "65": 60, - "67": 61, - "70": 62, - "72": 63, - "73": 64, - "74": 65, - "75": 66, - "76": 67, - "77": 68, - "78": 69, - "79": 70, - "80": 71, - "81": 72, - "82": 73, - "84": 74, - "85": 75, - "86": 76, - "87": 77, - "88": 78, - "89": 79, - "90": 80, - } - self.origin2compact_mapper = {int(k): v - 1 for k, v in self.category_map_str.items()} - self.compact2origin_mapper = {int(v - 1): int(k) for k, v in self.category_map_str.items()} - - def origin2compact(self, idx): - return self.origin2compact_mapper[int(idx)] - - def compact2origin(self, idx): - return self.compact2origin_mapper[int(idx)] - - -def to_device(item, device): - if isinstance(item, torch.Tensor): - return item.to(device) - elif isinstance(item, list): - return [to_device(i, device) for i in item] - elif isinstance(item, dict): - return {k: to_device(v, device) for k, v in item.items()} - else: - raise NotImplementedError( - "Call Shilong if you use other containers! type: {}".format(type(item)) - ) - - -# -def get_gaussian_mean(x, axis, other_axis, softmax=True): - """ - - Args: - x (float): Input images(BxCxHxW) - axis (int): The index for weighted mean - other_axis (int): The other index - - Returns: weighted index for axis, BxC - - """ - mat2line = torch.sum(x, axis=other_axis) - # mat2line = mat2line / mat2line.mean() * 10 - if softmax: - u = torch.softmax(mat2line, axis=2) - else: - u = mat2line / (mat2line.sum(2, keepdim=True) + 1e-6) - size = x.shape[axis] - ind = torch.linspace(0, 1, size).to(x.device) - batch = x.shape[0] - channel = x.shape[1] - index = ind.repeat([batch, channel, 1]) - mean_position = torch.sum(index * u, dim=2) - return mean_position - - -def get_expected_points_from_map(hm, softmax=True): - """get_gaussian_map_from_points - B,C,H,W -> B,N,2 float(0, 1) float(0, 1) - softargmax function - - Args: - hm (float): Input images(BxCxHxW) - - Returns: - weighted index for axis, BxCx2. float between 0 and 1. 
- - """ - # hm = 10*hm - B, C, H, W = hm.shape - y_mean = get_gaussian_mean(hm, 2, 3, softmax=softmax) # B,C - x_mean = get_gaussian_mean(hm, 3, 2, softmax=softmax) # B,C - # return torch.cat((x_mean.unsqueeze(-1), y_mean.unsqueeze(-1)), 2) - return torch.stack([x_mean, y_mean], dim=2) - - -# Positional encoding (section 5.1) -# borrow from nerf -class Embedder: - def __init__(self, **kwargs): - self.kwargs = kwargs - self.create_embedding_fn() - - def create_embedding_fn(self): - embed_fns = [] - d = self.kwargs["input_dims"] - out_dim = 0 - if self.kwargs["include_input"]: - embed_fns.append(lambda x: x) - out_dim += d - - max_freq = self.kwargs["max_freq_log2"] - N_freqs = self.kwargs["num_freqs"] - - if self.kwargs["log_sampling"]: - freq_bands = 2.0 ** torch.linspace(0.0, max_freq, steps=N_freqs) - else: - freq_bands = torch.linspace(2.0**0.0, 2.0**max_freq, steps=N_freqs) - - for freq in freq_bands: - for p_fn in self.kwargs["periodic_fns"]: - embed_fns.append(lambda x, p_fn=p_fn, freq=freq: p_fn(x * freq)) - out_dim += d - - self.embed_fns = embed_fns - self.out_dim = out_dim - - def embed(self, inputs): - return torch.cat([fn(inputs) for fn in self.embed_fns], -1) - - -def get_embedder(multires, i=0): - import torch.nn as nn - - if i == -1: - return nn.Identity(), 3 - - embed_kwargs = { - "include_input": True, - "input_dims": 3, - "max_freq_log2": multires - 1, - "num_freqs": multires, - "log_sampling": True, - "periodic_fns": [torch.sin, torch.cos], - } - - embedder_obj = Embedder(**embed_kwargs) - embed = lambda x, eo=embedder_obj: eo.embed(x) - return embed, embedder_obj.out_dim - - -class APOPMeter: - def __init__(self) -> None: - self.tp = 0 - self.fp = 0 - self.tn = 0 - self.fn = 0 - - def update(self, pred, gt): - """ - Input: - pred, gt: Tensor() - """ - assert pred.shape == gt.shape - self.tp += torch.logical_and(pred == 1, gt == 1).sum().item() - self.fp += torch.logical_and(pred == 1, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 0, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 1, gt == 0).sum().item() - - def update_cm(self, tp, fp, tn, fn): - self.tp += tp - self.fp += fp - self.tn += tn - self.tn += fn - - -def inverse_sigmoid(x, eps=1e-5): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def get_raw_dict(args): - """ - return the dicf contained in args. - - e.g: - >>> with open(path, 'w') as f: - json.dump(get_raw_dict(args), f, indent=2) - """ - if isinstance(args, argparse.Namespace): - return vars(args) - elif isinstance(args, dict): - return args - elif isinstance(args, SLConfig): - return args._cfg_dict - else: - raise NotImplementedError("Unknown type {}".format(type(args))) - - -def stat_tensors(tensor): - assert tensor.dim() == 1 - tensor_sm = tensor.softmax(0) - entropy = (tensor_sm * torch.log(tensor_sm + 1e-9)).sum() - - return { - "max": tensor.max(), - "min": tensor.min(), - "mean": tensor.mean(), - "var": tensor.var(), - "std": tensor.var() ** 0.5, - "entropy": entropy, - } - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... 
return 'info' - >>> foo = Foo() - >>> assert str(foo) == '<Foo(info)>' - >>> assert repr(foo).startswith('<Foo(info) at ') - - Example: - >>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... return 5 - >>> baz = Baz() - >>> assert str(baz) == '<Baz(5)>' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, "__len__"): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError(f"Define the __nice__ method for {self.__class__!r}") - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f"<{classname}({nice}) at {hex(id(self))}>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f"<{classname}({nice})>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. 
- - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes - - -class ModelEma(torch.nn.Module): - def __init__(self, model, decay=0.9997, device=None): - super(ModelEma, self).__init__() - # make a copy of the model for accumulating moving average of weights - self.module = deepcopy(model) - self.module.eval() - - # import ipdb; ipdb.set_trace() - - self.decay = decay - self.device = device # perform ema on different device from model if set - if self.device is not None: - self.module.to(device=device) - - def _update(self, model, update_fn): - with torch.no_grad(): - for ema_v, model_v in zip( - self.module.state_dict().values(), model.state_dict().values() - ): - if self.device is not None: - model_v = model_v.to(device=self.device) - ema_v.copy_(update_fn(ema_v, model_v)) - - def update(self, model): - self._update(model, update_fn=lambda e, m: self.decay * e + (1.0 - self.decay) * m) - - def set(self, model): - self._update(model, update_fn=lambda e, m: m) - - -class BestMetricSingle: - def __init__(self, init_res=0.0, better="large") -> None: - self.init_res = init_res - self.best_res = init_res - self.best_ep = -1 - - self.better = better - assert better in ["large", "small"] - - def isbetter(self, new_res, old_res): - if self.better == "large": - return new_res > old_res - if self.better == "small": - return new_res < old_res - - def update(self, new_res, ep): - if self.isbetter(new_res, self.best_res): - self.best_res = new_res - self.best_ep = ep - return True - return False - - def __str__(self) -> str: - return "best_res: {}\t best_ep: {}".format(self.best_res, self.best_ep) - - def __repr__(self) -> str: - return self.__str__() - - def summary(self) -> dict: - return { - "best_res": self.best_res, - "best_ep": self.best_ep, - } - - -class BestMetricHolder: - def __init__(self, init_res=0.0, better="large", use_ema=False) -> None: - self.best_all = BestMetricSingle(init_res, better) - self.use_ema = use_ema - if use_ema: - self.best_ema = BestMetricSingle(init_res, better) - self.best_regular = BestMetricSingle(init_res, better) - - def update(self, new_res, epoch, is_ema=False): - """ - return if the results is the best. 
- """ - if not self.use_ema: - return self.best_all.update(new_res, epoch) - else: - if is_ema: - self.best_ema.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - else: - self.best_regular.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - - def summary(self): - if not self.use_ema: - return self.best_all.summary() - - res = {} - res.update({f"all_{k}": v for k, v in self.best_all.summary().items()}) - res.update({f"regular_{k}": v for k, v in self.best_regular.summary().items()}) - res.update({f"ema_{k}": v for k, v in self.best_ema.summary().items()}) - return res - - def __repr__(self) -> str: - return json.dumps(self.summary(), indent=2) - - def __str__(self) -> str: - return self.__repr__() - - -def targets_to(targets: List[Dict[str, Any]], device): - """Moves the target dicts to the given device.""" - excluded_keys = [ - "questionId", - "tokens_positive", - "strings_positive", - "tokens", - "dataset_name", - "sentence_id", - "original_img_id", - "nb_eval", - "task_id", - "original_id", - "token_span", - "caption", - "dataset_type", - ] - return [ - {k: v.to(device) if k not in excluded_keys else v for k, v in t.items()} for t in targets - ] - - -def get_phrases_from_posmap( - posmap: torch.BoolTensor, tokenized: Dict, tokenizer: AutoTokenizer -): - assert isinstance(posmap, torch.Tensor), "posmap must be torch.Tensor" - if posmap.dim() == 1: - non_zero_idx = posmap.nonzero(as_tuple=True)[0].tolist() - token_ids = [tokenized["input_ids"][i] for i in non_zero_idx] - return tokenizer.decode(token_ids) - else: - raise NotImplementedError("posmap must be 1-dim") diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/fcenet/fcenet_resnet50-oclip_fpn_1500e_icdar2015.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/fcenet/fcenet_resnet50-oclip_fpn_1500e_icdar2015.py deleted file mode 100644 index 87d87de5d1ae38deef32dcca42018eeab57cf359..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/fcenet/fcenet_resnet50-oclip_fpn_1500e_icdar2015.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = [ - 'fcenet_resnet50_fpn_1500e_icdar2015.py', -] -load_from = None - -_base_.model.backbone = dict( - type='CLIPResNet', - out_indices=(1, 2, 3), - init_cfg=dict( - type='Pretrained', - checkpoint='https://download.openmmlab.com/' - 'mmocr/backbone/resnet50-oclip-7ba0c533.pth')) - -_base_.train_dataloader.batch_size = 16 -_base_.train_dataloader.num_workers = 24 -_base_.optim_wrapper.optimizer.lr = 0.0005 diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/utils/check_argument.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/utils/check_argument.py deleted file mode 100644 index 34cbe8dc2658d725c328eb5cd98652633a22aa24..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/utils/check_argument.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - - -def is_3dlist(x): - """check x is 3d-list([[[1], []]]) or 2d empty list([[], []]) or 1d empty - list([]). - - Notice: - The reason that it contains 1d or 2d empty list is because - some arguments from gt annotation file or model prediction - may be empty, but usually, it should be 3d-list. - """ - if not isinstance(x, list): - return False - if len(x) == 0: - return True - for sub_x in x: - if not is_2dlist(sub_x): - return False - - return True - - -def is_2dlist(x): - """check x is 2d-list([[1], []]) or 1d empty list([]). 
- - Notice: - The reason that it contains 1d empty list is because - some arguments from gt annotation file or model prediction - may be empty, but usually, it should be 2d-list. - """ - if not isinstance(x, list): - return False - if len(x) == 0: - return True - - return all(isinstance(item, list) for item in x) - - -def is_type_list(x, type): - - if not isinstance(x, list): - return False - - return all(isinstance(item, type) for item in x) - - -def is_none_or_type(x, type): - - return isinstance(x, type) or x is None - - -def equal_len(*argv): - assert len(argv) > 0 - - num_arg = len(argv[0]) - for arg in argv: - if len(arg) != num_arg: - return False - return True - - -def valid_boundary(x, with_score=True): - num = len(x) - if num < 8: - return False - if num % 2 == 0 and (not with_score): - return True - if num % 2 == 1 and with_score: - return True - - return False diff --git a/spaces/MrBodean/VoiceClone/vocoder_train.py b/spaces/MrBodean/VoiceClone/vocoder_train.py deleted file mode 100644 index d712ffa3e6c92a091aa18dc90f0027f46940e400..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/vocoder_train.py +++ /dev/null @@ -1,56 +0,0 @@ -from utils.argutils import print_args -from vocoder.train import train -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Trains the vocoder from the synthesizer audios and the GTA synthesized mels, " - "or ground truth mels.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("datasets_root", type=str, help= \ - "Path to the directory containing your SV2TTS directory. Specifying --syn_dir or --voc_dir " - "will take priority over this argument.") - parser.add_argument("--syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("--voc_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the vocoder directory that contains the GTA synthesized mel spectrograms. " - "Defaults to /SV2TTS/vocoder/. Unused if --ground_truth is passed.") - parser.add_argument("-m", "--models_dir", type=str, default="vocoder/saved_models/", help=\ - "Path to the directory that will contain the saved model weights, as well as backups " - "of those weights and wavs generated during training.") - parser.add_argument("-g", "--ground_truth", action="store_true", help= \ - "Train on ground truth spectrograms (/SV2TTS/synthesizer/mels).") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. 
Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "syn_dir"): - args.syn_dir = Path(args.datasets_root, "SV2TTS", "synthesizer") - args.syn_dir = Path(args.syn_dir) - if not hasattr(args, "voc_dir"): - args.voc_dir = Path(args.datasets_root, "SV2TTS", "vocoder") - args.voc_dir = Path(args.voc_dir) - del args.datasets_root - args.models_dir = Path(args.models_dir) - args.models_dir.mkdir(exist_ok=True) - - # Run the training - print_args(args, parser) - train(**vars(args)) - \ No newline at end of file diff --git a/spaces/NeuralInternet/Audio-to-Text_Playground/app.py b/spaces/NeuralInternet/Audio-to-Text_Playground/app.py deleted file mode 100644 index e5b88e53c45c87cc26c020ed485739b9a710b5b4..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Audio-to-Text_Playground/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch - -import gradio as gr -import pytube as pt -from transformers import pipeline - -MODEL_NAME = "openai/whisper-large-v2" - -device = 0 if torch.cuda.is_available() else "cpu" - -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, - # return_timestamps=True -) - - -all_special_ids = pipe.tokenizer.all_special_ids -transcribe_token_id = all_special_ids[-5] -translate_token_id = all_special_ids[-6] - - -def transcribe(microphone, file_upload, task): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - pipe.model.config.forced_decoder_ids = [[2, transcribe_token_id if task=="transcribe" else translate_token_id]] - text = pipe(file,return_timestamps=True)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>' - " </center>
" - ) - return HTML_str - - -def yt_transcribe(yt_url, task): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - pipe.model.config.forced_decoder_ids = [[2, transcribe_token_id if task=="transcribe" else translate_token_id]] - - text = pipe("audio.mp3")["text"] - - return html_embed_str, text - - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Audio-to-Text Playground: Transcribe Audio", - description=( - "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files" - " of arbitrary length." - ), - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[ - gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe") - ], - outputs=["html", "text"], - layout="horizontal", - theme="huggingface", - title="Audio-to-Text Playground: Transcribe YouTube", - description=( - "Transcribe long-form YouTube videos with the click of a button! Demo uses the checkpoint" - f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe video files of" - " arbitrary length." - ), - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"]) - -demo.launch(enable_queue=True) - diff --git a/spaces/Nightwing25/AICoverGen/src/infer_pack/attentions.py b/spaces/Nightwing25/AICoverGen/src/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/Nightwing25/AICoverGen/src/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - 
self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - 
torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/NimaBoscarino/climategan/climategan/norms.py b/spaces/NimaBoscarino/climategan/climategan/norms.py deleted file mode 100644 index c448248488af0baf131628e994cb17df20a58cbd..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/climategan/norms.py +++ /dev/null @@ -1,186 +0,0 @@ -"""Normalization layers used in blocks -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class AdaptiveInstanceNorm2d(nn.Module): - def __init__(self, num_features, eps=1e-5, momentum=0.1): - super(AdaptiveInstanceNorm2d, self).__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - # weight and bias are dynamically assigned - self.weight = None - self.bias = None - # just dummy buffers, not used - self.register_buffer("running_mean", torch.zeros(num_features)) - self.register_buffer("running_var", torch.ones(num_features)) - - def forward(self, x): - assert ( - self.weight is not None and self.bias is not None - ), "Please assign weight and bias before calling AdaIN!" 
- b, c = x.size(0), x.size(1) - running_mean = self.running_mean.repeat(b) - running_var = self.running_var.repeat(b) - - # Apply instance norm - x_reshaped = x.contiguous().view(1, b * c, *x.size()[2:]) - - out = F.batch_norm( - x_reshaped, - running_mean, - running_var, - self.weight, - self.bias, - True, - self.momentum, - self.eps, - ) - - return out.view(b, c, *x.size()[2:]) - - def __repr__(self): - return self.__class__.__name__ + "(" + str(self.num_features) + ")" - - -class LayerNorm(nn.Module): - def __init__(self, num_features, eps=1e-5, affine=True): - super(LayerNorm, self).__init__() - self.num_features = num_features - self.affine = affine - self.eps = eps - - if self.affine: - self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_()) - self.beta = nn.Parameter(torch.zeros(num_features)) - - def forward(self, x): - shape = [-1] + [1] * (x.dim() - 1) - # print(x.size()) - if x.size(0) == 1: - # These two lines run much faster in pytorch 0.4 - # than the two lines listed below. - mean = x.view(-1).mean().view(*shape) - std = x.view(-1).std().view(*shape) - else: - mean = x.view(x.size(0), -1).mean(1).view(*shape) - std = x.view(x.size(0), -1).std(1).view(*shape) - - x = (x - mean) / (std + self.eps) - - if self.affine: - shape = [1, -1] + [1] * (x.dim() - 2) - x = x * self.gamma.view(*shape) + self.beta.view(*shape) - return x - - -def l2normalize(v, eps=1e-12): - return v / (v.norm() + eps) - - -class SpectralNorm(nn.Module): - """ - Based on the paper "Spectral Normalization for Generative Adversarial Networks" - by Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida and the - Pytorch implementation: - https://github.com/christiancosgrove/pytorch-spectral-normalization-gan - """ - - def __init__(self, module, name="weight", power_iterations=1): - super().__init__() - self.module = module - self.name = name - self.power_iterations = power_iterations - if not self._made_params(): - self._make_params() - - def _update_u_v(self): - u = getattr(self.module, self.name + "_u") - v = getattr(self.module, self.name + "_v") - w = getattr(self.module, self.name + "_bar") - - height = w.data.shape[0] - for _ in range(self.power_iterations): - v.data = l2normalize(torch.mv(torch.t(w.view(height, -1).data), u.data)) - u.data = l2normalize(torch.mv(w.view(height, -1).data, v.data)) - - # sigma = torch.dot(u.data, torch.mv(w.view(height,-1).data, v.data)) - sigma = u.dot(w.view(height, -1).mv(v)) - setattr(self.module, self.name, w / sigma.expand_as(w)) - - def _made_params(self): - try: - u = getattr(self.module, self.name + "_u") # noqa: F841 - v = getattr(self.module, self.name + "_v") # noqa: F841 - w = getattr(self.module, self.name + "_bar") # noqa: F841 - return True - except AttributeError: - return False - - def _make_params(self): - w = getattr(self.module, self.name) - - height = w.data.shape[0] - width = w.view(height, -1).data.shape[1] - - u = nn.Parameter(w.data.new(height).normal_(0, 1), requires_grad=False) - v = nn.Parameter(w.data.new(width).normal_(0, 1), requires_grad=False) - u.data = l2normalize(u.data) - v.data = l2normalize(v.data) - w_bar = nn.Parameter(w.data) - - del self.module._parameters[self.name] - - self.module.register_parameter(self.name + "_u", u) - self.module.register_parameter(self.name + "_v", v) - self.module.register_parameter(self.name + "_bar", w_bar) - - def forward(self, *args): - self._update_u_v() - return self.module.forward(*args) - - -class SPADE(nn.Module): - def __init__(self, param_free_norm_type, kernel_size, norm_nc, 
cond_nc): - super().__init__() - - if param_free_norm_type == "instance": - self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False) - # elif param_free_norm_type == "syncbatch": - # self.param_free_norm = SynchronizedBatchNorm2d(norm_nc, affine=False) - elif param_free_norm_type == "batch": - self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False) - else: - raise ValueError( - "%s is not a recognized param-free norm type in SPADE" - % param_free_norm_type - ) - - # The dimension of the intermediate embedding space. Yes, hardcoded. - nhidden = 128 - - pw = kernel_size // 2 - self.mlp_shared = nn.Sequential( - nn.Conv2d(cond_nc, nhidden, kernel_size=kernel_size, padding=pw), nn.ReLU() - ) - self.mlp_gamma = nn.Conv2d( - nhidden, norm_nc, kernel_size=kernel_size, padding=pw - ) - self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=kernel_size, padding=pw) - - def forward(self, x, segmap): - # Part 1. generate parameter-free normalized activations - normalized = self.param_free_norm(x) - - # Part 2. produce scaling and bias conditioned on semantic map - segmap = F.interpolate(segmap, size=x.size()[2:], mode="nearest") - actv = self.mlp_shared(segmap) - gamma = self.mlp_gamma(actv) - beta = self.mlp_beta(actv) - # apply scale and bias - out = normalized * (1 + gamma) + beta - - return out diff --git a/spaces/NimaBoscarino/climategan/climategan/optim.py b/spaces/NimaBoscarino/climategan/climategan/optim.py deleted file mode 100644 index 3e6ffea333aedcb4b06ed5fcf7306affc453bee1..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/climategan/optim.py +++ /dev/null @@ -1,291 +0,0 @@ -"""Define ExtraAdam and schedulers -""" -import math - -import torch -from torch.optim import Adam, Optimizer, RMSprop, lr_scheduler -from torch_optimizer import NovoGrad, RAdam - - -def get_scheduler(optimizer, hyperparameters, iterations=-1): - """Get an optimizer's learning rate scheduler based on opts - - Args: - optimizer (torch.Optimizer): optimizer for which to schedule the learning rate - hyperparameters (addict.Dict): configuration options - iterations (int, optional): The index of last epoch. Defaults to -1. - When last_epoch=-1, sets initial lr as lr. 
- - Returns: - [type]: [description] - """ - - policy = hyperparameters.get("lr_policy") - lr_step_size = hyperparameters.get("lr_step_size") - lr_gamma = hyperparameters.get("lr_gamma") - milestones = hyperparameters.get("lr_milestones") - - if policy is None or policy == "constant": - scheduler = None # constant scheduler - elif policy == "step": - scheduler = lr_scheduler.StepLR( - optimizer, step_size=lr_step_size, gamma=lr_gamma, last_epoch=iterations, - ) - elif policy == "multi_step": - if isinstance(milestones, (list, tuple)): - milestones = milestones - elif isinstance(milestones, int): - assert "lr_step_size" in hyperparameters - if iterations == -1: - last_milestone = 1000 - else: - last_milestone = iterations - milestones = list(range(milestones, last_milestone, lr_step_size)) - scheduler = lr_scheduler.MultiStepLR( - optimizer, milestones=milestones, gamma=lr_gamma, last_epoch=iterations, - ) - else: - return NotImplementedError( - "learning rate policy [%s] is not implemented", hyperparameters["lr_policy"] - ) - return scheduler - - -def get_optimizer(net, opt_conf, tasks=None, is_disc=False, iterations=-1): - """Returns a tuple (optimizer, scheduler) according to opt_conf which - should come from the trainer's opts as: trainer.opts..opt - - Args: - net (nn.Module): Network to update - opt_conf (addict.Dict): optimizer and scheduler options - tasks: list of tasks - iterations (int, optional): Last epoch number. Defaults to -1, meaning - start with base lr. - - Returns: - Tuple: (torch.Optimizer, torch._LRScheduler) - """ - opt = scheduler = None - lr_names = [] - if tasks is None: - lr_default = opt_conf.lr - params = net.parameters() - lr_names.append("full") - elif isinstance(opt_conf.lr, float): # Use default for all tasks - lr_default = opt_conf.lr - params = net.parameters() - lr_names.append("full") - elif len(opt_conf.lr) == 1: # Use default for all tasks - lr_default = opt_conf.lr.default - params = net.parameters() - lr_names.append("full") - else: - lr_default = opt_conf.lr.default - params = list() - for task in tasks: - lr = opt_conf.lr.get(task, lr_default) - parameters = None - # Parameters for encoder - if not is_disc: - if task == "m": - parameters = net.encoder.parameters() - params.append({"params": parameters, "lr": lr}) - lr_names.append("encoder") - # Parameters for decoders - if task == "p": - if hasattr(net, "painter"): - parameters = net.painter.parameters() - lr_names.append("painter") - else: - parameters = net.decoders[task].parameters() - lr_names.append(f"decoder_{task}") - else: - if task in net: - parameters = net[task].parameters() - lr_names.append(f"disc_{task}") - - if parameters is not None: - params.append({"params": parameters, "lr": lr}) - - if opt_conf.optimizer.lower() == "extraadam": - opt = ExtraAdam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - elif opt_conf.optimizer.lower() == "novograd": - opt = NovoGrad( - params, lr=lr_default, betas=(opt_conf.beta1, 0) - ) # default for beta2 is 0 - elif opt_conf.optimizer.lower() == "radam": - opt = RAdam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - elif opt_conf.optimizer.lower() == "rmsprop": - opt = RMSprop(params, lr=lr_default) - else: - opt = Adam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - scheduler = get_scheduler(opt, opt_conf, iterations) - return opt, scheduler, lr_names - - -""" -Extragradient Optimizer - -Mostly copied from the extragrad paper repo. - -MIT License -Copyright (c) Facebook, Inc. and its affiliates. 
-written by Hugo Berard (berard.hugo@gmail.com) while at Facebook. -""" - - -class Extragradient(Optimizer): - """Base class for optimizers with extrapolation step. - Arguments: - params (iterable): an iterable of :class:`torch.Tensor` s or - :class:`dict` s. Specifies what Tensors should be optimized. - defaults: (dict): a dict containing default values of optimization - options (used when a parameter group doesn't specify them). - """ - - def __init__(self, params, defaults): - super(Extragradient, self).__init__(params, defaults) - self.params_copy = [] - - def update(self, p, group): - raise NotImplementedError - - def extrapolation(self): - """Performs the extrapolation step and save a copy of the current - parameters for the update step. - """ - # Check if a copy of the parameters was already made. - is_empty = len(self.params_copy) == 0 - for group in self.param_groups: - for p in group["params"]: - u = self.update(p, group) - if is_empty: - # Save the current parameters for the update step. - # Several extrapolation step can be made before each update but - # only the parametersbefore the first extrapolation step are saved. - self.params_copy.append(p.data.clone()) - if u is None: - continue - # Update the current parameters - p.data.add_(u) - - def step(self, closure=None): - """Performs a single optimization step. - Arguments: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - if len(self.params_copy) == 0: - raise RuntimeError("Need to call extrapolation before calling step.") - - loss = None - if closure is not None: - loss = closure() - - i = -1 - for group in self.param_groups: - for p in group["params"]: - i += 1 - u = self.update(p, group) - if u is None: - continue - # Update the parameters saved during the extrapolation step - p.data = self.params_copy[i].add_(u) - - # Free the old parameters - self.params_copy = [] - return loss - - -class ExtraAdam(Extragradient): - """Implements the Adam algorithm with extrapolation step. 
- Arguments: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 1e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square (default: (0.9, 0.999)) - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - amsgrad (boolean, optional): whether to use the AMSGrad variant of this - algorithm from the paper `On the Convergence of Adam and Beyond`_ - """ - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - amsgrad=False, - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - defaults = dict( - lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad - ) - super(ExtraAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(ExtraAdam, self).__setstate__(state) - for group in self.param_groups: - group.setdefault("amsgrad", False) - - def update(self, p, group): - if p.grad is None: - return None - grad = p.grad.data - if grad.is_sparse: - raise RuntimeError( - "Adam does not support sparse gradients," - + " please consider SparseAdam instead" - ) - amsgrad = group["amsgrad"] - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p.data) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like(p.data) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. values - state["max_exp_avg_sq"] = torch.zeros_like(p.data) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - if amsgrad: - max_exp_avg_sq = state["max_exp_avg_sq"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - if group["weight_decay"] != 0: - grad = grad.add(group["weight_decay"], p.data) - - # Decay the first and second moment running average coefficient - exp_avg.mul_(beta1).add_(1 - beta1, grad) - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - if amsgrad: - # Maintains the maximum of all 2nd moment running avg. till now - torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq) # type: ignore - # Use the max. for normalizing running avg. 
of gradient - denom = max_exp_avg_sq.sqrt().add_(group["eps"]) # type: ignore - else: - denom = exp_avg_sq.sqrt().add_(group["eps"]) - - bias_correction1 = 1 - beta1 ** state["step"] - bias_correction2 = 1 - beta2 ** state["step"] - step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1 - - return -step_size * exp_avg / denom diff --git a/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation_summary.py b/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation_summary.py deleted file mode 100644 index e64a7b86d737a1a2ce422b2f14850d7f00169e23..0000000000000000000000000000000000000000 --- a/spaces/NimaBoscarino/climategan/figures/bootstrap_ablation_summary.py +++ /dev/null @@ -1,361 +0,0 @@ -""" -This script computes the median difference and confidence intervals of all techniques from the ablation study for -improving the masker evaluation metrics. The differences in the metrics are computed -for all images of paired models, that is those which only differ in the inclusion or -not of the given technique. Then, statistical inference is performed through the -percentile bootstrap to obtain robust estimates of the differences in the metrics and -confidence intervals. The script plots the summary for all techniques. -""" -print("Imports...", end="") -from argparse import ArgumentParser -import yaml -import numpy as np -import pandas as pd -import seaborn as sns -from scipy.special import comb -from scipy.stats import trim_mean -from tqdm import tqdm -from collections import OrderedDict -from pathlib import Path -import matplotlib.pyplot as plt -import matplotlib.patches as mpatches -import matplotlib.transforms as transforms - - -# ----------------------- -# ----- Constants ----- -# ----------------------- - -dict_metrics = { - "names": { - "tpr": "TPR, Recall, Sensitivity", - "tnr": "TNR, Specificity, Selectivity", - "fpr": "FPR", - "fpt": "False positives relative to image size", - "fnr": "FNR, Miss rate", - "fnt": "False negatives relative to image size", - "mpr": "May positive rate (MPR)", - "mnr": "May negative rate (MNR)", - "accuracy": "Accuracy (ignoring may)", - "error": "Error", - "f05": "F05 score", - "precision": "Precision", - "edge_coherence": "Edge coherence", - "accuracy_must_may": "Accuracy (ignoring cannot)", - }, - "key_metrics": ["error", "f05", "edge_coherence"], -} - -dict_techniques = OrderedDict( - [ - ("pseudo", "Pseudo labels"), - ("depth", "Depth (D)"), - ("seg", "Seg. 
(S)"), - ("spade", "SPADE"), - ("dada_seg", "DADA (S)"), - ("dada_masker", "DADA (M)"), - ] -) - -# Model features -model_feats = [ - "masker", - "seg", - "depth", - "dada_seg", - "dada_masker", - "spade", - "pseudo", - "ground", - "instagan", -] - -# Colors -crest = sns.color_palette("crest", as_cmap=False, n_colors=7) -palette_metrics = [crest[0], crest[3], crest[6]] -sns.palplot(palette_metrics) - -# Markers -dict_markers = {"error": "o", "f05": "s", "edge_coherence": "^"} - - -def parsed_args(): - """ - Parse and returns command-line args - - Returns: - argparse.Namespace: the parsed arguments - """ - parser = ArgumentParser() - parser.add_argument( - "--input_csv", - default="ablations_metrics_20210311.csv", - type=str, - help="CSV containing the results of the ablation study", - ) - parser.add_argument( - "--output_dir", - default=None, - type=str, - help="Output directory", - ) - parser.add_argument( - "--dpi", - default=200, - type=int, - help="DPI for the output images", - ) - parser.add_argument( - "--n_bs", - default=1e6, - type=int, - help="Number of bootrstrap samples", - ) - parser.add_argument( - "--alpha", - default=0.99, - type=float, - help="Confidence level", - ) - parser.add_argument( - "--bs_seed", - default=17, - type=int, - help="Bootstrap random seed, for reproducibility", - ) - - return parser.parse_args() - - -def trim_mean_wrapper(a): - return trim_mean(a, proportiontocut=0.2) - - -def find_model_pairs(technique, model_feats): - model_pairs = [] - for mi in df.loc[df[technique]].model_feats.unique(): - for mj in df.model_feats.unique(): - if mj == mi: - continue - - if df.loc[df.model_feats == mj, technique].unique()[0]: - continue - - is_pair = True - for f in model_feats: - if f == technique: - continue - elif ( - df.loc[df.model_feats == mj, f].unique()[0] - != df.loc[df.model_feats == mi, f].unique()[0] - ): - is_pair = False - break - else: - pass - if is_pair: - model_pairs.append((mi, mj)) - break - return model_pairs - - -if __name__ == "__main__": - # ----------------------------- - # ----- Parse arguments ----- - # ----------------------------- - args = parsed_args() - print("Args:\n" + "\n".join([f" {k:20}: {v}" for k, v in vars(args).items()])) - - # Determine output dir - if args.output_dir is None: - output_dir = Path(os.environ["SLURM_TMPDIR"]) - else: - output_dir = Path(args.output_dir) - if not output_dir.exists(): - output_dir.mkdir(parents=True, exist_ok=False) - - # Store args - output_yml = output_dir / "bootstrap_summary.yml" - with open(output_yml, "w") as f: - yaml.dump(vars(args), f) - - # Read CSV - df = pd.read_csv(args.input_csv, index_col="model_img_idx") - - # Build data set - dfbs = pd.DataFrame(columns=["diff", "technique", "metric"]) - for technique in model_feats: - - # Get pairs - model_pairs = find_model_pairs(technique, model_feats) - - # Compute differences - for m_with, m_without in model_pairs: - df_with = df.loc[df.model_feats == m_with] - df_without = df.loc[df.model_feats == m_without] - for metric in dict_metrics["key_metrics"]: - diff = ( - df_with.sort_values(by="img_idx")[metric].values - - df_without.sort_values(by="img_idx")[metric].values - ) - dfm = pd.DataFrame.from_dict( - {"metric": metric, "technique": technique, "diff": diff} - ) - dfbs = dfbs.append(dfm, ignore_index=True) - - ### Plot - - # Set up plot - sns.reset_orig() - sns.set(style="whitegrid") - plt.rcParams.update({"font.family": "serif"}) - plt.rcParams.update( - { - "font.serif": [ - "Computer Modern Roman", - "Times New Roman", - "Utopia", - 
"New Century Schoolbook", - "Century Schoolbook L", - "ITC Bookman", - "Bookman", - "Times", - "Palatino", - "Charter", - "serif" "Bitstream Vera Serif", - "DejaVu Serif", - ] - } - ) - - fig, axes = plt.subplots( - nrows=1, ncols=3, sharey=True, dpi=args.dpi, figsize=(9, 3) - ) - - metrics = ["error", "f05", "edge_coherence"] - dict_ci = {m: {} for m in metrics} - - for idx, metric in enumerate(dict_metrics["key_metrics"]): - - ax = sns.pointplot( - ax=axes[idx], - data=dfbs.loc[dfbs.metric.isin(["error", "f05", "edge_coherence"])], - order=dict_techniques.keys(), - x="diff", - y="technique", - hue="metric", - hue_order=[metric], - markers=dict_markers[metric], - palette=[palette_metrics[idx]], - errwidth=1.5, - scale=0.6, - join=False, - estimator=trim_mean_wrapper, - ci=int(args.alpha * 100), - n_boot=args.n_bs, - seed=args.bs_seed, - ) - - # Retrieve confidence intervals and update results dictionary - for line, technique in zip(ax.lines, dict_techniques.keys()): - dict_ci[metric].update( - { - technique: { - "20_trimmed_mean": float( - trim_mean_wrapper( - dfbs.loc[ - (dfbs.technique == technique) - & (dfbs.metric == metrics[idx]), - "diff", - ].values - ) - ), - "ci_left": float(line.get_xdata()[0]), - "ci_right": float(line.get_xdata()[1]), - } - } - ) - - leg_handles, leg_labels = ax.get_legend_handles_labels() - - # Change spines - sns.despine(left=True, bottom=True) - - # Set Y-label - ax.set_ylabel(None) - - # Y-tick labels - ax.set_yticklabels(list(dict_techniques.values()), fontsize="medium") - - # Set X-label - ax.set_xlabel(None) - - # X-ticks - xticks = ax.get_xticks() - xticklabels = xticks - ax.set_xticks(xticks) - ax.set_xticklabels(xticklabels, fontsize="small") - - # Y-lim - display2data = ax.transData.inverted() - ax2display = ax.transAxes - _, y_bottom = display2data.transform(ax.transAxes.transform((0.0, 0.02))) - _, y_top = display2data.transform(ax.transAxes.transform((0.0, 0.98))) - ax.set_ylim(bottom=y_bottom, top=y_top) - - # Draw line at H0 - y = np.arange(ax.get_ylim()[1], ax.get_ylim()[0], 0.1) - x = 0.0 * np.ones(y.shape[0]) - ax.plot(x, y, linestyle=":", linewidth=1.5, color="black") - - # Draw gray area - xlim = ax.get_xlim() - ylim = ax.get_ylim() - if metric == "error": - x0 = xlim[0] - width = np.abs(x0) - else: - x0 = 0.0 - width = np.abs(xlim[1]) - trans = transforms.blended_transform_factory(ax.transData, ax.transAxes) - rect = mpatches.Rectangle( - xy=(x0, 0.0), - width=width, - height=1, - transform=trans, - linewidth=0.0, - edgecolor="none", - facecolor="gray", - alpha=0.05, - ) - ax.add_patch(rect) - - # Legend - leg_handles, leg_labels = ax.get_legend_handles_labels() - leg_labels = [dict_metrics["names"][metric] for metric in leg_labels] - leg = ax.legend( - handles=leg_handles, - labels=leg_labels, - loc="center", - title="", - bbox_to_anchor=(-0.2, 1.05, 1.0, 0.0), - framealpha=1.0, - frameon=False, - handletextpad=-0.2, - ) - - # Set X-label (title) │ - fig.suptitle( - "20 % trimmed mean difference and bootstrapped confidence intervals", - y=0.0, - fontsize="medium", - ) - - # Save figure - output_fig = output_dir / "bootstrap_summary.png" - fig.savefig(output_fig, dpi=fig.dpi, bbox_inches="tight") - - # Store results - output_results = output_dir / "bootstrap_summary_results.yml" - with open(output_results, "w") as f: - yaml.dump(dict_ci, f) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/utils/functions.py 
b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/utils/functions.py deleted file mode 100644 index 590a6c11cea222ac9096b19f0e3dfe1b71b6c10b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/utils/functions.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def prob_check(tensor, eps=1e-10): - assert not torch.isnan(tensor).any(), ( - "Nan in a probability tensor." - ) - # Add the eps here to prevent errors introduced by precision - assert tensor.le(1.0 + eps).all() and tensor.ge(0.0 - eps).all(), ( - "Incorrect values in a probability tensor" - ", 0.0 <= tensor <= 1.0" - ) - - -def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - Implementing exclusive cumprod. - There is cumprod in pytorch, however there is no exclusive mode. - cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - exclusive means - cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - """ - tensor_size = list(tensor.size()) - tensor_size[dim] = 1 - return_tensor = safe_cumprod( - torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim), - dim=dim, - eps=eps, - ) - - if dim == 0: - return return_tensor[:-1] - elif dim == 1: - return return_tensor[:, :-1] - elif dim == 2: - return return_tensor[:, :, :-1] - else: - raise RuntimeError( - "Cumprod on dimension 3 and more is not implemented" - ) - - -def safe_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - An implementation of cumprod to prevent precision issue. - cumprod(x) - = [x1, x1x2, x1x2x3, ....] - = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...] - = exp(cumsum(log(x))) - """ - - if (tensor + eps < 0).any().item(): - raise RuntimeError( - "Safe cumprod can only take non-negative tensors as input." - "Consider use torch.cumprod if you want to calculate negative values." 
- ) - - log_tensor = torch.log(tensor + eps) - cumsum_log_tensor = torch.cumsum(log_tensor, dim) - exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor) - return exp_cumsum_log_tensor - - -def moving_sum(x, start_idx: int, end_idx: int): - """ - From MONOTONIC CHUNKWISE ATTENTION - https://arxiv.org/pdf/1712.05382.pdf - Equation (18) - - x = [x_1, x_2, ..., x_N] - MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m - for n in {1, 2, 3, ..., N} - - x : src_len, batch_size - start_idx : start idx - end_idx : end idx - - Example - src_len = 5 - batch_size = 3 - x = - [[ 0, 5, 10], - [ 1, 6, 11], - [ 2, 7, 12], - [ 3, 8, 13], - [ 4, 9, 14]] - - MovingSum(x, 3, 1) = - [[ 0, 5, 10], - [ 1, 11, 21], - [ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39]] - - MovingSum(x, 1, 3) = - [[ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39], - [ 7, 17, 27], - [ 4, 9, 14]] - """ - # TODO: Make dimension configurable - assert start_idx > 0 and end_idx > 0 - batch_size, tgt_len, src_len = x.size() - x = x.view(-1, src_len).unsqueeze(1) - # batch_size, 1, src_len - moving_sum_weight = torch.ones([1, 1, end_idx + start_idx - 1]).type_as(x) - - moving_sum = torch.nn.functional.conv1d( - x, moving_sum_weight, padding=start_idx + end_idx - 1 - ).squeeze(1) - - moving_sum = moving_sum[:, end_idx:-start_idx] - - assert src_len == moving_sum.size(1) - assert batch_size * tgt_len == moving_sum.size(0) - - moving_sum = moving_sum.view(batch_size, tgt_len, src_len) - - return moving_sum diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py deleted file mode 100644 index 40fa9aecdf9108e095feb3661236453c0f7ed7c4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import argparse -import pandas as pd -import sys - - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - - -def check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train={}): - # src, tgt = direction.split('-') - print(f'check training data for {direction} in {src_path} and {tgt_path}') - size = 0 - overlapped_size_counted_dup = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size, overlapped_size_counted_dup - - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - overlapped_size_counted_dup += 1 - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - overlapped_size_counted_dup += 1 - print(f'{direction}: size={size}, overlapped={overlapped_size_counted_dup}') - return mess_up_train, size, overlapped_size_counted_dup - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - # raw_data = '~chau/data-bin/MineBART/multilingual_mined_100M/en_XX/et_EE-en_XX/all.{en_XX, et_EE}' - print(f'checking training data againsts # {len(all_test_data)} sentences') - print(f'example test data: ', [s for i, s in enumerate(all_test_data.keys()) if i < 10]) - for direction in directions: - src, tgt = direction.split('-') - path = f'{raw_data}/en_XX/{direction}/all' - src_path = f'{path}.{src}' - tgt_path = f'{path}.{tgt}' - print(f'checking {src_path} {tgt_path}') - _, size, overlapped_size_counted_dup = check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train) - data_sizes[direction] = (size, overlapped_size_counted_dup) - return mess_up_train, data_sizes - - - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--folder", type=str, required=True, - help="the data folder ") - parser.add_argument("--test-data", type=str, required=True, - help="the test data folder ") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - directions = args.directions.split(',') - directions = sorted(set(directions)) - - results = [] - # print(f'checking where {args.split} split data are in training') - # print(f'direction\tcommon_count\tsrc common\ttgt common\tfrom_size\tto_size') - raw_data = args.folder - all_test_data, test_data = get_all_test_data(args.test_data, directions, split='test') - mess_up_train, data_sizes = check_train_all(raw_data, directions, 
all_test_data) - print(data_sizes) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py deleted file mode 100644 index 1122c88c1964d8beead63bc8dfe21d41602b83bc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py +++ /dev/null @@ -1,135 +0,0 @@ -""" -Implement unsupervised metric for decoding hyperparameter selection: - $$ alpha * LM_PPL + ViterbitUER(%) * 100 $$ -""" -import argparse -import logging -import math -import sys - -import kenlm -import editdistance -from g2p_en import G2p - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -def get_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("ref_tra", help="reference pseudo labels") - parser.add_argument("hyp_tra", help="decoded pseudo labels to be assess") - parser.add_argument("--kenlm_path", default="/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o5.bin", help="") - parser.add_argument("--uppercase", action="store_true", help="") - parser.add_argument("--skipwords", default="", help="") - parser.add_argument("--gt_tra", default="", help="ground truth pseudo labels for computing oracle WER") - parser.add_argument("--min_vt_uer", default=0.0, type=float) - parser.add_argument("--phonemize", action="store_true", help="phonemize word hypotheses, used when reference is phone transcript") - parser.add_argument("--phonemize_lexicon", default="", type=str, help="use a lexicon for phonemizing") - return parser - -def load_tra(tra_path): - with open(tra_path, "r") as f: - uid_to_tra = {} - for line in f: - toks = line.rstrip().split() - uid, tra = toks[0], " ".join(toks[1:]) - uid_to_tra[uid] = tra - logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}") - return uid_to_tra - -def load_lex(lex_path): - with open(lex_path, "r") as f: - w2p = {} - for line in f: - w, p = line.rstrip().split(None, 1) - w2p[w] = p.split() - return w2p - -def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict): - d_cnt = 0 - w_cnt = 0 - w_cnt_h = 0 - for uid in hyp_uid_to_tra: - ref = ref_uid_to_tra[uid].split() - if g2p_dict is not None: - hyp = [] - for word in hyp_uid_to_tra[uid].split(): - if word in g2p_dict: - hyp = hyp + g2p_dict[word] - else: - logger.warning(f"{word} not in g2p_dict") - elif g2p is not None: - hyp = g2p(hyp_uid_to_tra[uid]) - hyp = [p for p in hyp if p != "'" and p != " "] - hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp] - else: - hyp = hyp_uid_to_tra[uid].split() - logger.debug(( - f"======================\n" - f"HYP: {' '.join(hyp)}\n" - f"REF: {' '.join(ref)}" - )) - d_cnt += editdistance.eval(ref, hyp) - w_cnt += len(ref) - w_cnt_h += len(hyp) - wer = float(d_cnt) / w_cnt - logger.debug(( - f"wer = {wer*100:.2f}%; num. of ref words = {w_cnt}; " - f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}" - )) - return wer - -def compute_lm_ppl(hyp_uid_to_tra, score_fn): - lm_score = 0. 
- w_cnt = 0 - for hyp in hyp_uid_to_tra.values(): - cur_score = score_fn(hyp) - cur_cnt = len(hyp.split()) + 1 # plus one for - lm_score += cur_score - w_cnt += cur_cnt - logger.debug(( - f"======================\n" - f"score sum/avg = {cur_score:.2f}/{cur_score/cur_cnt:.2f}\n" - f"hyp = {hyp}" - )) - lm_ppl = math.pow(10, -lm_score / w_cnt) - logger.debug(f"lm ppl = {lm_ppl:.2f}; num. of words = {w_cnt}") - return lm_ppl - -def main(): - args = get_parser().parse_args() - logger.debug(f"Args: {args}") - - ref_uid_to_tra = load_tra(args.ref_tra) - hyp_uid_to_tra = load_tra(args.hyp_tra) - assert not bool(set(hyp_uid_to_tra.keys()) - set(ref_uid_to_tra.keys())) - - lm = kenlm.Model(args.kenlm_path) - skipwords = set(args.skipwords.split(",")) - def compute_lm_score(s): - s = " ".join(w for w in s.split() if w not in skipwords) - s = s.upper() if args.uppercase else s - return lm.score(s) - - g2p, g2p_dict = None, None - if args.phonemize: - if args.phonemize_lexicon: - g2p_dict = load_lex(args.phonemize_lexicon) - else: - g2p = G2p() - - wer = compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict) - lm_ppl = compute_lm_ppl(hyp_uid_to_tra, compute_lm_score) - - gt_wer = -math.inf - if args.gt_tra: - gt_uid_to_tra = load_tra(args.gt_tra) - gt_wer = compute_wer(gt_uid_to_tra, hyp_uid_to_tra, None, None) - - score = math.log(lm_ppl) * max(wer, args.min_vt_uer) - logging.info(f"{args.hyp_tra}: score={score:.4f}; wer={wer*100:.2f}%; lm_ppl={lm_ppl:.4f}; gt_wer={gt_wer*100:.2f}%") - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/model_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/model_utils.py deleted file mode 100644 index 732d66b1d5f695151c26d29eb7f6b53179c269f1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/model_utils.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List, Optional - -import torch -from torch import Tensor - - -@torch.jit.script -def script_skip_tensor_list(x: List[Tensor], mask): - res = [xi[mask] if xi.size(0) == mask.size(0) else xi[:, mask] for xi in x] - outputs = [] - for i, t in enumerate(res): - if t.numel() != 0: - outputs.append(t) - else: - outputs.append(x[i]) - return outputs - - -@torch.jit.script -def script_skip_tensor(x: Tensor, mask): - # None case - if x.size(0) == 0: - return x - res = x[mask] if x.size(0) == mask.size(0) else x[:, mask] - if res.numel() == 0: - return x - else: - return res - - -@torch.jit.script -def expand_2d_or_3d_tensor(x, trg_dim: int, padding_idx: int): - """ - Expand 2D/3D tensor on dim=1 - """ - if x is None: - return None - - assert x.dim() == 2 or x.dim() == 3 - assert trg_dim >= x.size(1), (trg_dim, x.size()) - if trg_dim == x.size(1): - return x - - dims = [x.size(0), trg_dim - x.size(1)] - if x.dim() == 3: - dims.append(x.size(2)) - x = torch.cat([x, torch.zeros(dims).to(x).fill_(padding_idx)], 1) - - return x - - -@torch.jit.script -def coalesce(x: Optional[Tensor], y: Tensor) -> Tensor: - return x if x is not None else y - - -@torch.jit.script -def fill_tensors( - x: Optional[Tensor], mask, y: Optional[Tensor], padding_idx: int -) -> Optional[Tensor]: - """ - Filling tensor x with y at masked positions (dim=0). 
- """ - if x is None or x.size()[0] == 0 or y is None: - return x - assert x.dim() == y.dim() and mask.size(0) == x.size(0) - assert x.dim() == 2 or (x.dim() == 3 and x.size(2) == y.size(2)) - - n_selected = mask.sum() - if n_selected == 0: - return x - assert n_selected == y.size(0) - if n_selected == x.size(0): - return y - - if x.size(1) < y.size(1): - x = expand_2d_or_3d_tensor(x, y.size(1), padding_idx) - x[mask] = y - elif x.size(1) > y.size(1): - x[mask] = torch.tensor(padding_idx).type_as(x) - if x.dim() == 2: - x[mask, : y.size(1)] = y - else: - x[mask, : y.size(1), :] = y - else: - x[mask] = y - return x diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/strip_token_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/strip_token_dataset.py deleted file mode 100644 index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/strip_token_dataset.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class StripTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, id_to_strip): - super().__init__(dataset) - self.id_to_strip = id_to_strip - - def __getitem__(self, index): - item = self.dataset[index] - while len(item) > 0 and item[-1] == self.id_to_strip: - item = item[:-1] - while len(item) > 0 and item[0] == self.id_to_strip: - item = item[1:] - return item diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/training_prepare.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/training_prepare.py deleted file mode 100644 index da4b30622d096fe636a0db358c43336eeef4d959..0000000000000000000000000000000000000000 --- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/training_prepare.py +++ /dev/null @@ -1,73 +0,0 @@ -import random -import uuid -import numpy -import os -import random -import fnmatch - -from tqdm.auto import tqdm -from scipy.io import wavfile - -from bark.generation import load_model, SAMPLE_RATE -from bark.api import semantic_to_waveform - -from bark import text_to_semantic -from bark.generation import load_model - -from training.data import load_books, random_split_chunk - -output = 'training/data/output' -output_wav = 'training/data/output_wav' - - -def prepare_semantics_from_text(num_generations): - loaded_data = load_books(True) - - print('Loading semantics model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='text') - - if not os.path.isdir(output): - os.mkdir(output) - - loop = 1 - while 1: - filename = uuid.uuid4().hex + '.npy' - file_name = os.path.join(output, filename) - text = '' - while not len(text) > 0: - text = random_split_chunk(loaded_data) # Obtain a short chunk of text - text = text.strip() - print(f'{loop} Generating semantics for text:', text) - loop+=1 - semantics = text_to_semantic(text, temp=round(random.uniform(0.6, 0.8), ndigits=2)) - numpy.save(file_name, semantics) - - -def prepare_wavs_from_semantics(): - if not os.path.isdir(output): - raise Exception('No \'output\' folder, make sure you run create_data.py first!') - if not os.path.isdir(output_wav): - os.mkdir(output_wav) - - print('Loading coarse model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='coarse') - print('Loading fine model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='fine') - 
- files = fnmatch.filter(os.listdir(output), '*.npy') - current = 1 - total = len(files) - - for i, f in tqdm(enumerate(files), total=len(files)): - real_name = '.'.join(f.split('.')[:-1]) # Cut off the extension - file_name = os.path.join(output, f) - out_file = os.path.join(output_wav, f'{real_name}.wav') - if not os.path.isfile(out_file) and os.path.isfile(file_name): # Don't process files that have already been processed, to be able to continue previous generations - print(f'Processing ({i+1}/{total}) -> {f}') - wav = semantic_to_waveform(numpy.load(file_name), temp=round(random.uniform(0.6, 0.8), ndigits=2)) - # Change to PCM16 - # wav = (wav * 32767).astype(np.int16) - wavfile.write(out_file, SAMPLE_RATE, wav) - - print('Done!') - diff --git a/spaces/Omnibus/idefics_playground/README.md b/spaces/Omnibus/idefics_playground/README.md deleted file mode 100644 index b887f118a4c82f5dbc25279e8371c8a2c022f5d0..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/idefics_playground/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: IDEFICS Playground -emoji: 🐨 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.40.1 -app_file: app_dialogue.py -pinned: True -duplicated_from: HuggingFaceM4/idefics_playground ---- diff --git a/spaces/Omnibus/summarize-long-text/utils.py b/spaces/Omnibus/summarize-long-text/utils.py deleted file mode 100644 index c6b307fae3226c46edd3d0bf40f2890a8dda0641..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/summarize-long-text/utils.py +++ /dev/null @@ -1,77 +0,0 @@ -""" - utils.py - Utility functions for the project. -""" - -import logging -import re -from pathlib import Path - -logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - level=logging.INFO, -) -import torch -from natsort import natsorted - - -def validate_pytorch2(torch_version: str = None): - torch_version = torch.__version__ if torch_version is None else torch_version - - pattern = r"^2\.\d+(\.\d+)*" - - return True if re.match(pattern, torch_version) else False - - -def truncate_word_count(text, max_words=512): - """ - truncate_word_count - a helper function for the gradio module - Parameters - ---------- - text : str, required, the text to be processed - max_words : int, optional, the maximum number of words, default=512 - Returns - ------- - dict, the text and whether it was truncated - """ - # split on whitespace with regex - words = re.split(r"\s+", text) - processed = {} - if len(words) > max_words: - processed["was_truncated"] = True - processed["truncated_text"] = " ".join(words[:max_words]) - else: - processed["was_truncated"] = False - processed["truncated_text"] = text - return processed - - -def load_examples(src): - """ - load_examples - a helper function for the gradio module to load examples - Returns: - list of str, the examples - """ - src = Path(src) - src.mkdir(exist_ok=True) - examples = [f for f in src.glob("*.txt")] - examples = natsorted(examples) - # load the examples into a list - text_examples = [] - for example in examples: - with open(example, "r") as f: - text = f.read() - text_examples.append([text, "large", 2, 512, 0.7, 3.5, 3]) - - return text_examples - - -def load_example_filenames(example_path: str or Path): - """ - load_example_filenames - a helper function for the gradio module to load examples - Returns: - dict, the examples (filename:full path) - """ - example_path = Path(example_path) - # load the examples into a list - examples = {f.name: f for f in example_path.glob("*.txt")} - return examples 
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/data/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/data/__init__.py deleted file mode 100644 index f3b008fb13c5e8a84b1b785056e8c4f5226dc976..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ - -from .dataset import Dataset, TensorDataset, ConcatDataset -from .dataloader import DataLoader diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/base.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/base.py deleted file mode 100644 index a50c3fc7753a0bba64a5ab8c1ed64ff97e62313f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/base.py +++ /dev/null @@ -1,80 +0,0 @@ -import abc -from typing import Tuple, List - -import torch -import torch.nn as nn - -from saicinpainting.training.modules.depthwise_sep_conv import DepthWiseSeperableConv -from saicinpainting.training.modules.multidilated_conv import MultidilatedConv - - -class BaseDiscriminator(nn.Module): - @abc.abstractmethod - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, List[torch.Tensor]]: - """ - Predict scores and get intermediate activations. Useful for feature matching loss - :return tuple (scores, list of intermediate activations) - """ - raise NotImplemented() - - -def get_conv_block_ctor(kind='default'): - if not isinstance(kind, str): - return kind - if kind == 'default': - return nn.Conv2d - if kind == 'depthwise': - return DepthWiseSeperableConv - if kind == 'multidilated': - return MultidilatedConv - raise ValueError(f'Unknown convolutional block kind {kind}') - - -def get_norm_layer(kind='bn'): - if not isinstance(kind, str): - return kind - if kind == 'bn': - return nn.BatchNorm2d - if kind == 'in': - return nn.InstanceNorm2d - raise ValueError(f'Unknown norm block kind {kind}') - - -def get_activation(kind='tanh'): - if kind == 'tanh': - return nn.Tanh() - if kind == 'sigmoid': - return nn.Sigmoid() - if kind is False: - return nn.Identity() - raise ValueError(f'Unknown activation kind {kind}') - - -class SimpleMultiStepGenerator(nn.Module): - def __init__(self, steps: List[nn.Module]): - super().__init__() - self.steps = nn.ModuleList(steps) - - def forward(self, x): - cur_in = x - outs = [] - for step in self.steps: - cur_out = step(cur_in) - outs.append(cur_out) - cur_in = torch.cat((cur_in, cur_out), dim=1) - return torch.cat(outs[::-1], dim=1) - -def deconv_factory(kind, ngf, mult, norm_layer, activation, max_features): - if kind == 'convtranspose': - return [nn.ConvTranspose2d(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=2, padding=1, output_padding=1), - norm_layer(min(max_features, int(ngf * mult / 2))), activation] - elif kind == 'bilinear': - return [nn.Upsample(scale_factor=2, mode='bilinear'), - DepthWiseSeperableConv(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=1, padding=1), - norm_layer(min(max_features, int(ngf * mult / 2))), activation] - else: - raise Exception(f"Invalid deconv kind: {kind}") \ No newline at end of file diff --git a/spaces/OpenGVLab/VideoChatGPT/models/videochat.py b/spaces/OpenGVLab/VideoChatGPT/models/videochat.py deleted file mode 100644 index 
9726c5e54262e0979c403000e9f3d4bd1638e8de..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/VideoChatGPT/models/videochat.py +++ /dev/null @@ -1,222 +0,0 @@ -import os -import psutil -import random -import logging - -import torch -from torch.cuda.amp import autocast as autocast -import torch.nn as nn - -from .blip2 import Blip2Base, disabled_train -from .modeling_llama import LlamaForCausalLM -from transformers import LlamaTokenizer, LlamaConfig - - - - -class VideoChat(Blip2Base): - """ - VideoChat model. - """ - def __init__(self, config): - super().__init__() - - vit_model = config.get("vit_model", "eva_clip_g") - vit_model_path = config.get("vit_model_path", None) - q_former_model_path = config.get("q_former_model_path", None) - llama_model_path = config.get("llama_model_path") - videochat_model_path = config.get("videochat_model_path", "") - img_size = config.get("img_size") - - drop_path_rate = config.get("drop_path_rate", 0) - use_grad_checkpoint = config.get("use_grad_checkpoint", False) - vit_precision = config.get("vit_precision", "fp16") - freeze_vit = config.get("freeze_vit", True) - freeze_qformer = config.get("freeze_qformer", True) - low_resource = config.get("low_resource", False) # use 8 bit and put vit in cpu - max_txt_len = config.get("max_txt_len", 32) - - # uniformerv2 - freeze_mhra = config.get("freeze_mhra", False) - temporal_downsample = config.get("temporal_downsample", True) - no_lmhra = config.get("no_lmhra", False) - double_lmhra = config.get("double_lmhra", False) - lmhra_reduction = config.get("lmhra_reduction", 2.0) - gmhra_layers = config.get("gmhra_layers", 8) - gmhra_drop_path_rate = config.get("gmhra_drop_path_rate", 0.) - gmhra_dropout = config.get("gmhra_dropout", 0.5) - # qformer - num_query_token = config.get("num_query_token") - extra_num_query_token = config.get("extra_num_query_token", 64) - - self.tokenizer = self.init_tokenizer() - self.low_resource = low_resource - self.llama_model = LlamaForCausalLM.from_pretrained( - llama_model_path, - torch_dtype=torch.float16, - use_auth_token=os.environ["HF_TOKEN"], - load_in_8bit=True, - device_map="auto" - ) - self.vit_precision = vit_precision - print(f'Loading VIT. 
Use fp16: {vit_precision}') - self.visual_encoder, self.ln_vision = self.init_vision_encoder( - vit_model, img_size, drop_path_rate, - use_grad_checkpoint, vit_precision, vit_model_path, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - lmhra_reduction=lmhra_reduction, - gmhra_layers=gmhra_layers, - gmhra_drop_path_rate=gmhra_drop_path_rate, - gmhra_dropout=gmhra_dropout, - ) - if freeze_vit: - print("freeze vision encoder") - if not freeze_mhra: - open_list = [] - for name, param in self.visual_encoder.named_parameters(): - if 'mhra' not in name: - param.requires_grad = False - else: - open_list.append(name) - print(f"open module: {open_list}") - print("open ln_vision") - else: - for name, param in self.visual_encoder.named_parameters(): - param.requires_grad = False - self.visual_encoder = self.visual_encoder.eval() - self.visual_encoder.train = disabled_train - for name, param in self.ln_vision.named_parameters(): - param.requires_grad = False - self.ln_vision = self.ln_vision.eval() - self.ln_vision.train = disabled_train - print('Loading VIT Done') - - print('Loading Q-Former') - self.Qformer, self.query_tokens = self.init_Qformer( - num_query_token, self.visual_encoder.num_features, - ) - self.Qformer.cls = None - self.Qformer.bert.embeddings.word_embeddings = None - self.Qformer.bert.embeddings.position_embeddings = None - for layer in self.Qformer.bert.encoder.layer: - layer.output = None - layer.intermediate = None - self.load_from_pretrained(model_path=q_former_model_path) - print(f"Add extra {extra_num_query_token} tokens in QFormer") - self.extra_query_tokens = nn.Parameter( - torch.zeros(1, extra_num_query_token, self.query_tokens.shape[-1]) - ) - - if freeze_qformer: - print("freeze Qformer") - for name, param in self.Qformer.named_parameters(): - param.requires_grad = False - self.Qformer = self.Qformer.eval() - self.Qformer.train = disabled_train - self.query_tokens.requires_grad = False - print('Loading Q-Former Done') - - print('Loading LLAMA') - self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model_path, use_fast=False, use_auth_token=os.environ["HF_TOKEN"]) - self.llama_tokenizer.pad_token = self.llama_tokenizer.eos_token - - - - print(u'当前进程的内存使用:%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - - if self.low_resource: - self.llama_model = LlamaForCausalLM.from_pretrained( - llama_model_path, - torch_dtype=torch.float16, - load_in_8bit=True, - device_map="auto", - use_auth_token=os.environ["HF_TOKEN"], - ) - else: - ''' - self.llama_model = LlamaForCausalLM.from_pretrained( - llama_model_path, - torch_dtype=torch.float16, - use_auth_token=os.environ["HF_TOKEN"], - load_in_8bit=True, - device_map="auto" - ) - ''' - - print("freeze LLAMA") - for name, param in self.llama_model.named_parameters(): - param.requires_grad = False - print('Loading LLAMA Done') - print(u'当前进程的内存使用:%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - self.llama_proj = nn.Linear( - self.Qformer.config.hidden_size, self.llama_model.config.hidden_size - ) - self.max_txt_len = max_txt_len - print(u'当前进程的内存使用:%.4f GB' % 
(psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - # load weights of VideoChat - if videochat_model_path: - print(u'当前进程的内存使用:%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - print(f"Load VideoChat from: {videochat_model_path}") - ckpt = torch.load(videochat_model_path, map_location="cpu") - print(u'ckpt load success.当前进程的内存使用:%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - msg = self.load_state_dict(ckpt['model'], strict=False) - print(msg) - print(u'当前进程的内存使用:%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024) ) - info = psutil.virtual_memory() - print( u'电脑总内存:%.4f GB' % (info.total / 1024 / 1024 / 1024) ) - print(u'当前使用的总内存占比:',info.percent) - print(u'cpu个数:',psutil.cpu_count()) - def vit_to_cpu(self): - self.ln_vision.to("cpu") - self.ln_vision.float() - self.visual_encoder.to("cpu") - self.visual_encoder.float() - - def encode_img(self, image): - device = image.device - if self.low_resource: - self.vit_to_cpu() - image = image.to("cpu") - - with self.maybe_autocast(): - T = image.shape[1] - # use_image = True if T == 1 else False - image = image.permute(0, 2, 1, 3, 4) # [B,T,C,H,W] -> [B,C,T,H,W] - - image_embeds = self.ln_vision(self.visual_encoder(image)).to(device) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(device) - - query_tokens = torch.cat([self.query_tokens, self.extra_query_tokens], dim=1) - query_tokens = query_tokens.expand(image_embeds.shape[0], -1, -1) - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_llama = self.llama_proj(query_output.last_hidden_state) - atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(image.device) - return inputs_llama, atts_llama diff --git a/spaces/Owechada/roopfaceswapr/roop/core.py b/spaces/Owechada/roopfaceswapr/roop/core.py deleted file mode 100644 index 7d9a5001c16fd09f875e506defa3962bc73c5f85..0000000000000000000000000000000000000000 --- a/spaces/Owechada/roopfaceswapr/roop/core.py +++ /dev/null @@ -1,217 +0,0 @@ -#!/usr/bin/env python3 - -import os -import sys -# single thread doubles cuda performance - needs to be set before torch import -if any(arg.startswith('--execution-provider') for arg in sys.argv): - os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import torch -import onnxruntime -import tensorflow - -import roop.globals -import roop.metadata -import roop.ui as ui -from roop.predicter import predict_image, predict_video -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, 
normalize_output_path - -if 'ROCMExecutionProvider' in roop.globals.execution_providers: - del torch - -warnings.filterwarnings('ignore', category=FutureWarning, module='insightface') -warnings.filterwarnings('ignore', category=UserWarning, module='torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100)) - program.add_argument('-s', '--source', help='select an source image', dest='source_path') - program.add_argument('-t', '--target', help='select an target image or video', dest='target_path') - program.add_argument('-o', '--output', help='select output file or directory', dest='output_path') - program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+') - program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False) - program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True) - program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False) - program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False) - program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9']) - program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]') - program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory()) - program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+') - program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads()) - program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}') - - args = program.parse_args() - - roop.globals.source_path = args.source_path - roop.globals.target_path = args.target_path - roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path) - roop.globals.frame_processors = args.frame_processor - roop.globals.headless = args.source_path or args.target_path or args.output_path - roop.globals.keep_fps = args.keep_fps - roop.globals.keep_audio = args.keep_audio - roop.globals.keep_frames = args.keep_frames - roop.globals.many_faces = args.many_faces - roop.globals.video_encoder = args.video_encoder - roop.globals.video_quality = args.video_quality - roop.globals.max_memory = args.max_memory - roop.globals.execution_providers = decode_execution_providers(args.execution_provider) - roop.globals.execution_threads = args.execution_threads - - -def encode_execution_providers(execution_providers: List[str]) -> List[str]: - return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers] - - -def decode_execution_providers(execution_providers: List[str]) -> List[str]: - return [provider for provider, encoded_execution_provider 
in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers())) - if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)] - - -def suggest_max_memory() -> int: - if platform.system().lower() == 'darwin': - return 4 - return 16 - - -def suggest_execution_providers() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_threads() -> int: - if 'DmlExecutionProvider' in roop.globals.execution_providers: - return 1 - if 'ROCMExecutionProvider' in roop.globals.execution_providers: - return 1 - return 8 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024) - ]) - # limit memory usage - if roop.globals.max_memory: - memory = roop.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = roop.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 - kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def release_resources() -> None: - if 'CUDAExecutionProvider' in roop.globals.execution_providers: - torch.cuda.empty_cache() - - -def pre_check() -> bool: - if sys.version_info < (3, 9): - update_status('Python version is not supported - please upgrade to 3.9 or higher.') - return False - if not shutil.which('ffmpeg'): - update_status('ffmpeg is not installed.') - return False - return True - - -def update_status(message: str, scope: str = 'ROOP.CORE') -> None: - print(f'[{scope}] {message}') - if not roop.globals.headless: - ui.update_status(message) - - -def start() -> None: - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_start(): - return - # process image to image - if has_image_extension(roop.globals.target_path): - if predict_image(roop.globals.target_path): - destroy() - shutil.copy2(roop.globals.target_path, roop.globals.output_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - for frame_processor_name in roop.globals.frame_processors: - if frame_processor_name == frame_processor.frame_name: - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path) - frame_processor.post_process() - release_resources() - if is_image(roop.globals.target_path): - update_status('Processing to image succeed!') - else: - update_status('Processing to image failed!') - return - # process image to videos - if predict_video(roop.globals.target_path): - destroy() - update_status('Creating temp resources...') - create_temp(roop.globals.target_path) - update_status('Extracting frames...') - extract_frames(roop.globals.target_path) - temp_frame_paths = get_temp_frame_paths(roop.globals.target_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_video(roop.globals.source_path, temp_frame_paths) - frame_processor.post_process() - release_resources() - # handles 
fps - if roop.globals.keep_fps: - update_status('Detecting fps...') - fps = detect_fps(roop.globals.target_path) - update_status(f'Creating video with {fps} fps...') - create_video(roop.globals.target_path, fps) - else: - update_status('Creating video with 30.0 fps...') - create_video(roop.globals.target_path) - # handle audio - if roop.globals.keep_audio: - if roop.globals.keep_fps: - update_status('Restoring audio...') - else: - update_status('Restoring audio might cause issues as fps are not kept...') - restore_audio(roop.globals.target_path, roop.globals.output_path) - else: - move_temp(roop.globals.target_path, roop.globals.output_path) - # clean and validate - clean_temp(roop.globals.target_path) - if is_video(roop.globals.target_path): - update_status('Processing to video succeed!') - else: - update_status('Processing to video failed!') - - -def destroy() -> None: - if roop.globals.target_path: - clean_temp(roop.globals.target_path) - quit() - - -def run() -> None: - parse_args() - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_check(): - return - limit_resources() - if roop.globals.headless: - start() - else: - window = ui.init(start, destroy) - window.mainloop() diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/saconv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/saconv.py deleted file mode 100644 index b4ee3978e097fca422805db4e31ae481006d7971..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/saconv.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.cnn import CONV_LAYERS, ConvAWS2d, constant_init -from annotator.uniformer.mmcv.ops.deform_conv import deform_conv2d -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version - - -@CONV_LAYERS.register_module(name='SAC') -class SAConv2d(ConvAWS2d): - """SAC (Switchable Atrous Convolution) - - This is an implementation of SAC in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf). - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the convolving kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - padding_mode (string, optional): ``'zeros'``, ``'reflect'``, - ``'replicate'`` or ``'circular'``. Default: ``'zeros'`` - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If ``True``, adds a learnable bias to the - output. Default: ``True`` - use_deform: If ``True``, replace convolution with deformable - convolution. Default: ``False``. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - use_deform=False): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.use_deform = use_deform - self.switch = nn.Conv2d( - self.in_channels, 1, kernel_size=1, stride=stride, bias=True) - self.weight_diff = nn.Parameter(torch.Tensor(self.weight.size())) - self.pre_context = nn.Conv2d( - self.in_channels, self.in_channels, kernel_size=1, bias=True) - self.post_context = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=1, bias=True) - if self.use_deform: - self.offset_s = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.offset_l = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.init_weights() - - def init_weights(self): - constant_init(self.switch, 0, bias=1) - self.weight_diff.data.zero_() - constant_init(self.pre_context, 0) - constant_init(self.post_context, 0) - if self.use_deform: - constant_init(self.offset_s, 0) - constant_init(self.offset_l, 0) - - def forward(self, x): - # pre-context - avg_x = F.adaptive_avg_pool2d(x, output_size=1) - avg_x = self.pre_context(avg_x) - avg_x = avg_x.expand_as(x) - x = x + avg_x - # switch - avg_x = F.pad(x, pad=(2, 2, 2, 2), mode='reflect') - avg_x = F.avg_pool2d(avg_x, kernel_size=5, stride=1, padding=0) - switch = self.switch(avg_x) - # sac - weight = self._get_weight(self.weight) - zero_bias = torch.zeros( - self.out_channels, device=weight.device, dtype=weight.dtype) - - if self.use_deform: - offset = self.offset_s(avg_x) - out_s = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_s = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_s = super()._conv_forward(x, weight, zero_bias) - else: - out_s = super()._conv_forward(x, weight) - ori_p = self.padding - ori_d = self.dilation - self.padding = tuple(3 * p for p in self.padding) - self.dilation = tuple(3 * d for d in self.dilation) - weight = weight + self.weight_diff - if self.use_deform: - offset = self.offset_l(avg_x) - out_l = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_l = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_l = super()._conv_forward(x, weight, zero_bias) - else: - out_l = super()._conv_forward(x, weight) - - out = switch * out_s + (1 - switch) * out_l - self.padding = ori_p - self.dilation = ori_d - # post-context - avg_x = F.adaptive_avg_pool2d(out, output_size=1) - avg_x = self.post_context(avg_x) - avg_x = avg_x.expand_as(out) - out = out + avg_x - return out diff --git a/spaces/PKUWilliamYang/StyleGANEX/scripts/style_mixing.py b/spaces/PKUWilliamYang/StyleGANEX/scripts/style_mixing.py deleted file mode 100644 index e252b418adb26ac5dc9e30998d44279c2ff60cb7..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/scripts/style_mixing.py +++ /dev/null @@ 
-1,101 +0,0 @@ -import os -from argparse import Namespace - -from tqdm import tqdm -import numpy as np -from PIL import Image -import torch -from torch.utils.data import DataLoader -import sys - -sys.path.append(".") -sys.path.append("..") - -from configs import data_configs -from datasets.inference_dataset import InferenceDataset -from utils.common import tensor2im, log_input_image -from options.test_options import TestOptions -from models.psp import pSp - - -def run(): - test_opts = TestOptions().parse() - - if test_opts.resize_factors is not None: - factors = test_opts.resize_factors.split(',') - assert len(factors) == 1, "When running inference, please provide a single downsampling factor!" - mixed_path_results = os.path.join(test_opts.exp_dir, 'style_mixing', - 'downsampling_{}'.format(test_opts.resize_factors)) - else: - mixed_path_results = os.path.join(test_opts.exp_dir, 'style_mixing') - os.makedirs(mixed_path_results, exist_ok=True) - - # update test options with options used during training - ckpt = torch.load(test_opts.checkpoint_path, map_location='cpu') - opts = ckpt['opts'] - opts.update(vars(test_opts)) - if 'learn_in_w' not in opts: - opts['learn_in_w'] = False - if 'output_size' not in opts: - opts['output_size'] = 1024 - opts = Namespace(**opts) - - net = pSp(opts) - net.eval() - net.cuda() - - print('Loading dataset for {}'.format(opts.dataset_type)) - dataset_args = data_configs.DATASETS[opts.dataset_type] - transforms_dict = dataset_args['transforms'](opts).get_transforms() - dataset = InferenceDataset(root=opts.data_path, - transform=transforms_dict['transform_inference'], - opts=opts) - dataloader = DataLoader(dataset, - batch_size=opts.test_batch_size, - shuffle=False, - num_workers=int(opts.test_workers), - drop_last=True) - - latent_mask = [int(l) for l in opts.latent_mask.split(",")] - if opts.n_images is None: - opts.n_images = len(dataset) - - global_i = 0 - for input_batch in tqdm(dataloader): - if global_i >= opts.n_images: - break - with torch.no_grad(): - input_batch = input_batch.cuda() - for image_idx, input_image in enumerate(input_batch): - # generate random vectors to inject into input image - vecs_to_inject = np.random.randn(opts.n_outputs_to_generate, 512).astype('float32') - multi_modal_outputs = [] - for vec_to_inject in vecs_to_inject: - cur_vec = torch.from_numpy(vec_to_inject).unsqueeze(0).to("cuda") - # get latent vector to inject into our input image - _, latent_to_inject = net(cur_vec, - input_code=True, - return_latents=True) - # get output image with injected style vector - res = net(input_image.unsqueeze(0).to("cuda").float(), - latent_mask=latent_mask, - inject_latent=latent_to_inject, - alpha=opts.mix_alpha, - resize=opts.resize_outputs) - multi_modal_outputs.append(res[0]) - - # visualize multi modal outputs - input_im_path = dataset.paths[global_i] - image = input_batch[image_idx] - input_image = log_input_image(image, opts) - resize_amount = (256, 256) if opts.resize_outputs else (opts.output_size, opts.output_size) - res = np.array(input_image.resize(resize_amount)) - for output in multi_modal_outputs: - output = tensor2im(output) - res = np.concatenate([res, np.array(output.resize(resize_amount))], axis=1) - Image.fromarray(res).save(os.path.join(mixed_path_results, os.path.basename(input_im_path))) - global_i += 1 - - -if __name__ == '__main__': - run() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-67.go 
b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-67.go deleted file mode 100644 index 57a54fa5110d89aabac416a4b30a171399d58f9a..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-67.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/whisper-web/assets/index-9018b2d2.css b/spaces/PeepDaSlan9/whisper-web/assets/index-9018b2d2.css deleted file mode 100644 index 262e0a3226766bcd9c92a9508c68be3b62fcca27..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/whisper-web/assets/index-9018b2d2.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 
0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width: 640px){.container{max-width:640px}}@media (min-width: 768px){.container{max-width:768px}}@media (min-width: 1024px){.container{max-width:1024px}}@media (min-width: 1280px){.container{max-width:1280px}}@media (min-width: 1536px){.container{max-width:1536px}}.static{position:static}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.inset-0{inset:0px}.bottom-4{bottom:1rem}.right-4{right:1rem}.top-0{top:0px}.z-10{z-index:10}.m-2{margin:.5rem}.my-2{margin-top:.5rem;margin-bottom:.5rem}.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-3{margin-bottom:.75rem}.mb-5{margin-bottom:1.25rem}.ml-2{margin-left:.5rem}.ml-4{margin-left:1rem}.mr-2{margin-right:.5rem}.mr-3{margin-right:.75rem}.mr-5{margin-right:1.25rem}.ms-1{-webkit-margin-start:.25rem;margin-inline-start:.25rem}.mt-0{margin-top:0}.mt-0\.5{margin-top:.125rem}.mt-1{margin-top:.25rem}.mt-3{margin-top:.75rem}.mt-4{margin-top:1rem}.block{display:block}.inline{display:inline}.flex{display:flex}.inline-flex{display:inline-flex}.hidden{display:none}.h-1{height:.25rem}.h-14{height:3.5rem}.h-4{height:1rem}.h-7{height:1.75rem}.h-full{height:100%}.max-h-\[20rem\]{max-height:20rem}.min-h-full{min-height:100%}.min-h-screen{min-height:100vh}.w-4{width:1rem}.w-7{width:1.75rem}.w-\[1px\]{width:1px}.w-full{width:100%}.max-w-md{max-width:28rem}.scale-100{--tw-scale-x: 1;--tw-scale-y: 1;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.scale-95{--tw-scale-x: .95;--tw-scale-y: .95;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}@keyframes 
spin{to{transform:rotate(360deg)}}.animate-spin{animation:spin 1s linear infinite}.flex-row{flex-direction:row}.flex-row-reverse{flex-direction:row-reverse}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.space-x-2>:not([hidden])~:not([hidden]){--tw-space-x-reverse: 0;margin-right:calc(.5rem * var(--tw-space-x-reverse));margin-left:calc(.5rem * calc(1 - var(--tw-space-x-reverse)))}.overflow-hidden{overflow:hidden}.overflow-y-auto{overflow-y:auto}.whitespace-nowrap{white-space:nowrap}.rounded-2xl{border-radius:1rem}.rounded-full{border-radius:9999px}.rounded-lg{border-radius:.5rem}.rounded-md{border-radius:.375rem}.border{border-width:1px}.border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.border-gray-400{--tw-border-opacity: 1;border-color:rgb(156 163 175 / var(--tw-border-opacity))}.border-transparent{border-color:transparent}.bg-black{--tw-bg-opacity: 1;background-color:rgb(0 0 0 / var(--tw-bg-opacity))}.bg-blue-500{--tw-bg-opacity: 1;background-color:rgb(59 130 246 / var(--tw-bg-opacity))}.bg-blue-600{--tw-bg-opacity: 1;background-color:rgb(37 99 235 / var(--tw-bg-opacity))}.bg-blue-700{--tw-bg-opacity: 1;background-color:rgb(29 78 216 / var(--tw-bg-opacity))}.bg-gray-200{--tw-bg-opacity: 1;background-color:rgb(229 231 235 / var(--tw-bg-opacity))}.bg-gray-50{--tw-bg-opacity: 1;background-color:rgb(249 250 251 / var(--tw-bg-opacity))}.bg-green-500{--tw-bg-opacity: 1;background-color:rgb(34 197 94 / var(--tw-bg-opacity))}.bg-indigo-100{--tw-bg-opacity: 1;background-color:rgb(224 231 255 / var(--tw-bg-opacity))}.bg-indigo-600{--tw-bg-opacity: 1;background-color:rgb(79 70 229 / var(--tw-bg-opacity))}.bg-red-500{--tw-bg-opacity: 1;background-color:rgb(239 68 68 / var(--tw-bg-opacity))}.bg-slate-200{--tw-bg-opacity: 1;background-color:rgb(226 232 240 / var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.bg-opacity-25{--tw-bg-opacity: .25}.p-2{padding:.5rem}.p-2\.5{padding:.625rem}.p-4{padding:1rem}.p-6{padding:1.5rem}.px-1{padding-left:.25rem;padding-right:.25rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-4{padding-left:1rem;padding-right:1rem}.px-5{padding-left:1.25rem;padding-right:1.25rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.py-2\.5{padding-top:.625rem;padding-bottom:.625rem}.text-left{text-align:left}.text-center{text-align:center}.text-right{text-align:right}.align-middle{vertical-align:middle}.text-5xl{font-size:3rem;line-height:1}.text-lg{font-size:1.125rem;line-height:1.75rem}.text-sm{font-size:.875rem;line-height:1.25rem}.font-extrabold{font-weight:800}.font-medium{font-weight:500}.font-semibold{font-weight:600}.leading-6{line-height:1.5rem}.tracking-tight{letter-spacing:-.025em}.text-gray-500{--tw-text-opacity: 1;color:rgb(107 114 128 / var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity: 1;color:rgb(17 24 39 / var(--tw-text-opacity))}.text-indigo-100{--tw-text-opacity: 1;color:rgb(224 231 255 / var(--tw-text-opacity))}.text-indigo-900{--tw-text-opacity: 1;color:rgb(49 46 129 / var(--tw-text-opacity))}.text-slate-500{--tw-text-opacity: 1;color:rgb(100 116 139 / var(--tw-text-opacity))}.text-slate-900{--tw-text-opacity: 1;color:rgb(15 23 42 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.underline{text-decoration-line:underline}.opacity-0{opacity:0}.opacity-100{opacity:1}.shadow-xl{--tw-shadow: 0 20px 
25px -5px rgb(0 0 0 / .1), 0 8px 10px -6px rgb(0 0 0 / .1);--tw-shadow-colored: 0 20px 25px -5px var(--tw-shadow-color), 0 8px 10px -6px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.shadow-black\/5{--tw-shadow-color: rgb(0 0 0 / .05);--tw-shadow: var(--tw-shadow-colored)}.ring-1{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.ring-slate-700\/10{--tw-ring-color: rgb(51 65 85 / .1)}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.duration-100{transition-duration:.1s}.duration-200{transition-duration:.2s}.duration-300{transition-duration:.3s}.ease-in{transition-timing-function:cubic-bezier(.4,0,1,1)}.ease-out{transition-timing-function:cubic-bezier(0,0,.2,1)}html,body,#root{height:100%}audio::-webkit-media-controls-panel{background-color:#fff}.container{width:41rem;max-width:95vw}.hover\:bg-blue-800:hover{--tw-bg-opacity: 1;background-color:rgb(30 64 175 / var(--tw-bg-opacity))}.hover\:bg-green-600:hover{--tw-bg-opacity: 1;background-color:rgb(22 163 74 / var(--tw-bg-opacity))}.hover\:bg-indigo-200:hover{--tw-bg-opacity: 1;background-color:rgb(199 210 254 / var(--tw-bg-opacity))}.hover\:bg-indigo-50:hover{--tw-bg-opacity: 1;background-color:rgb(238 242 255 / var(--tw-bg-opacity))}.hover\:bg-indigo-500:hover{--tw-bg-opacity: 1;background-color:rgb(99 102 241 / var(--tw-bg-opacity))}.hover\:bg-red-600:hover{--tw-bg-opacity: 1;background-color:rgb(220 38 38 / var(--tw-bg-opacity))}.hover\:text-indigo-600:hover{--tw-text-opacity: 1;color:rgb(79 70 229 / var(--tw-text-opacity))}.focus\:border-blue-500:focus{--tw-border-opacity: 1;border-color:rgb(59 130 246 / var(--tw-border-opacity))}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.focus\:ring-4:focus{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(4px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.focus\:ring-blue-300:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(147 197 253 / var(--tw-ring-opacity))}.focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}.focus\:ring-green-300:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(134 239 172 / var(--tw-ring-opacity))}.focus-visible\:ring-2:focus-visible{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.focus-visible\:ring-indigo-500:focus-visible{--tw-ring-opacity: 1;--tw-ring-color: rgb(99 102 241 / var(--tw-ring-opacity))}.focus-visible\:ring-offset-2:focus-visible{--tw-ring-offset-width: 2px}@media (prefers-color-scheme: dark){.dark\:border-gray-600{--tw-border-opacity: 
1;border-color:rgb(75 85 99 / var(--tw-border-opacity))}.dark\:bg-blue-600{--tw-bg-opacity: 1;background-color:rgb(37 99 235 / var(--tw-bg-opacity))}.dark\:bg-gray-700{--tw-bg-opacity: 1;background-color:rgb(55 65 81 / var(--tw-bg-opacity))}.dark\:bg-green-600{--tw-bg-opacity: 1;background-color:rgb(22 163 74 / var(--tw-bg-opacity))}.dark\:text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.dark\:placeholder-gray-400::-moz-placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:placeholder-gray-400::placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:hover\:bg-blue-700:hover{--tw-bg-opacity: 1;background-color:rgb(29 78 216 / var(--tw-bg-opacity))}.dark\:hover\:bg-green-700:hover{--tw-bg-opacity: 1;background-color:rgb(21 128 61 / var(--tw-bg-opacity))}.dark\:focus\:border-blue-500:focus{--tw-border-opacity: 1;border-color:rgb(59 130 246 / var(--tw-border-opacity))}.dark\:focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}.dark\:focus\:ring-blue-800:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(30 64 175 / var(--tw-ring-opacity))}.dark\:focus\:ring-green-800:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(22 101 52 / var(--tw-ring-opacity))}}@media (min-width: 640px){.sm\:text-2xl{font-size:1.5rem;line-height:2rem}.sm\:text-7xl{font-size:4.5rem;line-height:1}} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/__init__.py deleted file mode 100644 index 52e4b48d383a84a055dcd7f6236f6e8e58eab924..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base_module import BaseModule, ModuleList, Sequential -from .base_runner import BaseRunner -from .builder import RUNNERS, build_runner -from .checkpoint import (CheckpointLoader, _load_checkpoint, - _load_checkpoint_with_prefix, load_checkpoint, - load_state_dict, save_checkpoint, weights_to_cpu) -from .default_constructor import DefaultRunnerConstructor -from .dist_utils import (allreduce_grads, allreduce_params, get_dist_info, - init_dist, master_only) -from .epoch_based_runner import EpochBasedRunner, Runner -from .fp16_utils import LossScaler, auto_fp16, force_fp32, wrap_fp16_model -from .hooks import (HOOKS, CheckpointHook, ClosureHook, DistEvalHook, - DistSamplerSeedHook, DvcliveLoggerHook, EMAHook, EvalHook, - Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, Hook, IterTimerHook, - LoggerHook, LrUpdaterHook, MlflowLoggerHook, - NeptuneLoggerHook, OptimizerHook, PaviLoggerHook, - SyncBuffersHook, TensorboardLoggerHook, TextLoggerHook, - WandbLoggerHook) -from .iter_based_runner import IterBasedRunner, IterLoader -from .log_buffer import LogBuffer -from .optimizer import (OPTIMIZER_BUILDERS, OPTIMIZERS, - DefaultOptimizerConstructor, build_optimizer, - build_optimizer_constructor) -from .priority import Priority, get_priority -from .utils import get_host_info, get_time_str, obj_from_dict, set_random_seed - -__all__ = [ - 'BaseRunner', 'Runner', 'EpochBasedRunner', 'IterBasedRunner', 'LogBuffer', - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'LoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'MlflowLoggerHook', - 'DvcliveLoggerHook', '_load_checkpoint', 'load_state_dict', - 'load_checkpoint', 'weights_to_cpu', 'save_checkpoint', 'Priority', - 'get_priority', 'get_host_info', 'get_time_str', 'obj_from_dict', - 'init_dist', 'get_dist_info', 'master_only', 'OPTIMIZER_BUILDERS', - 'OPTIMIZERS', 'DefaultOptimizerConstructor', 'build_optimizer', - 'build_optimizer_constructor', 'IterLoader', 'set_random_seed', - 'auto_fp16', 'force_fp32', 'wrap_fp16_model', 'Fp16OptimizerHook', - 'SyncBuffersHook', 'EMAHook', 'build_runner', 'RUNNERS', 'allreduce_grads', - 'allreduce_params', 'LossScaler', 'CheckpointLoader', 'BaseModule', - '_load_checkpoint_with_prefix', 'EvalHook', 'DistEvalHook', 'Sequential', - 'ModuleList', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook', 'DefaultRunnerConstructor' -] diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/ops.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/ops.py deleted file mode 100644 index e6e76d9a0ebbcd315ea0eaafe073a3ef2ac120f9..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/ops.py +++ /dev/null @@ -1,71 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def conv7x7(in_planes, out_planes, stride=1, groups=1, dilation=1): - """7x7 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=7, stride=stride, - padding=3*dilation, groups=groups, bias=False, dilation=dilation) - - -def conv5x5(in_planes, out_planes, stride=1, groups=1, dilation=1): - """5x5 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=5, stride=stride, - padding=2*dilation, groups=groups, 
bias=False, dilation=dilation) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -def maxpool(**kwargs): - return nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - -def avgpool(**kwargs): - return nn.AvgPool2d(kernel_size=3, stride=2, padding=1) - -def dropout(prob): - return nn.Dropout(prob) - - -conv3x3sep = lambda i, o, s=1: conv3x3(i, o, s, groups=i) -conv3x3g2 = lambda i, o, s=1: conv3x3(i, o, s, groups=2) -conv3x3g4 = lambda i, o, s=1: conv3x3(i, o, s, groups=4) -conv3x3g8 = lambda i, o, s=1: conv3x3(i, o, s, groups=8) -conv3x3dw = lambda i, o, s=1: conv3x3(i, o, s, groups=i) - -conv3x3d2 = lambda i, o, s=1: conv3x3(i, o, s, dilation=2) -conv3x3d3 = lambda i, o, s=1: conv3x3(i, o, s, dilation=3) -conv3x3d4 = lambda i, o, s=1: conv3x3(i, o, s, dilation=4) - - -conv5x5sep = lambda i, o, s=1: conv5x5(i, o, s, groups=i) -conv5x5g2 = lambda i, o, s=1: conv5x5(i, o, s, groups=2) -conv5x5g4 = lambda i, o, s=1: conv5x5(i, o, s, groups=4) -conv5x5g8 = lambda i, o, s=1: conv5x5(i, o, s, groups=8) -conv5x5dw = lambda i, o, s=1: conv5x5(i, o, s, groups=i) - - -conv5x5d2 = lambda i, o, s=1: conv5x5(i, o, s, dilation=2) -conv5x5d3 = lambda i, o, s=1: conv5x5(i, o, s, dilation=3) -conv5x5d4 = lambda i, o, s=1: conv5x5(i, o, s, dilation=4) - -conv7x7sep = lambda i, o, s=1: conv7x7(i, o, s, groups=i) -conv7x7g2 = lambda i, o, s=1: conv7x7(i, o, s, groups=2) -conv7x7g4 = lambda i, o, s=1: conv7x7(i, o, s, groups=4) -conv7x7g8 = lambda i, o, s=1: conv7x7(i, o, s, groups=8) -conv7x7dw = lambda i, o, s=1: conv7x7(i, o, s, groups=i) - -conv7x7d2 = lambda i, o, s=1: conv7x7(i, o, s, dilation=2) -conv7x7d3 = lambda i, o, s=1: conv7x7(i, o, s, dilation=3) -conv7x7d4 = lambda i, o, s=1: conv7x7(i, o, s, dilation=4) \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/nlvr_encoder.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/nlvr_encoder.py deleted file mode 100644 index 1946bb4a300f75afa4848f6622839445903c34a9..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/nlvr_encoder.py +++ /dev/null @@ -1,843 +0,0 @@ -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple - -import torch -from torch import Tensor, device, dtype, nn -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss -import torch.nn.functional as F - -from transformers.activations import ACT2FN -from transformers.file_utils import ( - ModelOutput, -) -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers.models.bert.configuration_bert import BertConfig - - -logger = logging.get_logger(__name__) - - -class 
BertEmbeddings(nn.Module): - """Construct the embeddings from word and position embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - - self.config = config - - def forward( - self, input_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - - embeddings = inputs_embeds - - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, is_cross_attention): - super().__init__() - self.config = config - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (config.hidden_size, config.num_attention_heads) - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - if is_cross_attention: - self.key = nn.Linear(config.encoder_width, self.all_head_size) - self.value = nn.Linear(config.encoder_width, self.all_head_size) - else: - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - self.save_attention = False - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = 
x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. - is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - if is_cross_attention and self.save_attention: - self.save_attention_map(attention_probs) - attention_probs.register_hook(self.save_attn_gradients) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs_dropped = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - - context_layer = torch.matmul(attention_probs_dropped, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config, twin=False, merge=False): - super().__init__() - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - if twin: - self.dense0 = nn.Linear(config.hidden_size, config.hidden_size) - self.dense1 = nn.Linear(config.hidden_size, config.hidden_size) - else: - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if merge: - self.act = ACT2FN[config.hidden_act] - self.merge_layer = nn.Linear(config.hidden_size * 2, config.hidden_size) - self.merge = True - else: - self.merge = False - - def forward(self, hidden_states, input_tensor): - if type(hidden_states) == list: - hidden_states0 = self.dense0(hidden_states[0]) - hidden_states1 = self.dense1(hidden_states[1]) - if self.merge: - #hidden_states = self.merge_layer(self.act(torch.cat([hidden_states0,hidden_states1],dim=-1))) - hidden_states = self.merge_layer(torch.cat([hidden_states0,hidden_states1],dim=-1)) - else: - hidden_states = (hidden_states0+hidden_states1)/2 - else: - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config, is_cross_attention=False, layer_num=-1): - super().__init__() - if is_cross_attention: - self.self0 = BertSelfAttention(config, is_cross_attention) - self.self1 = BertSelfAttention(config, is_cross_attention) - else: - self.self = BertSelfAttention(config, is_cross_attention) - self.output = BertSelfOutput(config, twin=is_cross_attention, merge=(is_cross_attention and layer_num>=6)) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - if type(encoder_hidden_states)==list: - self_outputs0 = self.self0( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states[0], - encoder_attention_mask[0], - past_key_value, - output_attentions, - ) - self_outputs1 = 
self.self1( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states[1], - encoder_attention_mask[1], - past_key_value, - output_attentions, - ) - attention_output = self.output([self_outputs0[0],self_outputs1[0]], hidden_states) - - outputs = (attention_output,) + self_outputs0[1:] # add attentions if we output them - else: - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertLayer(nn.Module): - def __init__(self, config, layer_num): - super().__init__() - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.layer_num = layer_num - if self.config.add_cross_attention: - self.crossattention = BertAttention(config, is_cross_attention=self.config.add_cross_attention, layer_num=layer_num) - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - mode=None, - ): - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - - if mode=='multimodal': - assert encoder_hidden_states is not None, "encoder_hidden_states must be given for cross-attention layers" - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions=output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - ) - outputs = (layer_output,) + outputs 
- - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([BertLayer(config,i) for i in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - mode='multimodal', - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - - for i in range(self.config.num_hidden_layers): - layer_module = self.layer[i] - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warn( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - mode=mode, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - mode=mode, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output): - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BertConfig - base_model_prefix = "bert" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """ Initialize the weights """ - if isinstance(module, (nn.Linear, nn.Embedding)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - -class BertModel(BertPreTrainedModel): - """ - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in `Attention is - all you need `__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an - input to the forward pass. 
- """ - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - self.init_weights() - - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - - def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple[int], device: device, is_decoder: bool) -> Tensor: - """ - Makes broadcastable attention and causal masks so that future and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - if attention_mask.dim() == 3: - extended_attention_mask = attention_mask[:, None, :, :] - elif attention_mask.dim() == 2: - # Provided a padding mask of dimensions [batch_size, seq_length] - # - if the model is a decoder, apply a causal mask in addition to the padding mask - # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] - if is_decoder: - batch_size, seq_length = input_shape - - seq_ids = torch.arange(seq_length, device=device) - causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None] - # in case past_key_values are used we need to add a prefix ones mask to the causal mask - # causal and attention masks must have same type with pytorch version < 1.3 - causal_mask = causal_mask.to(attention_mask.dtype) - - if causal_mask.shape[1] < attention_mask.shape[1]: - prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1] - causal_mask = torch.cat( - [ - torch.ones((batch_size, seq_length, prefix_seq_len), device=device, dtype=causal_mask.dtype), - causal_mask, - ], - axis=-1, - ) - - extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - else: - extended_attention_mask = attention_mask[:, None, None, :] - else: - raise ValueError( - "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( - input_shape, attention_mask.shape - ) - ) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - return extended_attention_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - is_decoder=False, - mode='multimodal', - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - batch_size, seq_length = input_shape - device = input_ids.device - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape - device = inputs_embeds.device - elif encoder_embeds is not None: - input_shape = encoder_embeds.size()[:-1] - batch_size, seq_length = input_shape - device = encoder_embeds.device - else: - raise ValueError("You have to specify either input_ids or inputs_embeds or encoder_embeds") - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, - device, is_decoder) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if encoder_hidden_states is not None: - if type(encoder_hidden_states) == list: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size() - else: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - - if type(encoder_attention_mask) == list: - encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask] - elif encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - if encoder_embeds is None: - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - else: - embedding_output = encoder_embeds - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - 
past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - mode=mode, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/transforms.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - 
unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = 
theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Podtekatel/Avatar2VSK/inference/__init__.py b/spaces/Podtekatel/Avatar2VSK/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/modules/attention.py b/spaces/Purple11/Grounded-Diffusion/ldm/modules/attention.py deleted file mode 100644 index 590cf034a434dc9c4a95f621ea4bc8a4d6225926..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/ldm/modules/attention.py +++ /dev/null @@ -1,267 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from collections import defaultdict -from ldm.modules.diffusionmodules.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - - def forward(self, x, context=None, mask=None,class_token_index=[]): - - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - if exists(mask): - mask = rearrange(mask, 'b ... 
-> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - attn = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', attn, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True): - super().__init__() - self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None,class_token_index=[]): - return checkpoint(self._forward, (x, context,class_token_index), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None,class_token_index=[]): - - x = self.attn1(self.norm1(x),class_token_index=[]) + x - - x1 = self.attn2(self.norm2(x), context=context,class_token_index=class_token_index) - x=x+x1 - x = self.ff(self.norm3(x)) + x - - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth)] - ) - - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x, context=None,class_token_index=[]): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c') - for block in self.transformer_blocks: - x = block(x, context=context,class_token_index=class_token_index) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w) - x = self.proj_out(x) - - return x + x_in \ No newline at end of file diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/make_scene_samples.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/make_scene_samples.py deleted file mode 100644 index c096b98460874be0acbe5b85464593fbad4bedd0..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/make_scene_samples.py +++ /dev/null @@ -1,198 +0,0 @@ -import glob -import os -import sys -from itertools import product -from pathlib import Path -from typing import Literal, List, Optional, Tuple - -import numpy as np -import torch -from omegaconf import OmegaConf -from pytorch_lightning import seed_everything -from torch import Tensor -from torchvision.utils import save_image -from tqdm import tqdm - -from scripts.make_samples import get_parser, load_model_and_dset -from taming.data.conditional_builder.objects_center_points import 
ObjectsCenterPointsConditionalBuilder -from taming.data.helper_types import BoundingBox, Annotation -from taming.data.annotated_objects_dataset import AnnotatedObjectsDataset -from taming.models.cond_transformer import Net2NetTransformer - -seed_everything(42424242) -device: Literal['cuda', 'cpu'] = 'cuda' -first_stage_factor = 16 -trained_on_res = 256 - - -def _helper(coord: int, coord_max: int, coord_window: int) -> (int, int): - assert 0 <= coord < coord_max - coord_desired_center = (coord_window - 1) // 2 - return np.clip(coord - coord_desired_center, 0, coord_max - coord_window) - - -def get_crop_coordinates(x: int, y: int) -> BoundingBox: - WIDTH, HEIGHT = desired_z_shape[1], desired_z_shape[0] - x0 = _helper(x, WIDTH, first_stage_factor) / WIDTH - y0 = _helper(y, HEIGHT, first_stage_factor) / HEIGHT - w = first_stage_factor / WIDTH - h = first_stage_factor / HEIGHT - return x0, y0, w, h - - -def get_z_indices_crop_out(z_indices: Tensor, predict_x: int, predict_y: int) -> Tensor: - WIDTH, HEIGHT = desired_z_shape[1], desired_z_shape[0] - x0 = _helper(predict_x, WIDTH, first_stage_factor) - y0 = _helper(predict_y, HEIGHT, first_stage_factor) - no_images = z_indices.shape[0] - cut_out_1 = z_indices[:, y0:predict_y, x0:x0+first_stage_factor].reshape((no_images, -1)) - cut_out_2 = z_indices[:, predict_y, x0:predict_x] - return torch.cat((cut_out_1, cut_out_2), dim=1) - - -@torch.no_grad() -def sample(model: Net2NetTransformer, annotations: List[Annotation], dataset: AnnotatedObjectsDataset, - conditional_builder: ObjectsCenterPointsConditionalBuilder, no_samples: int, - temperature: float, top_k: int) -> Tensor: - x_max, y_max = desired_z_shape[1], desired_z_shape[0] - - annotations = [a._replace(category_no=dataset.get_category_number(a.category_id)) for a in annotations] - - recompute_conditional = any((desired_resolution[0] > trained_on_res, desired_resolution[1] > trained_on_res)) - if not recompute_conditional: - crop_coordinates = get_crop_coordinates(0, 0) - conditional_indices = conditional_builder.build(annotations, crop_coordinates) - c_indices = conditional_indices.to(device).repeat(no_samples, 1) - z_indices = torch.zeros((no_samples, 0), device=device).long() - output_indices = model.sample(z_indices, c_indices, steps=x_max*y_max, temperature=temperature, - sample=True, top_k=top_k) - else: - output_indices = torch.zeros((no_samples, y_max, x_max), device=device).long() - for predict_y, predict_x in tqdm(product(range(y_max), range(x_max)), desc='sampling_image', total=x_max*y_max): - crop_coordinates = get_crop_coordinates(predict_x, predict_y) - z_indices = get_z_indices_crop_out(output_indices, predict_x, predict_y) - conditional_indices = conditional_builder.build(annotations, crop_coordinates) - c_indices = conditional_indices.to(device).repeat(no_samples, 1) - new_index = model.sample(z_indices, c_indices, steps=1, temperature=temperature, sample=True, top_k=top_k) - output_indices[:, predict_y, predict_x] = new_index[:, -1] - z_shape = ( - no_samples, - model.first_stage_model.quantize.e_dim, # codebook embed_dim - desired_z_shape[0], # z_height - desired_z_shape[1] # z_width - ) - x_sample = model.decode_to_img(output_indices, z_shape) * 0.5 + 0.5 - x_sample = x_sample.to('cpu') - - plotter = conditional_builder.plot - figure_size = (x_sample.shape[2], x_sample.shape[3]) - scene_graph = conditional_builder.build(annotations, (0., 0., 1., 1.)) - plot = plotter(scene_graph, dataset.get_textual_label_for_category_no, figure_size) - return torch.cat((x_sample, 
plot.unsqueeze(0))) - - -def get_resolution(resolution_str: str) -> (Tuple[int, int], Tuple[int, int]): - if not resolution_str.count(',') == 1: - raise ValueError("Give resolution as in 'height,width'") - res_h, res_w = resolution_str.split(',') - res_h = max(int(res_h), trained_on_res) - res_w = max(int(res_w), trained_on_res) - z_h = int(round(res_h/first_stage_factor)) - z_w = int(round(res_w/first_stage_factor)) - return (z_h, z_w), (z_h*first_stage_factor, z_w*first_stage_factor) - - -def add_arg_to_parser(parser): - parser.add_argument( - "-R", - "--resolution", - type=str, - default='256,256', - help=f"give resolution in multiples of {first_stage_factor}, default is '256,256'", - ) - parser.add_argument( - "-C", - "--conditional", - type=str, - default='objects_bbox', - help=f"objects_bbox or objects_center_points", - ) - parser.add_argument( - "-N", - "--n_samples_per_layout", - type=int, - default=4, - help=f"how many samples to generate per layout", - ) - return parser - - -if __name__ == "__main__": - sys.path.append(os.getcwd()) - - parser = get_parser() - parser = add_arg_to_parser(parser) - - opt, unknown = parser.parse_known_args() - - ckpt = None - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - try: - idx = len(paths)-paths[::-1].index("logs")+1 - except ValueError: - idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - print(f"logdir:{logdir}") - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml"))) - opt.base = base_configs+opt.base - - if opt.config: - if type(opt.config) == str: - opt.base = [opt.config] - else: - opt.base = [opt.base[-1]] - - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - if opt.ignore_base_data: - for config in configs: - if hasattr(config, "data"): - del config["data"] - config = OmegaConf.merge(*configs, cli) - desired_z_shape, desired_resolution = get_resolution(opt.resolution) - conditional = opt.conditional - - print(ckpt) - gpu = True - eval_mode = True - show_config = False - if show_config: - print(OmegaConf.to_container(config)) - - dsets, model, global_step = load_model_and_dset(config, ckpt, gpu, eval_mode) - print(f"Global step: {global_step}") - - data_loader = dsets.val_dataloader() - print(dsets.datasets["validation"].conditional_builders) - conditional_builder = dsets.datasets["validation"].conditional_builders[conditional] - - outdir = Path(opt.outdir).joinpath(f"{global_step:06}_{opt.top_k}_{opt.temperature}") - outdir.mkdir(exist_ok=True, parents=True) - print("Writing samples to ", outdir) - - p_bar_1 = tqdm(enumerate(iter(data_loader)), desc='batch', total=len(data_loader)) - for batch_no, batch in p_bar_1: - save_img: Optional[Tensor] = None - for i, annotations in tqdm(enumerate(batch['annotations']), desc='within_batch', total=data_loader.batch_size): - imgs = sample(model, annotations, dsets.datasets["validation"], conditional_builder, - opt.n_samples_per_layout, opt.temperature, opt.top_k) - save_image(imgs, outdir.joinpath(f'{batch_no:04}_{i:02}.png'), n_row=opt.n_samples_per_layout+1) diff --git a/spaces/QINGCHE/TSA/abstract.py b/spaces/QINGCHE/TSA/abstract.py deleted file mode 100644 index 
090e72d07f94caad2b1d3451c1cc4d962dca4c5d..0000000000000000000000000000000000000000 --- a/spaces/QINGCHE/TSA/abstract.py +++ /dev/null @@ -1,71 +0,0 @@ -# 导入所需的库 -import json -import paddlenlp -import gensim -import sklearn -from collections import Counter -from gensim import corpora, models, similarities -import numpy as np -import matplotlib.pyplot as plt - - - - - -def build_corpus(sentences): - # 使用paddlenlp提供的预训练词典 - vocab = paddlenlp.transformers.BertTokenizer.from_pretrained('bert-base-chinese').vocab - - # 创建分词器 - tokenizer = paddlenlp.data.JiebaTokenizer(vocab) - # 对每个句子进行分词,并去除停用词,得到一个二维列表 - stopwords = [""] - words_list = [] - for sentence in sentences: - words = [word for word in tokenizer.cut(sentence) if word not in stopwords] - words_list.append(words) - # print(words_list) - # 将二维列表转换为一维列表 - words = [word for sentence in words_list for word in sentence] - - dictionary = corpora.Dictionary(words_list) - corpus = [dictionary.doc2bow(text) for text in words_list] - - return corpus,dictionary,words_list - -def lda(words_list,sentences,corpus,dictionary,num): - lda = gensim.models.ldamodel.LdaModel(corpus=corpus,id2word=dictionary, num_topics=num) - - topics = lda.print_topics(num_topics=num, num_words=10) - - # 根据关键词的匹配度,选择最能代表每个主题的句子,作为中心句 - - central_sentences = [] - for topic in topics: - topic_id, topic_words = topic - topic_words = [word.split("*")[1].strip('"') for word in topic_words.split("+")] - max_score = 0 - candidates = [] # 存储候选中心句 - for sentence, words in zip(sentences, words_list): - score = 0 - for word in words: - if word in topic_words: - score += 1 - if score > max_score: - max_score = score - candidates = [sentence] # 如果找到更高的匹配度,更新候选列表 - elif score == max_score: - candidates.append(sentence) # 如果匹配度相同,添加到候选列表 - for candidate in candidates: # 遍历候选列表 - if candidate not in central_sentences: # 检查是否已经存在相同的句子 - central_sentence = candidate # 如果不存在,选择为中心句 - central_sentences.append(central_sentence) - break # 跳出循环 - - return central_sentences - - -def abstruct_main(sentences,num): - corpus,dictionary,words_list = build_corpus(sentences) - central_sentences= lda(words_list, sentences, corpus, dictionary,num) - return central_sentences diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/fine_preprocess.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/fine_preprocess.py deleted file mode 100644 index 6c37f76c3d5735508f950bb1239f5e93039b27ff..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/ASpanFormer/aspan_module/fine_preprocess.py +++ /dev/null @@ -1,75 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops.einops import rearrange, repeat - - -class FinePreprocess(nn.Module): - def __init__(self, config): - super().__init__() - - self.config = config - self.cat_c_feat = config["fine_concat_coarse_feat"] - self.W = self.config["fine_window_size"] - - d_model_c = self.config["coarse"]["d_model"] - d_model_f = self.config["fine"]["d_model"] - self.d_model_f = d_model_f - if self.cat_c_feat: - self.down_proj = nn.Linear(d_model_c, d_model_f, bias=True) - self.merge_feat = nn.Linear(2 * d_model_f, d_model_f, bias=True) - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.kaiming_normal_(p, mode="fan_out", nonlinearity="relu") - - def forward(self, feat_f0, feat_f1, feat_c0, feat_c1, data): - W = self.W - stride = 
data["hw0_f"][0] // data["hw0_c"][0] - - data.update({"W": W}) - if data["b_ids"].shape[0] == 0: - feat0 = torch.empty(0, self.W**2, self.d_model_f, device=feat_f0.device) - feat1 = torch.empty(0, self.W**2, self.d_model_f, device=feat_f0.device) - return feat0, feat1 - - # 1. unfold(crop) all local windows - feat_f0_unfold = F.unfold( - feat_f0, kernel_size=(W, W), stride=stride, padding=W // 2 - ) - feat_f0_unfold = rearrange(feat_f0_unfold, "n (c ww) l -> n l ww c", ww=W**2) - feat_f1_unfold = F.unfold( - feat_f1, kernel_size=(W, W), stride=stride, padding=W // 2 - ) - feat_f1_unfold = rearrange(feat_f1_unfold, "n (c ww) l -> n l ww c", ww=W**2) - - # 2. select only the predicted matches - feat_f0_unfold = feat_f0_unfold[data["b_ids"], data["i_ids"]] # [n, ww, cf] - feat_f1_unfold = feat_f1_unfold[data["b_ids"], data["j_ids"]] - - # option: use coarse-level loftr feature as context: concat and linear - if self.cat_c_feat: - feat_c_win = self.down_proj( - torch.cat( - [ - feat_c0[data["b_ids"], data["i_ids"]], - feat_c1[data["b_ids"], data["j_ids"]], - ], - 0, - ) - ) # [2n, c] - feat_cf_win = self.merge_feat( - torch.cat( - [ - torch.cat([feat_f0_unfold, feat_f1_unfold], 0), # [2n, ww, cf] - repeat(feat_c_win, "n c -> n ww c", ww=W**2), # [2n, ww, cf] - ], - -1, - ) - ) - feat_f0_unfold, feat_f1_unfold = torch.chunk(feat_cf_win, 2, dim=0) - - return feat_f0_unfold, feat_f1_unfold diff --git a/spaces/Ritori/TTS_Yui/README.md b/spaces/Ritori/TTS_Yui/README.md deleted file mode 100644 index a11d6b7de0d855901571ff2e4b4ba3bc13e0fa2f..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/README.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: TTS_Yui -app_file: Yue_gradio_cpu.py -sdk: gradio -sdk_version: 3.36.1 ---- -# Tacotron2-Japanese -- Tacotron2 implementation of Japanese -## Links -* Reference: [NVIDIA/tacotron2](https://github.com/NVIDIA/tacotron2) -* [Pre-training tacotron2 models](https://github.com/CjangCjengh/TTSModels) -* [latest changes can be viewed in this repository](https://github.com/StarxSky/tacotron2-JP) - -## How to use -1. Put raw Japanese texts in ./filelists -2. Put WAV files in ./wav -3. (Optional) Download NVIDIA's [pretrained model](https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view?usp=sharing) -4. Open ./train.ipynb to install requirements and start training -5. Download NVIDIA's [WaveGlow model](https://drive.google.com/open?id=1rpK8CzAAirq9sWZhe9nlfvxMF1dRgFbF) -6. Open ./inference.ipynb to generate voice - -## Cleaners -File ./hparams.py line 30 -### 1. 'japanese_cleaners' -#### Before -何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも -#### After -nanikaacltaraitsudemohanashItekudasai.gakuiNnokotojanaku,shijinikaNsurukotodemonanidemo. -### 2. 'japanese_tokenization_cleaners' -#### Before -何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも -#### After -nani ka acl tara itsu demo hanashi te kudasai. gakuiN no koto ja naku, shiji nikaNsuru koto de mo naNdemo. -### 3. 'japanese_accent_cleaners' -#### Before -何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも -#### After -:na)nika a)cltara i)tsudemo ha(na)shIte ku(dasa)i.:ga(kuiNno ko(to)janaku,:shi)jini ka(Nsu)ru ko(to)demo na)nidemo. -### 4. 'japanese_phrase_cleaners' -#### Before -何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも -#### After -nanika acltara itsudemo hanashIte kudasai. gakuiNno kotojanaku, shijini kaNsuru kotodemo nanidemo. 
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/fileio/handlers/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/fileio/handlers/__init__.py deleted file mode 100644 index aa24d91972837b8756b225f4879bac20436eb72a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/fileio/handlers/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .base import BaseFileHandler -from .json_handler import JsonHandler -from .pickle_handler import PickleHandler -from .yaml_handler import YamlHandler - -__all__ = ['BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler'] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py deleted file mode 100644 index 78a092a431aa884ab7dfd08346f79a4ccf8303bf..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py +++ /dev/null @@ -1,348 +0,0 @@ -import logging -import sys - -import torch - -from mmdet.core import (bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) - -logger = logging.getLogger(__name__) - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False, - bbox_semaphore=None, - global_lock=None): - """Asynchronized test for box head without augmentation.""" - rois = bbox2roi(proposals) - roi_feats = self.bbox_roi_extractor( - x[:len(self.bbox_roi_extractor.featmap_strides)], rois) - if self.with_shared_head: - roi_feats = self.shared_head(roi_feats) - sleep_interval = rcnn_test_cfg.get('async_sleep_interval', 0.017) - - async with completed( - __name__, 'bbox_head_forward', - sleep_interval=sleep_interval): - cls_score, bbox_pred = self.bbox_head(roi_feats) - - img_shape = img_metas[0]['img_shape'] - scale_factor = img_metas[0]['scale_factor'] - det_bboxes, det_labels = self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=rescale, - cfg=rcnn_test_cfg) - return det_bboxes, det_labels - - def simple_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False): - """Test only det bboxes without augmentation. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (Tensor or List[Tensor]): Region proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - tuple[list[Tensor], list[Tensor]]: The first list contains - the boxes of the corresponding image in a batch, each - tensor has the shape (num_boxes, 5) and last dimension - 5 represent (tl_x, tl_y, br_x, br_y, score). Each Tensor - in the second list is the labels with shape (num_boxes, ). - The length of both lists should be equal to batch_size. 
- """ - # get origin input shape to support onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. - if isinstance(proposals, list): - # padding to form a batch - max_size = max([proposal.size(0) for proposal in proposals]) - for i, proposal in enumerate(proposals): - supplement = proposal.new_full( - (max_size - proposal.size(0), proposal.size(1)), 0) - proposals[i] = torch.cat((supplement, proposal), dim=0) - rois = torch.stack(proposals, dim=0) - else: - rois = proposals - - batch_index = torch.arange( - rois.size(0), device=rois.device).float().view(-1, 1, 1).expand( - rois.size(0), rois.size(1), 1) - rois = torch.cat([batch_index, rois[..., :4]], dim=-1) - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - - # Eliminate the batch dimension - rois = rois.view(-1, 5) - bbox_results = self._bbox_forward(x, rois) - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, -1) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, -1) - - if not torch.onnx.is_in_onnx_export(): - # remove padding - supplement_mask = rois[..., -1] == 0 - cls_score[supplement_mask, :] = 0 - - # bbox_pred would be None in some detector when with_reg is False, - # e.g. Grid R-CNN. - if bbox_pred is not None: - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.reshape(batch_size, - num_proposals_per_img, -1) - if not torch.onnx.is_in_onnx_export(): - bbox_pred[supplement_mask, :] = 0 - else: - # TODO: Looking forward to a better way - # For SABL - bbox_preds = self.bbox_head.bbox_pred_split( - bbox_pred, num_proposals_per_img) - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(len(proposals)): - # remove padding - supplement_mask = proposals[i][..., -1] == 0 - for bbox in bbox_preds[i]: - bbox[supplement_mask] = 0 - det_bbox, det_label = self.bbox_head.get_bboxes( - rois[i], - cls_score[i], - bbox_preds[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - return det_bboxes, det_labels - else: - bbox_pred = None - - return self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shapes, - scale_factors, - rescale=rescale, - cfg=rcnn_test_cfg) - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - # TODO more flexible - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - 
bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels - - -class MaskTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False, - mask_test_cfg=None): - """Asynchronized test for mask head without augmentation.""" - # image shape of the first image in the batch (only one) - ori_shape = img_metas[0]['ori_shape'] - scale_factor = img_metas[0]['scale_factor'] - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - if rescale and not isinstance(scale_factor, - (float, torch.Tensor)): - scale_factor = det_bboxes.new_tensor(scale_factor) - _bboxes = ( - det_bboxes[:, :4] * - scale_factor if rescale else det_bboxes) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor( - x[:len(self.mask_roi_extractor.featmap_strides)], - mask_rois) - - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - if mask_test_cfg and mask_test_cfg.get('async_sleep_interval'): - sleep_interval = mask_test_cfg['async_sleep_interval'] - else: - sleep_interval = 0.035 - async with completed( - __name__, - 'mask_head_forward', - sleep_interval=sleep_interval): - mask_pred = self.mask_head(mask_feats) - segm_result = self.mask_head.get_seg_masks( - mask_pred, _bboxes, det_labels, self.test_cfg, ori_shape, - scale_factor, rescale) - return segm_result - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. - if isinstance(det_bboxes, list): - # padding to form a batch - max_size = max([bboxes.size(0) for bboxes in det_bboxes]) - for i, (bbox, label) in enumerate(zip(det_bboxes, det_labels)): - supplement_bbox = bbox.new_full( - (max_size - bbox.size(0), bbox.size(1)), 0) - supplement_label = label.new_full((max_size - label.size(0), ), - 0) - det_bboxes[i] = torch.cat((supplement_bbox, bbox), dim=0) - det_labels[i] = torch.cat((supplement_label, label), dim=0) - det_bboxes = torch.stack(det_bboxes, dim=0) - det_labels = torch.stack(det_labels, dim=0) - - batch_size = det_bboxes.size(0) - num_proposals_per_img = det_bboxes.shape[1] - - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
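# note: in mmdet, scale_factor from img_metas is typically a length-4 array
# (w_scale, h_scale, w_scale, h_scale), so the single broadcast multiply below
# rescales all four box coordinates (x1, y1, x2, y2) from original-image space
# back to the network-input scale expected by the mask RoI extractor.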
- det_bboxes = det_bboxes[..., :4] - if rescale: - if not isinstance(scale_factors[0], float): - scale_factors = det_bboxes.new_tensor(scale_factors) - det_bboxes = det_bboxes * scale_factors.unsqueeze(1) - - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - - # Recover the batch dimension - mask_preds = mask_pred.reshape(batch_size, num_proposals_per_img, - *mask_pred.shape[1:]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(batch_size): - mask_pred = mask_preds[i] - det_bbox = det_bboxes[i] - det_label = det_labels[i] - - # remove padding - supplement_mask = det_bbox[..., -1] != 0 - mask_pred = mask_pred[supplement_mask] - det_bbox = det_bbox[supplement_mask] - det_label = det_label[supplement_mask] - - if det_label.shape[0] == 0: - segm_results.append([[] - for _ in range(self.mask_head.num_classes) - ]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_pred, det_bbox, det_label, self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/fused_bias_leakyrelu.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/fused_bias_leakyrelu.py deleted file mode 100644 index 6d12508469c6c8fa1884debece44c58d158cb6fa..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/fused_bias_leakyrelu.py +++ /dev/null @@ -1,268 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. 
- -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. 
- -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -import torch.nn.functional as F -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['fused_bias_leakyrelu']) - - -class FusedBiasLeakyReLUFunctionBackward(Function): - """Calculate second order deviation. - - This function is to compute the second order deviation for the fused leaky - relu operation. - """ - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = ext_module.fused_bias_leakyrelu( - grad_output, - empty, - out, - act=3, - grad=1, - alpha=negative_slope, - scale=scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - - # The second order deviation, in fact, contains two parts, while the - # the first part is zero. Thus, we direct consider the second part - # which is similar with the first order deviation in implementation. - gradgrad_out = ext_module.fused_bias_leakyrelu( - gradgrad_input, - gradgrad_bias.to(out.dtype), - out, - act=3, - grad=1, - alpha=ctx.negative_slope, - scale=ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedBiasLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - out = ext_module.fused_bias_leakyrelu( - input, - bias, - empty, - act=3, - grad=0, - alpha=negative_slope, - scale=scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedBiasLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedBiasLeakyReLU(nn.Module): - """Fused bias leaky ReLU. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - TODO: Implement the CPU version. - - Args: - channel (int): The channel number of the feature map. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. 
- scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - """ - - def __init__(self, num_channels, negative_slope=0.2, scale=2**0.5): - super(FusedBiasLeakyReLU, self).__init__() - - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_bias_leakyrelu(input, self.bias, self.negative_slope, - self.scale) - - -def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5): - """Fused bias leaky ReLU function. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - Args: - input (torch.Tensor): Input feature map. - bias (nn.Parameter): The bias from convolution operation. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - - Returns: - torch.Tensor: Feature map after non-linear activation. - """ - - if not input.is_cuda: - return bias_leakyrelu_ref(input, bias, negative_slope, scale) - - return FusedBiasLeakyReLUFunction.apply(input, bias.to(input.dtype), - negative_slope, scale) - - -def bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2**0.5): - - if bias is not None: - assert bias.ndim == 1 - assert bias.shape[0] == x.shape[1] - x = x + bias.reshape([-1 if i == 1 else 1 for i in range(x.ndim)]) - - x = F.leaky_relu(x, negative_slope) - if scale != 1: - x = x * scale - - return x diff --git a/spaces/SIGGRAPH2022/DCT-Net/source/utils.py b/spaces/SIGGRAPH2022/DCT-Net/source/utils.py deleted file mode 100644 index 45c31a3f642101b81ae1dae4ddc0a889e67349c2..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/DCT-Net/source/utils.py +++ /dev/null @@ -1,107 +0,0 @@ -import os - -import cv2 -import numpy as np - - -def resize_size(image, size=720): - h, w, c = np.shape(image) - if min(h, w) > size: - if h > w: - h, w = int(size * h / w), size - else: - h, w = size, int(size * w / h) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - - -def padTo16x(image): - h, w, c = np.shape(image) - if h % 16 == 0 and w % 16 == 0: - return image, h, w - nh, nw = (h // 16 + 1) * 16, (w // 16 + 1) * 16 - img_new = np.ones((nh, nw, 3), np.uint8) * 255 - img_new[:h, :w, :] = image - - return img_new, h, w - - -def get_f5p(landmarks, np_img): - eye_left = find_pupil(landmarks[36:41], np_img) - eye_right = find_pupil(landmarks[42:47], np_img) - if eye_left is None or eye_right is None: - print('cannot find 5 points with find_puil, used mean instead.!') - eye_left = landmarks[36:41].mean(axis=0) - eye_right = landmarks[42:47].mean(axis=0) - nose = landmarks[30] - mouth_left = landmarks[48] - mouth_right = landmarks[54] - f5p = [[eye_left[0], eye_left[1]], [eye_right[0], eye_right[1]], - [nose[0], nose[1]], [mouth_left[0], mouth_left[1]], - [mouth_right[0], mouth_right[1]]] - return f5p - - -def find_pupil(landmarks, np_img): - h, w, _ = np_img.shape - xmax = int(landmarks[:, 0].max()) - xmin = int(landmarks[:, 0].min()) - ymax = int(landmarks[:, 1].max()) - 
ymin = int(landmarks[:, 1].min()) - - if ymin >= ymax or xmin >= xmax or ymin < 0 or xmin < 0 or ymax > h or xmax > w: - return None - eye_img_bgr = np_img[ymin:ymax, xmin:xmax, :] - eye_img = cv2.cvtColor(eye_img_bgr, cv2.COLOR_BGR2GRAY) - eye_img = cv2.equalizeHist(eye_img) - n_marks = landmarks - np.array([xmin, ymin]).reshape([1, 2]) - eye_mask = cv2.fillConvexPoly( - np.zeros_like(eye_img), n_marks.astype(np.int32), 1) - ret, thresh = cv2.threshold(eye_img, 100, 255, - cv2.THRESH_BINARY | cv2.THRESH_OTSU) - thresh = (1 - thresh / 255.) * eye_mask - cnt = 0 - xm = [] - ym = [] - for i in range(thresh.shape[0]): - for j in range(thresh.shape[1]): - if thresh[i, j] > 0.5: - xm.append(j) - ym.append(i) - cnt += 1 - if cnt != 0: - xm.sort() - ym.sort() - xm = xm[cnt // 2] - ym = ym[cnt // 2] - else: - xm = thresh.shape[1] / 2 - ym = thresh.shape[0] / 2 - - return xm + xmin, ym + ymin - - -def all_file(file_dir): - L = [] - for root, dirs, files in os.walk(file_dir): - for file in files: - extend = os.path.splitext(file)[1] - if extend == '.png' or extend == '.jpg' or extend == '.jpeg': - L.append(os.path.join(root, file)) - return L - -def initialize_mask(box_width): - h, w = [box_width, box_width] - mask = np.zeros((h, w), np.uint8) - - center = (int(w / 2), int(h / 2)) - axes = (int(w * 0.4), int(h * 0.49)) - mask = cv2.ellipse(img=mask, center=center, axes=axes, angle=0, startAngle=0, endAngle=360, color=(1), - thickness=-1) - mask = cv2.distanceTransform(mask, cv2.DIST_L2, 3) - - maxn = max(w, h) * 0.15 - mask[(mask < 255) & (mask > 0)] = mask[(mask < 255) & (mask > 0)] / maxn - mask = np.clip(mask, 0, 1) - - return mask.astype(float) diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/sample_from_pose.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/sample_from_pose.py deleted file mode 100644 index ad1efa7835a5977dbf7fc99ebe037d2f3452d27c..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/sample_from_pose.py +++ /dev/null @@ -1,52 +0,0 @@ -import argparse -import logging -import os.path as osp -import random - -import torch - -from data.pose_attr_dataset import DeepFashionAttrPoseDataset -from models import create_model -from utils.logger import get_root_logger -from utils.options import dict2str, dict_to_nonedict, parse -from utils.util import make_exp_dirs, set_random_seed - - -def main(): - # options - parser = argparse.ArgumentParser() - parser.add_argument('-opt', type=str, help='Path to option YAML file.') - args = parser.parse_args() - opt = parse(args.opt, is_train=False) - - # mkdir and loggers - make_exp_dirs(opt) - log_file = osp.join(opt['path']['log'], f"test_{opt['name']}.log") - logger = get_root_logger( - logger_name='base', log_level=logging.INFO, log_file=log_file) - logger.info(dict2str(opt)) - - # convert to NoneDict, which returns None for missing keys - opt = dict_to_nonedict(opt) - - # random seed - seed = opt['manual_seed'] - if seed is None: - seed = random.randint(1, 10000) - logger.info(f'Random seed: {seed}') - set_random_seed(seed) - - test_dataset = DeepFashionAttrPoseDataset( - pose_dir=opt['pose_dir'], - texture_ann_dir=opt['texture_ann_file'], - shape_ann_path=opt['shape_ann_path']) - test_loader = torch.utils.data.DataLoader( - dataset=test_dataset, batch_size=4, shuffle=False) - logger.info(f'Number of test set: {len(test_dataset)}.') - - model = create_model(opt) - _ = model.inference(test_loader, opt['path']['results_root']) - - -if __name__ == '__main__': - main() diff --git 
a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/static/__init__.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/static/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Salesforce/EDICT/my_diffusers/utils/__init__.py b/spaces/Salesforce/EDICT/my_diffusers/utils/__init__.py deleted file mode 100644 index c00a28e1058fbd47451bfe48e23865876c08ed69..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/utils/__init__.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os - -from .import_utils import ( - ENV_VARS_TRUE_AND_AUTO_VALUES, - ENV_VARS_TRUE_VALUES, - USE_JAX, - USE_TF, - USE_TORCH, - DummyObject, - is_flax_available, - is_inflect_available, - is_modelcards_available, - is_onnx_available, - is_scipy_available, - is_tf_available, - is_torch_available, - is_transformers_available, - is_unidecode_available, - requires_backends, -) -from .logging import get_logger -from .outputs import BaseOutput - - -logger = get_logger(__name__) - - -hf_cache_home = os.path.expanduser( - os.getenv("HF_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "huggingface")) -) -default_cache_path = os.path.join(hf_cache_home, "diffusers") - - -CONFIG_NAME = "config.json" -HUGGINGFACE_CO_RESOLVE_ENDPOINT = "https://huggingface.co" -DIFFUSERS_CACHE = default_cache_path -DIFFUSERS_DYNAMIC_MODULE_NAME = "diffusers_modules" -HF_MODULES_CACHE = os.getenv("HF_MODULES_CACHE", os.path.join(hf_cache_home, "modules")) diff --git a/spaces/Saturdays/desertIAragon/README.md b/spaces/Saturdays/desertIAragon/README.md deleted file mode 100644 index 90e8fef872b8f5b4314aff81d0832ee650ba372a..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/desertIAragon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: desertIAragon -emoji: 🌍 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - diff --git a/spaces/ServerX/PorcoDiaz/lib/infer_pack/attentions.py b/spaces/ServerX/PorcoDiaz/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = 
n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - 
self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
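# note: triu(-block_length) combined with tril(block_length) below keeps a band of
# ones of width 2*block_length + 1 around the diagonal, so each query may only attend
# to keys within block_length positions; scores outside the band are filled with -1e4
# before the softmax.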
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Shawn37/UTR_LM/README.md b/spaces/Shawn37/UTR_LM/README.md deleted file mode 100644 index 6f7ac10dc5a645cd7d30180e9b43f5f75e97b8ad..0000000000000000000000000000000000000000 --- a/spaces/Shawn37/UTR_LM/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: UTR_pred -app_file: app.py -python_version: 3.9.4 -pinned: true -license: bsd -sdk: streamlit ---- - -# UTR_pred -5' UTR prediction \ No newline at end of file diff --git a/spaces/ShayanP/Salesforce-codegen2-3_7B/README.md b/spaces/ShayanP/Salesforce-codegen2-3_7B/README.md deleted file mode 100644 index b0594a281c5dc3fa3cdc134ae05fd166a0d3044f..0000000000000000000000000000000000000000 --- a/spaces/ShayanP/Salesforce-codegen2-3_7B/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Salesforce-codegen2-3 7B -emoji: 📊 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Straits/SI43-photostyle1/models/stmodel.py b/spaces/Straits/SI43-photostyle1/models/stmodel.py deleted file mode 100644 index e7aacd1799c5d17a0e1ca5a795c63c501bb7cae9..0000000000000000000000000000000000000000 --- a/spaces/Straits/SI43-photostyle1/models/stmodel.py +++ /dev/null @@ -1,117 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import numpy as np -import math -import time - -class ConvCIN(nn.Module): - def __init__(self, n_styles, C_in, C_out, kernel_size, padding, stride, activation=None): - super(ConvCIN, self).__init__() - - self.reflection = nn.ReflectionPad2d(padding) - self.conv = nn.Conv2d(in_channels=C_in, out_channels=C_out, kernel_size=kernel_size, stride=stride) - nn.init.normal_(self.conv.weight, mean=0, std=1e-2) - - self.instnorm = nn.InstanceNorm2d(C_out)#, affine=True) - 
#nn.init.normal_(self.instnorm.weight, mean=1, std=1e-2) - #nn.init.normal_(self.instnorm.bias, mean=0, std=1e-2) - - - self.gamma = torch.nn.Parameter(data=torch.randn(n_styles, C_out)*1e-2 + 1, requires_grad=True) - #self.gamma.data.uniform_(1.0, 1.0) - - self.beta = torch.nn.Parameter(data=torch.randn(n_styles, C_out)*1e-2, requires_grad=True) - #self.beta.data.uniform_(0, 0) - - self.activation = activation - - def forward(self, x, style_1, style_2, alpha): - - x = self.reflection(x) - x = self.conv(x) - - x = self.instnorm(x) - - - if style_2 != None: - gamma = alpha*self.gamma[style_1] + (1-alpha)*self.gamma[style_2] - beta = alpha*self.beta[style_1] + (1-alpha)*self.beta[style_2] - else: - gamma = self.gamma[style_1] - beta = self.beta[style_1] - - - b,d,w,h = x.size() - x = x.view(b,d,w*h) - - x = (x*gamma.unsqueeze(-1) + beta.unsqueeze(-1)).view(b,d,w,h) - - if self.activation == 'relu': - x = F.relu(x) - elif self.activation == 'sigmoid': - x = torch.sigmoid(x) - - return x - -class ResidualBlock(nn.Module): - def __init__(self, n_styles, C_in, C_out): - super(ResidualBlock,self).__init__() - - self.convcin1 = ConvCIN(n_styles, C_in, C_out, kernel_size=3, padding=1, stride=1, activation='relu') - self.convcin2 = ConvCIN(n_styles, C_in, C_out, kernel_size=3, padding=1, stride=1) - - def forward(self, x, style_1, style_2, alpha): - out = self.convcin1(x, style_1, style_2, alpha) - out = self.convcin2(out, style_1, style_2, alpha) - return x + out - -class UpSampling(nn.Module): - def __init__(self, n_styles, C_in, C_out): - super(UpSampling,self).__init__() - - self.upsample = nn.Upsample(scale_factor=2, mode='nearest') - self.convcin = ConvCIN(n_styles, C_in, C_out, kernel_size=3, padding=1, stride=1, activation='relu') - - def forward(self, x, style_1, style_2, alpha): - x = self.upsample(x) - x = self.convcin(x, style_1, style_2, alpha) - return x - -class STModel(nn.Module): - def __init__(self, n_styles): - super(STModel,self).__init__() - - self.convcin1 = ConvCIN(n_styles, C_in=3, C_out=32, kernel_size=9, padding=4, stride=1, activation='relu') - self.convcin2 = ConvCIN(n_styles, C_in=32, C_out=64, kernel_size=3, padding=1, stride=2, activation='relu') - self.convcin3 = ConvCIN(n_styles, C_in=64, C_out=128, kernel_size=3, padding=1, stride=2, activation='relu') - - self.rb1 = ResidualBlock(n_styles, 128, 128) - self.rb2 = ResidualBlock(n_styles, 128, 128) - self.rb3 = ResidualBlock(n_styles, 128, 128) - self.rb4 = ResidualBlock(n_styles, 128, 128) - self.rb5 = ResidualBlock(n_styles, 128, 128) - - self.upsample1 = UpSampling(n_styles, 128, 64) - self.upsample2 = UpSampling(n_styles, 64, 32) - - self.convcin4 = ConvCIN(n_styles, C_in=32, C_out=3, kernel_size=9, padding=4, stride=1, activation='sigmoid') - - def forward(self, x, style_1, style_2=None, alpha=0.5): - x = self.convcin1(x, style_1, style_2, alpha) - x = self.convcin2(x, style_1, style_2, alpha) - x = self.convcin3(x, style_1, style_2, alpha) - - x = self.rb1(x, style_1, style_2, alpha) - x = self.rb2(x, style_1, style_2, alpha) - x = self.rb3(x, style_1, style_2, alpha) - x = self.rb4(x, style_1, style_2, alpha) - x = self.rb5(x, style_1, style_2, alpha) - - x = self.upsample1(x, style_1, style_2, alpha) - x = self.upsample2(x, style_1, style_2, alpha) - - x = self.convcin4(x, style_1, style_2, alpha) - - return x \ No newline at end of file diff --git a/spaces/Sumit7864/Image-Enhancer/realesrgan/archs/discriminator_arch.py b/spaces/Sumit7864/Image-Enhancer/realesrgan/archs/discriminator_arch.py deleted file mode 
100644 index 4b66ab1226d6793de846bc9828bbe427031a0e2d..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/realesrgan/archs/discriminator_arch.py +++ /dev/null @@ -1,67 +0,0 @@ -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn as nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm - - -@ARCH_REGISTRY.register() -class UNetDiscriminatorSN(nn.Module): - """Defines a U-Net discriminator with spectral normalization (SN) - - It is used in Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - Arg: - num_in_ch (int): Channel number of inputs. Default: 3. - num_feat (int): Channel number of base intermediate features. Default: 64. - skip_connection (bool): Whether to use skip connections between U-Net. Default: True. - """ - - def __init__(self, num_in_ch, num_feat=64, skip_connection=True): - super(UNetDiscriminatorSN, self).__init__() - self.skip_connection = skip_connection - norm = spectral_norm - # the first convolution - self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1) - # downsample - self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False)) - self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False)) - self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False)) - # upsample - self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False)) - self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False)) - self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False)) - # extra convolutions - self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1) - - def forward(self, x): - # downsample - x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True) - x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True) - x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True) - x3 = F.leaky_relu(self.conv3(x2), negative_slope=0.2, inplace=True) - - # upsample - x3 = F.interpolate(x3, scale_factor=2, mode='bilinear', align_corners=False) - x4 = F.leaky_relu(self.conv4(x3), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x4 = x4 + x2 - x4 = F.interpolate(x4, scale_factor=2, mode='bilinear', align_corners=False) - x5 = F.leaky_relu(self.conv5(x4), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x5 = x5 + x1 - x5 = F.interpolate(x5, scale_factor=2, mode='bilinear', align_corners=False) - x6 = F.leaky_relu(self.conv6(x5), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x6 = x6 + x0 - - # extra convolutions - out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True) - out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True) - out = self.conv9(out) - - return out diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_response.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_response.py deleted file mode 100644 index ce07f8153deb29c4cf5856fae0d92ac1170c1441..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_response.py +++ /dev/null @@ -1,825 +0,0 @@ -import asyncio -import collections.abc -import datetime -import enum -import json -import math -import time -import warnings -import zlib -from concurrent.futures import Executor -from 
http.cookies import Morsel, SimpleCookie -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Iterator, - Mapping, - MutableMapping, - Optional, - Tuple, - Union, - cast, -) - -from multidict import CIMultiDict, istr - -from . import hdrs, payload -from .abc import AbstractStreamWriter -from .helpers import ( - ETAG_ANY, - PY_38, - QUOTED_ETAG_RE, - ETag, - HeadersMixin, - parse_http_date, - rfc822_formatted_time, - sentinel, - validate_etag_value, -) -from .http import RESPONSES, SERVER_SOFTWARE, HttpVersion10, HttpVersion11 -from .payload import Payload -from .typedefs import JSONEncoder, LooseHeaders - -__all__ = ("ContentCoding", "StreamResponse", "Response", "json_response") - - -if TYPE_CHECKING: # pragma: no cover - from .web_request import BaseRequest - - BaseClass = MutableMapping[str, Any] -else: - BaseClass = collections.abc.MutableMapping - - -if not PY_38: - # allow samesite to be used in python < 3.8 - # already permitted in python 3.8, see https://bugs.python.org/issue29613 - Morsel._reserved["samesite"] = "SameSite" # type: ignore[attr-defined] - - -class ContentCoding(enum.Enum): - # The content codings that we have support for. - # - # Additional registered codings are listed at: - # https://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding - deflate = "deflate" - gzip = "gzip" - identity = "identity" - - -############################################################ -# HTTP Response classes -############################################################ - - -class StreamResponse(BaseClass, HeadersMixin): - - _length_check = True - - def __init__( - self, - *, - status: int = 200, - reason: Optional[str] = None, - headers: Optional[LooseHeaders] = None, - ) -> None: - self._body = None - self._keep_alive: Optional[bool] = None - self._chunked = False - self._compression = False - self._compression_force: Optional[ContentCoding] = None - self._cookies: SimpleCookie[str] = SimpleCookie() - - self._req: Optional[BaseRequest] = None - self._payload_writer: Optional[AbstractStreamWriter] = None - self._eof_sent = False - self._body_length = 0 - self._state: Dict[str, Any] = {} - - if headers is not None: - self._headers: CIMultiDict[str] = CIMultiDict(headers) - else: - self._headers = CIMultiDict() - - self.set_status(status, reason) - - @property - def prepared(self) -> bool: - return self._payload_writer is not None - - @property - def task(self) -> "Optional[asyncio.Task[None]]": - if self._req: - return self._req.task - else: - return None - - @property - def status(self) -> int: - return self._status - - @property - def chunked(self) -> bool: - return self._chunked - - @property - def compression(self) -> bool: - return self._compression - - @property - def reason(self) -> str: - return self._reason - - def set_status( - self, - status: int, - reason: Optional[str] = None, - _RESPONSES: Mapping[int, Tuple[str, str]] = RESPONSES, - ) -> None: - assert not self.prepared, ( - "Cannot change the response status code after " "the headers have been sent" - ) - self._status = int(status) - if reason is None: - try: - reason = _RESPONSES[self._status][0] - except Exception: - reason = "" - self._reason = reason - - @property - def keep_alive(self) -> Optional[bool]: - return self._keep_alive - - def force_close(self) -> None: - self._keep_alive = False - - @property - def body_length(self) -> int: - return self._body_length - - @property - def output_length(self) -> int: - warnings.warn("output_length is deprecated", DeprecationWarning) - assert 
self._payload_writer - return self._payload_writer.buffer_size - - def enable_chunked_encoding(self, chunk_size: Optional[int] = None) -> None: - """Enables automatic chunked transfer encoding.""" - self._chunked = True - - if hdrs.CONTENT_LENGTH in self._headers: - raise RuntimeError( - "You can't enable chunked encoding when " "a content length is set" - ) - if chunk_size is not None: - warnings.warn("Chunk size is deprecated #1615", DeprecationWarning) - - def enable_compression( - self, force: Optional[Union[bool, ContentCoding]] = None - ) -> None: - """Enables response compression encoding.""" - # Backwards compatibility for when force was a bool <0.17. - if type(force) == bool: - force = ContentCoding.deflate if force else ContentCoding.identity - warnings.warn( - "Using boolean for force is deprecated #3318", DeprecationWarning - ) - elif force is not None: - assert isinstance(force, ContentCoding), ( - "force should one of " "None, bool or " "ContentEncoding" - ) - - self._compression = True - self._compression_force = force - - @property - def headers(self) -> "CIMultiDict[str]": - return self._headers - - @property - def cookies(self) -> "SimpleCookie[str]": - return self._cookies - - def set_cookie( - self, - name: str, - value: str, - *, - expires: Optional[str] = None, - domain: Optional[str] = None, - max_age: Optional[Union[int, str]] = None, - path: str = "/", - secure: Optional[bool] = None, - httponly: Optional[bool] = None, - version: Optional[str] = None, - samesite: Optional[str] = None, - ) -> None: - """Set or update response cookie. - - Sets new cookie or updates existent with new value. - Also updates only those params which are not None. - """ - old = self._cookies.get(name) - if old is not None and old.coded_value == "": - # deleted cookie - self._cookies.pop(name, None) - - self._cookies[name] = value - c = self._cookies[name] - - if expires is not None: - c["expires"] = expires - elif c.get("expires") == "Thu, 01 Jan 1970 00:00:00 GMT": - del c["expires"] - - if domain is not None: - c["domain"] = domain - - if max_age is not None: - c["max-age"] = str(max_age) - elif "max-age" in c: - del c["max-age"] - - c["path"] = path - - if secure is not None: - c["secure"] = secure - if httponly is not None: - c["httponly"] = httponly - if version is not None: - c["version"] = version - if samesite is not None: - c["samesite"] = samesite - - def del_cookie( - self, name: str, *, domain: Optional[str] = None, path: str = "/" - ) -> None: - """Delete cookie. - - Creates new empty expired cookie. - """ - # TODO: do we need domain/path here? 
- self._cookies.pop(name, None) - self.set_cookie( - name, - "", - max_age=0, - expires="Thu, 01 Jan 1970 00:00:00 GMT", - domain=domain, - path=path, - ) - - @property - def content_length(self) -> Optional[int]: - # Just a placeholder for adding setter - return super().content_length - - @content_length.setter - def content_length(self, value: Optional[int]) -> None: - if value is not None: - value = int(value) - if self._chunked: - raise RuntimeError( - "You can't set content length when " "chunked encoding is enable" - ) - self._headers[hdrs.CONTENT_LENGTH] = str(value) - else: - self._headers.pop(hdrs.CONTENT_LENGTH, None) - - @property - def content_type(self) -> str: - # Just a placeholder for adding setter - return super().content_type - - @content_type.setter - def content_type(self, value: str) -> None: - self.content_type # read header values if needed - self._content_type = str(value) - self._generate_content_type_header() - - @property - def charset(self) -> Optional[str]: - # Just a placeholder for adding setter - return super().charset - - @charset.setter - def charset(self, value: Optional[str]) -> None: - ctype = self.content_type # read header values if needed - if ctype == "application/octet-stream": - raise RuntimeError( - "Setting charset for application/octet-stream " - "doesn't make sense, setup content_type first" - ) - assert self._content_dict is not None - if value is None: - self._content_dict.pop("charset", None) - else: - self._content_dict["charset"] = str(value).lower() - self._generate_content_type_header() - - @property - def last_modified(self) -> Optional[datetime.datetime]: - """The value of Last-Modified HTTP header, or None. - - This header is represented as a `datetime` object. - """ - return parse_http_date(self._headers.get(hdrs.LAST_MODIFIED)) - - @last_modified.setter - def last_modified( - self, value: Optional[Union[int, float, datetime.datetime, str]] - ) -> None: - if value is None: - self._headers.pop(hdrs.LAST_MODIFIED, None) - elif isinstance(value, (int, float)): - self._headers[hdrs.LAST_MODIFIED] = time.strftime( - "%a, %d %b %Y %H:%M:%S GMT", time.gmtime(math.ceil(value)) - ) - elif isinstance(value, datetime.datetime): - self._headers[hdrs.LAST_MODIFIED] = time.strftime( - "%a, %d %b %Y %H:%M:%S GMT", value.utctimetuple() - ) - elif isinstance(value, str): - self._headers[hdrs.LAST_MODIFIED] = value - - @property - def etag(self) -> Optional[ETag]: - quoted_value = self._headers.get(hdrs.ETAG) - if not quoted_value: - return None - elif quoted_value == ETAG_ANY: - return ETag(value=ETAG_ANY) - match = QUOTED_ETAG_RE.fullmatch(quoted_value) - if not match: - return None - is_weak, value = match.group(1, 2) - return ETag( - is_weak=bool(is_weak), - value=value, - ) - - @etag.setter - def etag(self, value: Optional[Union[ETag, str]]) -> None: - if value is None: - self._headers.pop(hdrs.ETAG, None) - elif (isinstance(value, str) and value == ETAG_ANY) or ( - isinstance(value, ETag) and value.value == ETAG_ANY - ): - self._headers[hdrs.ETAG] = ETAG_ANY - elif isinstance(value, str): - validate_etag_value(value) - self._headers[hdrs.ETAG] = f'"{value}"' - elif isinstance(value, ETag) and isinstance(value.value, str): - validate_etag_value(value.value) - hdr_value = f'W/"{value.value}"' if value.is_weak else f'"{value.value}"' - self._headers[hdrs.ETAG] = hdr_value - else: - raise ValueError( - f"Unsupported etag type: {type(value)}. 
" - f"etag must be str, ETag or None" - ) - - def _generate_content_type_header( - self, CONTENT_TYPE: istr = hdrs.CONTENT_TYPE - ) -> None: - assert self._content_dict is not None - assert self._content_type is not None - params = "; ".join(f"{k}={v}" for k, v in self._content_dict.items()) - if params: - ctype = self._content_type + "; " + params - else: - ctype = self._content_type - self._headers[CONTENT_TYPE] = ctype - - async def _do_start_compression(self, coding: ContentCoding) -> None: - if coding != ContentCoding.identity: - assert self._payload_writer is not None - self._headers[hdrs.CONTENT_ENCODING] = coding.value - self._payload_writer.enable_compression(coding.value) - # Compressed payload may have different content length, - # remove the header - self._headers.popall(hdrs.CONTENT_LENGTH, None) - - async def _start_compression(self, request: "BaseRequest") -> None: - if self._compression_force: - await self._do_start_compression(self._compression_force) - else: - accept_encoding = request.headers.get(hdrs.ACCEPT_ENCODING, "").lower() - for coding in ContentCoding: - if coding.value in accept_encoding: - await self._do_start_compression(coding) - return - - async def prepare(self, request: "BaseRequest") -> Optional[AbstractStreamWriter]: - if self._eof_sent: - return None - if self._payload_writer is not None: - return self._payload_writer - - return await self._start(request) - - async def _start(self, request: "BaseRequest") -> AbstractStreamWriter: - self._req = request - writer = self._payload_writer = request._payload_writer - - await self._prepare_headers() - await request._prepare_hook(self) - await self._write_headers() - - return writer - - async def _prepare_headers(self) -> None: - request = self._req - assert request is not None - writer = self._payload_writer - assert writer is not None - keep_alive = self._keep_alive - if keep_alive is None: - keep_alive = request.keep_alive - self._keep_alive = keep_alive - - version = request.version - - headers = self._headers - for cookie in self._cookies.values(): - value = cookie.output(header="")[1:] - headers.add(hdrs.SET_COOKIE, value) - - if self._compression: - await self._start_compression(request) - - if self._chunked: - if version != HttpVersion11: - raise RuntimeError( - "Using chunked encoding is forbidden " - "for HTTP/{0.major}.{0.minor}".format(request.version) - ) - writer.enable_chunking() - headers[hdrs.TRANSFER_ENCODING] = "chunked" - if hdrs.CONTENT_LENGTH in headers: - del headers[hdrs.CONTENT_LENGTH] - elif self._length_check: - writer.length = self.content_length - if writer.length is None: - if version >= HttpVersion11 and self.status != 204: - writer.enable_chunking() - headers[hdrs.TRANSFER_ENCODING] = "chunked" - if hdrs.CONTENT_LENGTH in headers: - del headers[hdrs.CONTENT_LENGTH] - else: - keep_alive = False - # HTTP 1.1: https://tools.ietf.org/html/rfc7230#section-3.3.2 - # HTTP 1.0: https://tools.ietf.org/html/rfc1945#section-10.4 - elif version >= HttpVersion11 and self.status in (100, 101, 102, 103, 204): - del headers[hdrs.CONTENT_LENGTH] - - if self.status not in (204, 304): - headers.setdefault(hdrs.CONTENT_TYPE, "application/octet-stream") - headers.setdefault(hdrs.DATE, rfc822_formatted_time()) - headers.setdefault(hdrs.SERVER, SERVER_SOFTWARE) - - # connection header - if hdrs.CONNECTION not in headers: - if keep_alive: - if version == HttpVersion10: - headers[hdrs.CONNECTION] = "keep-alive" - else: - if version == HttpVersion11: - headers[hdrs.CONNECTION] = "close" - - async def 
_write_headers(self) -> None: - request = self._req - assert request is not None - writer = self._payload_writer - assert writer is not None - # status line - version = request.version - status_line = "HTTP/{}.{} {} {}".format( - version[0], version[1], self._status, self._reason - ) - await writer.write_headers(status_line, self._headers) - - async def write(self, data: bytes) -> None: - assert isinstance( - data, (bytes, bytearray, memoryview) - ), "data argument must be byte-ish (%r)" % type(data) - - if self._eof_sent: - raise RuntimeError("Cannot call write() after write_eof()") - if self._payload_writer is None: - raise RuntimeError("Cannot call write() before prepare()") - - await self._payload_writer.write(data) - - async def drain(self) -> None: - assert not self._eof_sent, "EOF has already been sent" - assert self._payload_writer is not None, "Response has not been started" - warnings.warn( - "drain method is deprecated, use await resp.write()", - DeprecationWarning, - stacklevel=2, - ) - await self._payload_writer.drain() - - async def write_eof(self, data: bytes = b"") -> None: - assert isinstance( - data, (bytes, bytearray, memoryview) - ), "data argument must be byte-ish (%r)" % type(data) - - if self._eof_sent: - return - - assert self._payload_writer is not None, "Response has not been started" - - await self._payload_writer.write_eof(data) - self._eof_sent = True - self._req = None - self._body_length = self._payload_writer.output_size - self._payload_writer = None - - def __repr__(self) -> str: - if self._eof_sent: - info = "eof" - elif self.prepared: - assert self._req is not None - info = f"{self._req.method} {self._req.path} " - else: - info = "not prepared" - return f"<{self.__class__.__name__} {self.reason} {info}>" - - def __getitem__(self, key: str) -> Any: - return self._state[key] - - def __setitem__(self, key: str, value: Any) -> None: - self._state[key] = value - - def __delitem__(self, key: str) -> None: - del self._state[key] - - def __len__(self) -> int: - return len(self._state) - - def __iter__(self) -> Iterator[str]: - return iter(self._state) - - def __hash__(self) -> int: - return hash(id(self)) - - def __eq__(self, other: object) -> bool: - return self is other - - -class Response(StreamResponse): - def __init__( - self, - *, - body: Any = None, - status: int = 200, - reason: Optional[str] = None, - text: Optional[str] = None, - headers: Optional[LooseHeaders] = None, - content_type: Optional[str] = None, - charset: Optional[str] = None, - zlib_executor_size: Optional[int] = None, - zlib_executor: Optional[Executor] = None, - ) -> None: - if body is not None and text is not None: - raise ValueError("body and text are not allowed together") - - if headers is None: - real_headers: CIMultiDict[str] = CIMultiDict() - elif not isinstance(headers, CIMultiDict): - real_headers = CIMultiDict(headers) - else: - real_headers = headers # = cast('CIMultiDict[str]', headers) - - if content_type is not None and "charset" in content_type: - raise ValueError("charset must not be in content_type " "argument") - - if text is not None: - if hdrs.CONTENT_TYPE in real_headers: - if content_type or charset: - raise ValueError( - "passing both Content-Type header and " - "content_type or charset params " - "is forbidden" - ) - else: - # fast path for filling headers - if not isinstance(text, str): - raise TypeError("text argument must be str (%r)" % type(text)) - if content_type is None: - content_type = "text/plain" - if charset is None: - charset = "utf-8" - 
real_headers[hdrs.CONTENT_TYPE] = content_type + "; charset=" + charset - body = text.encode(charset) - text = None - else: - if hdrs.CONTENT_TYPE in real_headers: - if content_type is not None or charset is not None: - raise ValueError( - "passing both Content-Type header and " - "content_type or charset params " - "is forbidden" - ) - else: - if content_type is not None: - if charset is not None: - content_type += "; charset=" + charset - real_headers[hdrs.CONTENT_TYPE] = content_type - - super().__init__(status=status, reason=reason, headers=real_headers) - - if text is not None: - self.text = text - else: - self.body = body - - self._compressed_body: Optional[bytes] = None - self._zlib_executor_size = zlib_executor_size - self._zlib_executor = zlib_executor - - @property - def body(self) -> Optional[Union[bytes, Payload]]: - return self._body - - @body.setter - def body( - self, - body: bytes, - CONTENT_TYPE: istr = hdrs.CONTENT_TYPE, - CONTENT_LENGTH: istr = hdrs.CONTENT_LENGTH, - ) -> None: - if body is None: - self._body: Optional[bytes] = None - self._body_payload: bool = False - elif isinstance(body, (bytes, bytearray)): - self._body = body - self._body_payload = False - else: - try: - self._body = body = payload.PAYLOAD_REGISTRY.get(body) - except payload.LookupError: - raise ValueError("Unsupported body type %r" % type(body)) - - self._body_payload = True - - headers = self._headers - - # set content-length header if needed - if not self._chunked and CONTENT_LENGTH not in headers: - size = body.size - if size is not None: - headers[CONTENT_LENGTH] = str(size) - - # set content-type - if CONTENT_TYPE not in headers: - headers[CONTENT_TYPE] = body.content_type - - # copy payload headers - if body.headers: - for (key, value) in body.headers.items(): - if key not in headers: - headers[key] = value - - self._compressed_body = None - - @property - def text(self) -> Optional[str]: - if self._body is None: - return None - return self._body.decode(self.charset or "utf-8") - - @text.setter - def text(self, text: str) -> None: - assert text is None or isinstance( - text, str - ), "text argument must be str (%r)" % type(text) - - if self.content_type == "application/octet-stream": - self.content_type = "text/plain" - if self.charset is None: - self.charset = "utf-8" - - self._body = text.encode(self.charset) - self._body_payload = False - self._compressed_body = None - - @property - def content_length(self) -> Optional[int]: - if self._chunked: - return None - - if hdrs.CONTENT_LENGTH in self._headers: - return super().content_length - - if self._compressed_body is not None: - # Return length of the compressed body - return len(self._compressed_body) - elif self._body_payload: - # A payload without content length, or a compressed payload - return None - elif self._body is not None: - return len(self._body) - else: - return 0 - - @content_length.setter - def content_length(self, value: Optional[int]) -> None: - raise RuntimeError("Content length is set automatically") - - async def write_eof(self, data: bytes = b"") -> None: - if self._eof_sent: - return - if self._compressed_body is None: - body: Optional[Union[bytes, Payload]] = self._body - else: - body = self._compressed_body - assert not data, f"data arg is not supported, got {data!r}" - assert self._req is not None - assert self._payload_writer is not None - if body is not None: - if self._req._method == hdrs.METH_HEAD or self._status in [204, 304]: - await super().write_eof() - elif self._body_payload: - payload = cast(Payload, body) 
- await payload.write(self._payload_writer) - await super().write_eof() - else: - await super().write_eof(cast(bytes, body)) - else: - await super().write_eof() - - async def _start(self, request: "BaseRequest") -> AbstractStreamWriter: - if not self._chunked and hdrs.CONTENT_LENGTH not in self._headers: - if not self._body_payload: - if self._body is not None: - self._headers[hdrs.CONTENT_LENGTH] = str(len(self._body)) - else: - self._headers[hdrs.CONTENT_LENGTH] = "0" - - return await super()._start(request) - - def _compress_body(self, zlib_mode: int) -> None: - assert zlib_mode > 0 - compressobj = zlib.compressobj(wbits=zlib_mode) - body_in = self._body - assert body_in is not None - self._compressed_body = compressobj.compress(body_in) + compressobj.flush() - - async def _do_start_compression(self, coding: ContentCoding) -> None: - if self._body_payload or self._chunked: - return await super()._do_start_compression(coding) - - if coding != ContentCoding.identity: - # Instead of using _payload_writer.enable_compression, - # compress the whole body - zlib_mode = ( - 16 + zlib.MAX_WBITS if coding == ContentCoding.gzip else zlib.MAX_WBITS - ) - body_in = self._body - assert body_in is not None - if ( - self._zlib_executor_size is not None - and len(body_in) > self._zlib_executor_size - ): - await asyncio.get_event_loop().run_in_executor( - self._zlib_executor, self._compress_body, zlib_mode - ) - else: - self._compress_body(zlib_mode) - - body_out = self._compressed_body - assert body_out is not None - - self._headers[hdrs.CONTENT_ENCODING] = coding.value - self._headers[hdrs.CONTENT_LENGTH] = str(len(body_out)) - - -def json_response( - data: Any = sentinel, - *, - text: Optional[str] = None, - body: Optional[bytes] = None, - status: int = 200, - reason: Optional[str] = None, - headers: Optional[LooseHeaders] = None, - content_type: str = "application/json", - dumps: JSONEncoder = json.dumps, -) -> Response: - if data is not sentinel: - if text or body: - raise ValueError("only one of data, text, or body should be specified") - else: - text = dumps(data) - return Response( - text=text, - body=body, - status=status, - reason=reason, - headers=headers, - content_type=content_type, - ) diff --git a/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/__init__.py b/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/__init__.py deleted file mode 100644 index 0eb212301f1069661dec92b796e931d694d4fd86..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .mediapipe_face_common import generate_annotation - - -class MediaPipeFace: - def __call__(self, image, max_faces: int = 1, min_confidence: float = 0.5, **kwargs): - return generate_annotation(image, max_faces, min_confidence) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/__init__.py deleted file mode 100644 index 6b0668157052ce7b796ef50bc7ee85361e7605b9..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from .build import META_ARCH_REGISTRY, build_model # isort:skip - -from .panoptic_fpn import PanopticFPN - -# import all the meta_arch, so they will be registered -from .rcnn import GeneralizedRCNN, ProposalNetwork -from .dense_detector import DenseDetector -from .retinanet import RetinaNet -from .fcos import FCOS -from .semantic_seg import SEM_SEG_HEADS_REGISTRY, SemanticSegmentor, build_sem_seg_head - - -__all__ = list(globals().keys()) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/colormap.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/colormap.py deleted file mode 100644 index 14ded1659b40b161358c4aaf9cc84ffe0ffafe64..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/colormap.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -An awesome colormap for really neat visualizations. -Copied from Detectron, and removed gray colors. -""" - -import numpy as np -import random - -__all__ = ["colormap", "random_color", "random_colors"] - -# fmt: off -# RGB: -_COLORS = np.array( - [ - 0.000, 0.447, 0.741, - 0.850, 0.325, 0.098, - 0.929, 0.694, 0.125, - 0.494, 0.184, 0.556, - 0.466, 0.674, 0.188, - 0.301, 0.745, 0.933, - 0.635, 0.078, 0.184, - 0.300, 0.300, 0.300, - 0.600, 0.600, 0.600, - 1.000, 0.000, 0.000, - 1.000, 0.500, 0.000, - 0.749, 0.749, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 1.000, - 0.667, 0.000, 1.000, - 0.333, 0.333, 0.000, - 0.333, 0.667, 0.000, - 0.333, 1.000, 0.000, - 0.667, 0.333, 0.000, - 0.667, 0.667, 0.000, - 0.667, 1.000, 0.000, - 1.000, 0.333, 0.000, - 1.000, 0.667, 0.000, - 1.000, 1.000, 0.000, - 0.000, 0.333, 0.500, - 0.000, 0.667, 0.500, - 0.000, 1.000, 0.500, - 0.333, 0.000, 0.500, - 0.333, 0.333, 0.500, - 0.333, 0.667, 0.500, - 0.333, 1.000, 0.500, - 0.667, 0.000, 0.500, - 0.667, 0.333, 0.500, - 0.667, 0.667, 0.500, - 0.667, 1.000, 0.500, - 1.000, 0.000, 0.500, - 1.000, 0.333, 0.500, - 1.000, 0.667, 0.500, - 1.000, 1.000, 0.500, - 0.000, 0.333, 1.000, - 0.000, 0.667, 1.000, - 0.000, 1.000, 1.000, - 0.333, 0.000, 1.000, - 0.333, 0.333, 1.000, - 0.333, 0.667, 1.000, - 0.333, 1.000, 1.000, - 0.667, 0.000, 1.000, - 0.667, 0.333, 1.000, - 0.667, 0.667, 1.000, - 0.667, 1.000, 1.000, - 1.000, 0.000, 1.000, - 1.000, 0.333, 1.000, - 1.000, 0.667, 1.000, - 0.333, 0.000, 0.000, - 0.500, 0.000, 0.000, - 0.667, 0.000, 0.000, - 0.833, 0.000, 0.000, - 1.000, 0.000, 0.000, - 0.000, 0.167, 0.000, - 0.000, 0.333, 0.000, - 0.000, 0.500, 0.000, - 0.000, 0.667, 0.000, - 0.000, 0.833, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 0.167, - 0.000, 0.000, 0.333, - 0.000, 0.000, 0.500, - 0.000, 0.000, 0.667, - 0.000, 0.000, 0.833, - 0.000, 0.000, 1.000, - 0.000, 0.000, 0.000, - 0.143, 0.143, 0.143, - 0.857, 0.857, 0.857, - 1.000, 1.000, 1.000 - ] -).astype(np.float32).reshape(-1, 3) -# fmt: on - - -def colormap(rgb=False, maximum=255): - """ - Args: - rgb (bool): whether to return RGB colors or BGR colors. - maximum (int): either 255 or 1 - - Returns: - ndarray: a float32 array of Nx3 colors, in range [0, 255] or [0, 1] - """ - assert maximum in [255, 1], maximum - c = _COLORS * maximum - if not rgb: - c = c[:, ::-1] - return c - - -def random_color(rgb=False, maximum=255): - """ - Args: - rgb (bool): whether to return RGB colors or BGR colors. 
- maximum (int): either 255 or 1 - - Returns: - ndarray: a vector of 3 numbers - """ - idx = np.random.randint(0, len(_COLORS)) - ret = _COLORS[idx] * maximum - if not rgb: - ret = ret[::-1] - return ret - - -def random_colors(N, rgb=False, maximum=255): - """ - Args: - N (int): number of unique colors needed - rgb (bool): whether to return RGB colors or BGR colors. - maximum (int): either 255 or 1 - - Returns: - ndarray: a list of random_color - """ - indices = random.sample(range(len(_COLORS)), N) - ret = [_COLORS[i] * maximum for i in indices] - if not rgb: - ret = [x[::-1] for x in ret] - return ret - - -if __name__ == "__main__": - import cv2 - - size = 100 - H, W = 10, 10 - canvas = np.random.rand(H * size, W * size, 3).astype("float32") - for h in range(H): - for w in range(W): - idx = h * W + w - if idx >= len(_COLORS): - break - canvas[h * size : (h + 1) * size, w * size : (w + 1) * size] = _COLORS[idx] - cv2.imshow("a", canvas) - cv2.waitKey(0) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py deleted file mode 100644 index 2887c7718f864f5c64f245c7eee307c04835c41f..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py +++ /dev/null @@ -1,528 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import logging -import fvcore.nn.weight_init as weight_init -from typing import Optional -import torch -from torch import nn, Tensor -from torch.nn import functional as F - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.layers import Conv2d - -from .position_encoding import PositionEmbeddingSine -from .transformer import Transformer - -from annotator.oneformer.detectron2.utils.registry import Registry - - -TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE") -TRANSFORMER_DECODER_REGISTRY.__doc__ = """ -Registry for transformer module in OneFormer. -""" - - -def build_transformer_decoder(cfg, in_channels, mask_classification=True): - """ - Build a instance embedding branch from `cfg.MODEL.INS_EMBED_HEAD.NAME`. 
- """ - name = cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME - return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification) - - -class SelfAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - return self.forward_post(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - - -class CrossAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, memory, - memory_mask: 
Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - return self.forward_post(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - - -class FFNLayer(nn.Module): - - def __init__(self, d_model, dim_feedforward=2048, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm = nn.LayerNorm(d_model) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt): - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - return tgt - - def forward_pre(self, tgt): - tgt2 = self.norm(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, tgt): - if self.normalize_before: - return self.forward_pre(tgt) - return self.forward_post(tgt) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - -class MLP(nn.Module): - """ Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -@TRANSFORMER_DECODER_REGISTRY.register() -class ContrastiveMultiScaleMaskedTransformerDecoder(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "static_query" in k: - newk = k.replace("static_query", "query_feat") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." 
- ) - - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - num_classes: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dropout: float, - dim_feedforward: int, - enc_layers: int, - is_train: bool, - dec_layers: int, - class_dec_layers: int, - pre_norm: bool, - mask_dim: int, - enforce_input_project: bool, - use_task_norm: bool, - ): - """ - NOTE: this interface is experimental. - Args: - in_channels: channels of the input features - mask_classification: whether to add mask classifier or not - num_classes: number of classes - hidden_dim: Transformer feature dimension - num_queries: number of queries - nheads: number of heads - dim_feedforward: feature dimension in feedforward network - enc_layers: number of Transformer encoder layers - dec_layers: number of Transformer decoder layers - pre_norm: whether to use pre-LayerNorm or not - mask_dim: mask feature dimension - enforce_input_project: add input project 1x1 conv even if input - channels and hidden dim is identical - """ - super().__init__() - - assert mask_classification, "Only support mask classification model" - self.mask_classification = mask_classification - self.is_train = is_train - self.use_task_norm = use_task_norm - - # positional encoding - N_steps = hidden_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - self.class_transformer = Transformer( - d_model=hidden_dim, - dropout=dropout, - nhead=nheads, - dim_feedforward=dim_feedforward, - num_encoder_layers=enc_layers, - num_decoder_layers=class_dec_layers, - normalize_before=pre_norm, - return_intermediate_dec=False, - ) - - # define Transformer decoder here - self.num_heads = nheads - self.num_layers = dec_layers - self.transformer_self_attention_layers = nn.ModuleList() - self.transformer_cross_attention_layers = nn.ModuleList() - self.transformer_ffn_layers = nn.ModuleList() - - for _ in range(self.num_layers): - self.transformer_self_attention_layers.append( - SelfAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_cross_attention_layers.append( - CrossAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_ffn_layers.append( - FFNLayer( - d_model=hidden_dim, - dim_feedforward=dim_feedforward, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.decoder_norm = nn.LayerNorm(hidden_dim) - - self.num_queries = num_queries - # learnable query p.e. 
- self.query_embed = nn.Embedding(num_queries, hidden_dim) - - # level embedding (we always use 3 scales) - self.num_feature_levels = 3 - self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim) - self.input_proj = nn.ModuleList() - for _ in range(self.num_feature_levels): - if in_channels != hidden_dim or enforce_input_project: - self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1)) - weight_init.c2_xavier_fill(self.input_proj[-1]) - else: - self.input_proj.append(nn.Sequential()) - - self.class_input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.class_input_proj) - - # output FFNs - if self.mask_classification: - self.class_embed = nn.Linear(hidden_dim, num_classes + 1) - self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3) - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - ret["hidden_dim"] = cfg.MODEL.ONE_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.ONE_FORMER.NHEADS - ret["dim_feedforward"] = cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD - - # NOTE: because we add learnable query features which requires supervision, - # we add minus 1 to decoder layers to be consistent with our loss - # implementation: that is, number of auxiliary losses is always - # equal to number of decoder layers. With learnable query features, the number of - # auxiliary losses equals number of decoders plus 1. - assert cfg.MODEL.ONE_FORMER.DEC_LAYERS >= 1 - ret["dec_layers"] = cfg.MODEL.ONE_FORMER.DEC_LAYERS - 1 - ret["class_dec_layers"] = cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS - ret["enc_layers"] = cfg.MODEL.ONE_FORMER.ENC_LAYERS - ret["dropout"] = cfg.MODEL.ONE_FORMER.DROPOUT - ret["pre_norm"] = cfg.MODEL.ONE_FORMER.PRE_NORM - ret["enforce_input_project"] = cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ - ret["is_train"] = cfg.MODEL.IS_TRAIN - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["use_task_norm"] = cfg.MODEL.ONE_FORMER.USE_TASK_NORM - - return ret - - def forward(self, x, mask_features, tasks, mask = None): - # x is a list of multi-scale feature - assert len(x) == self.num_feature_levels - src = [] - pos = [] - size_list = [] - - # disable mask, it does not affect performance - del mask - - for i in range(self.num_feature_levels): - size_list.append(x[i].shape[-2:]) - pos.append(self.pe_layer(x[i], None).flatten(2)) - src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None]) - - # flatten NxCxHxW to HWxNxC - pos[-1] = pos[-1].permute(2, 0, 1) - src[-1] = src[-1].permute(2, 0, 1) - - _, bs, _ = src[0].shape - - # QxNxC - query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1) - tasks = tasks.unsqueeze(0) - if self.use_task_norm: - tasks = self.decoder_norm(tasks) - - feats = self.pe_layer(mask_features, None) - - out_t, _ = self.class_transformer(feats, None, - self.query_embed.weight[:-1], - self.class_input_proj(mask_features), - tasks if self.use_task_norm else None) - out_t = out_t[0].permute(1, 0, 2) - - out = torch.cat([out_t, tasks], dim=0) - - output = out.clone() - - predictions_class = [] - predictions_mask = [] - - # prediction heads on learnable query features - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0], i=0) - 
predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - for i in range(self.num_layers): - level_index = i % self.num_feature_levels - attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False - # attention: cross-attention first - output = self.transformer_cross_attention_layers[i]( - output, src[level_index], - memory_mask=attn_mask, - memory_key_padding_mask=None, # here we do not apply masking on padded region - pos=pos[level_index], query_pos=query_embed - ) - - output = self.transformer_self_attention_layers[i]( - output, tgt_mask=None, - tgt_key_padding_mask=None, - query_pos=query_embed - ) - - # FFN - output = self.transformer_ffn_layers[i]( - output - ) - - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels], i=i+1) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - assert len(predictions_class) == self.num_layers + 1 - if self.is_train: - query_class = out.permute(1, 0, 2) - else: - query_class = None - out = { - 'contrastive_logits': query_class, - 'pred_logits': predictions_class[-1], - 'pred_masks': predictions_mask[-1], - 'aux_outputs': self._set_aux_loss( - predictions_class if self.mask_classification else None, - predictions_mask, - ) - } - - return out - - def forward_prediction_heads(self, output, mask_features, attn_mask_target_size, i): - decoder_output = self.decoder_norm(output) - decoder_output = decoder_output.transpose(0, 1) - outputs_class = self.class_embed(decoder_output) - mask_embed = self.mask_embed(decoder_output) - outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features) - - # NOTE: prediction is of higher-resolution - # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW] - attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False) - - # save_attn_masks(attn_mask.sigmoid() < 0.5, fname=f'demo/maps/{i}_pre_bool') - - # must use bool type - # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged. - attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool() - attn_mask = attn_mask.detach() - - return outputs_class, outputs_mask, attn_mask - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_seg_masks): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. 
- if self.mask_classification: - aux_list = [ - {"pred_logits": a, "pred_masks": b} - for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1]) - ] - else: - aux_list = [{"pred_masks": b} for b, in outputs_seg_masks[:-1]] - - return aux_list \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/ops/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/ops/__init__.py deleted file mode 100644 index bec51c75b9363a9a19e9fb5c35f4e7dbd6f7751c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/ops/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .encoding import Encoding -from .wrappers import Upsample, resize - -__all__ = ['Upsample', 'resize', 'Encoding'] diff --git a/spaces/THUDM/CogView2/model.py b/spaces/THUDM/CogView2/model.py deleted file mode 100644 index 9c60db1d80f19e53ad64fd47736e056cc2c01505..0000000000000000000000000000000000000000 --- a/spaces/THUDM/CogView2/model.py +++ /dev/null @@ -1,449 +0,0 @@ -# This code is adapted from https://github.com/THUDM/CogView2/blob/4e55cce981eb94b9c8c1f19ba9f632fd3ee42ba8/cogview2_text2image.py - -from __future__ import annotations - -import argparse -import functools -import logging -import os -import pathlib -import random -import subprocess -import sys -import time -import zipfile -from typing import Any - -if os.getenv('SYSTEM') == 'spaces': - subprocess.run('pip install icetk==0.0.3'.split()) - subprocess.run('pip install SwissArmyTransformer==0.2.4'.split()) - subprocess.run( - 'pip install git+https://github.com/Sleepychord/Image-Local-Attention@43fee31' - .split()) - #subprocess.run('git clone https://github.com/NVIDIA/apex'.split()) - #subprocess.run('git checkout 1403c21'.split(), cwd='apex') - #with open('patch.apex') as f: - # subprocess.run('patch -p1'.split(), cwd='apex', stdin=f) - #subprocess.run( - # 'pip install -v --disable-pip-version-check --no-cache-dir --global-option --cpp_ext --global-option --cuda_ext ./' - # .split(), - # cwd='apex') - #subprocess.run('rm -rf apex'.split()) - with open('patch') as f: - subprocess.run('patch -p1'.split(), cwd='CogView2', stdin=f) - - from huggingface_hub import hf_hub_download - - def download_and_extract_icetk_models() -> None: - icetk_model_dir = pathlib.Path('/home/user/.icetk_models') - icetk_model_dir.mkdir() - path = hf_hub_download('THUDM/icetk', - 'models.zip', - use_auth_token=os.getenv('HF_TOKEN')) - with zipfile.ZipFile(path) as f: - f.extractall(path=icetk_model_dir.as_posix()) - - def download_and_extract_cogview2_models(name: str) -> None: - path = hf_hub_download('THUDM/CogView2', - name, - use_auth_token=os.getenv('HF_TOKEN')) - with zipfile.ZipFile(path) as f: - f.extractall() - os.remove(path) - - download_and_extract_icetk_models() - names = [ - 'coglm.zip', - 'cogview2-dsr.zip', - 'cogview2-itersr.zip', - ] - for name in names: - download_and_extract_cogview2_models(name) - - os.environ['SAT_HOME'] = '/home/user/app/sharefs/cogview-new' - -import gradio as gr -import numpy as np -import torch -from icetk import icetk as tokenizer -from SwissArmyTransformer import get_args -from SwissArmyTransformer.arguments import set_random_seed -from SwissArmyTransformer.generation.autoregressive_sampling import \ - filling_sequence -from SwissArmyTransformer.model import CachedAutoregressiveModel - -app_dir = pathlib.Path(__file__).parent -submodule_dir = app_dir / 'CogView2' -sys.path.insert(0, submodule_dir.as_posix()) - -from coglm_strategy import CoglmStrategy 
-from sr_pipeline import SRGroup - -formatter = logging.Formatter( - '[%(asctime)s] %(name)s %(levelname)s: %(message)s', - datefmt='%Y-%m-%d %H:%M:%S') -stream_handler = logging.StreamHandler(stream=sys.stdout) -stream_handler.setLevel(logging.INFO) -stream_handler.setFormatter(formatter) -logger = logging.getLogger(__name__) -logger.setLevel(logging.INFO) -logger.propagate = False -logger.addHandler(stream_handler) - -tokenizer.add_special_tokens( - ['', '', '']) - - -def get_masks_and_position_ids_coglm( - seq: torch.Tensor, context_length: int -) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - tokens = seq.unsqueeze(0) - - attention_mask = torch.ones((1, len(seq), len(seq)), device=tokens.device) - attention_mask.tril_() - attention_mask[..., :context_length] = 1 - attention_mask.unsqueeze_(1) - - position_ids = torch.zeros(len(seq), - device=tokens.device, - dtype=torch.long) - torch.arange(0, context_length, out=position_ids[:context_length]) - torch.arange(512, - 512 + len(seq) - context_length, - out=position_ids[context_length:]) - - position_ids = position_ids.unsqueeze(0) - return tokens, attention_mask, position_ids - - -class InferenceModel(CachedAutoregressiveModel): - def final_forward(self, logits, **kwargs): - logits_parallel = logits - logits_parallel = torch.nn.functional.linear( - logits_parallel.float(), - self.transformer.word_embeddings.weight[:20000].float()) - return logits_parallel - - -def get_recipe(name: str) -> dict[str, Any]: - r = { - 'attn_plus': 1.4, - 'temp_all_gen': 1.15, - 'topk_gen': 16, - 'temp_cluster_gen': 1., - 'temp_all_dsr': 1.5, - 'topk_dsr': 100, - 'temp_cluster_dsr': 0.89, - 'temp_all_itersr': 1.3, - 'topk_itersr': 16, - 'query_template': '{}', - } - if name == 'none': - pass - elif name == 'mainbody': - r['query_template'] = '{} 高清摄影 隔绝' - - elif name == 'photo': - r['query_template'] = '{} 高清摄影' - - elif name == 'flat': - r['query_template'] = '{} 平面风格' - # r['attn_plus'] = 1.8 - # r['temp_cluster_gen'] = 0.75 - r['temp_all_gen'] = 1.1 - r['topk_dsr'] = 5 - r['temp_cluster_dsr'] = 0.4 - - r['temp_all_itersr'] = 1 - r['topk_itersr'] = 5 - elif name == 'comics': - r['query_template'] = '{} 漫画 隔绝' - r['topk_dsr'] = 5 - r['temp_cluster_dsr'] = 0.4 - r['temp_all_gen'] = 1.1 - r['temp_all_itersr'] = 1 - r['topk_itersr'] = 5 - elif name == 'oil': - r['query_template'] = '{} 油画风格' - pass - elif name == 'sketch': - r['query_template'] = '{} 素描风格' - r['temp_all_gen'] = 1.1 - elif name == 'isometric': - r['query_template'] = '{} 等距矢量图' - r['temp_all_gen'] = 1.1 - elif name == 'chinese': - r['query_template'] = '{} 水墨国画' - r['temp_all_gen'] = 1.12 - elif name == 'watercolor': - r['query_template'] = '{} 水彩画风格' - return r - - -def get_default_args() -> argparse.Namespace: - arg_list = ['--mode', 'inference', '--fp16'] - args = get_args(arg_list) - known = argparse.Namespace(img_size=160, - only_first_stage=False, - inverse_prompt=False, - style='mainbody') - args = argparse.Namespace(**vars(args), **vars(known), - **get_recipe(known.style)) - return args - - -class Model: - def __init__(self, - max_inference_batch_size: int, - only_first_stage: bool = False): - self.args = get_default_args() - self.args.only_first_stage = only_first_stage - self.args.max_inference_batch_size = max_inference_batch_size - - self.model, self.args = self.load_model() - self.strategy = self.load_strategy() - self.srg = self.load_srg() - - self.query_template = self.args.query_template - self.style = self.args.style - self.device = torch.device(self.args.device) - 
self.fp16 = self.args.fp16 - self.max_batch_size = self.args.max_inference_batch_size - self.only_first_stage = self.args.only_first_stage - - def load_model(self) -> tuple[InferenceModel, argparse.Namespace]: - logger.info('--- load_model ---') - start = time.perf_counter() - - model, args = InferenceModel.from_pretrained(self.args, 'coglm') - if not self.args.only_first_stage: - model.transformer.cpu() - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return model, args - - def load_strategy(self) -> CoglmStrategy: - logger.info('--- load_strategy ---') - start = time.perf_counter() - - invalid_slices = [slice(tokenizer.num_image_tokens, None)] - strategy = CoglmStrategy(invalid_slices, - temperature=self.args.temp_all_gen, - top_k=self.args.topk_gen, - top_k_cluster=self.args.temp_cluster_gen) - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return strategy - - def load_srg(self) -> SRGroup: - logger.info('--- load_srg ---') - start = time.perf_counter() - - srg = None if self.args.only_first_stage else SRGroup(self.args) - if srg is not None: - srg.dsr.max_bz = 2 - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return srg - - def update_style(self, style: str) -> None: - if style == self.style: - return - logger.info('--- update_style ---') - start = time.perf_counter() - - self.style = style - self.args = argparse.Namespace(**(vars(self.args) | get_recipe(style))) - self.query_template = self.args.query_template - logger.debug(f'{self.query_template=}') - - self.strategy.temperature = self.args.temp_all_gen - - if self.srg is not None: - self.srg.dsr.strategy.temperature = self.args.temp_all_dsr - self.srg.dsr.strategy.topk = self.args.topk_dsr - self.srg.dsr.strategy.temperature2 = self.args.temp_cluster_dsr - - self.srg.itersr.strategy.temperature = self.args.temp_all_itersr - self.srg.itersr.strategy.topk = self.args.topk_itersr - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - - def run(self, text: str, style: str, seed: int, only_first_stage: bool, - num: int) -> list[np.ndarray] | None: - logger.info('==================== run ====================') - start = time.perf_counter() - - self.update_style(style) - set_random_seed(seed) - seq, txt_len = self.preprocess_text(text) - if seq is None: - return None - - self.only_first_stage = only_first_stage - if not self.only_first_stage or self.srg is not None: - self.srg.dsr.model.cpu() - self.srg.itersr.model.cpu() - torch.cuda.empty_cache() - self.model.transformer.to(self.device) - tokens = self.generate_tokens(seq, txt_len, num) - - if not self.only_first_stage: - self.model.transformer.cpu() - torch.cuda.empty_cache() - self.srg.dsr.model.to(self.device) - self.srg.itersr.model.to(self.device) - torch.cuda.empty_cache() - res = self.generate_images(seq, txt_len, tokens) - - elapsed = time.perf_counter() - start - logger.info(f'Elapsed: {elapsed}') - logger.info('==================== done ====================') - return res - - @torch.inference_mode() - def preprocess_text( - self, text: str) -> tuple[torch.Tensor, int] | tuple[None, None]: - logger.info('--- preprocess_text ---') - start = time.perf_counter() - - text = self.query_template.format(text) - logger.debug(f'{text=}') - seq = tokenizer.encode(text) - logger.info(f'{len(seq)=}') - if len(seq) > 110: - logger.info('The input text is too long.') - return None, None - txt_len = len(seq) - 1 - seq = 
torch.tensor(seq + [-1] * 400, device=self.device) - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return seq, txt_len - - @torch.inference_mode() - def generate_tokens(self, - seq: torch.Tensor, - txt_len: int, - num: int = 8) -> torch.Tensor: - logger.info('--- generate_tokens ---') - start = time.perf_counter() - - # calibrate text length - log_attention_weights = torch.zeros( - len(seq), - len(seq), - device=self.device, - dtype=torch.half if self.fp16 else torch.float32) - log_attention_weights[:, :txt_len] = self.args.attn_plus - get_func = functools.partial(get_masks_and_position_ids_coglm, - context_length=txt_len) - - output_list = [] - remaining = num - for _ in range((num + self.max_batch_size - 1) // self.max_batch_size): - self.strategy.start_pos = txt_len + 1 - coarse_samples = filling_sequence( - self.model, - seq.clone(), - batch_size=min(remaining, self.max_batch_size), - strategy=self.strategy, - log_attention_weights=log_attention_weights, - get_masks_and_position_ids=get_func)[0] - output_list.append(coarse_samples) - remaining -= self.max_batch_size - output_tokens = torch.cat(output_list, dim=0) - logger.debug(f'{output_tokens.shape=}') - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return output_tokens - - @staticmethod - def postprocess(tensor: torch.Tensor) -> np.ndarray: - return tensor.cpu().mul(255).add_(0.5).clamp_(0, 255).permute( - 1, 2, 0).to(torch.uint8).numpy() - - @torch.inference_mode() - def generate_images(self, seq: torch.Tensor, txt_len: int, - tokens: torch.Tensor) -> list[np.ndarray]: - logger.info('--- generate_images ---') - start = time.perf_counter() - - logger.debug(f'{self.only_first_stage=}') - res = [] - if self.only_first_stage: - for i in range(len(tokens)): - seq = tokens[i] - decoded_img = tokenizer.decode(image_ids=seq[-400:]) - decoded_img = torch.nn.functional.interpolate(decoded_img, - size=(480, 480)) - decoded_img = self.postprocess(decoded_img[0]) - res.append(decoded_img) # only the last image (target) - else: # sr - iter_tokens = self.srg.sr_base(tokens[:, -400:], seq[:txt_len]) - for seq in iter_tokens: - decoded_img = tokenizer.decode(image_ids=seq[-3600:]) - decoded_img = torch.nn.functional.interpolate(decoded_img, - size=(480, 480)) - decoded_img = self.postprocess(decoded_img[0]) - res.append(decoded_img) # only the last image (target) - - elapsed = time.perf_counter() - start - logger.info(f'--- done ({elapsed=:.3f}) ---') - return res - - -class AppModel(Model): - def __init__(self, max_inference_batch_size: int, only_first_stage: bool): - super().__init__(max_inference_batch_size, only_first_stage) - self.translator = gr.Interface.load( - 'spaces/chinhon/translation_eng2ch') - self.rng = random.Random() - - def make_grid(self, images: list[np.ndarray] | None) -> np.ndarray | None: - if images is None or len(images) == 0: - return None - ncols = 1 - while True: - if ncols**2 >= len(images): - break - ncols += 1 - nrows = (len(images) + ncols - 1) // ncols - h, w = images[0].shape[:2] - grid = np.zeros((h * nrows, w * ncols, 3), dtype=np.uint8) - for i in range(nrows): - for j in range(ncols): - index = ncols * i + j - if index >= len(images): - break - grid[h * i:h * (i + 1), w * j:w * (j + 1)] = images[index] - return grid - - def run_advanced( - self, text: str, translate: bool, style: str, seed: int, - only_first_stage: bool, num: int - ) -> tuple[str | None, np.ndarray | None, list[np.ndarray] | None]: - logger.info( - 
f'{text=}, {translate=}, {style=}, {seed=}, {only_first_stage=}, {num=}' - ) - if translate: - text = translated_text = self.translator(text) - else: - translated_text = None - results = self.run(text, style, seed, only_first_stage, num) - grid_image = self.make_grid(results) - return translated_text, grid_image, results - - def run_simple(self, text: str) -> np.ndarray | None: - logger.info(f'{text=}') - if text.isascii(): - text = self.translator(text) - seed = self.rng.randint(0, 100000) - results = self.run(text, 'photo', seed, False, 4) - grid_image = self.make_grid(results) - return grid_image diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/utils/test_cost.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/utils/test_cost.py deleted file mode 100644 index 2475feb936e8a4c74437e6effb59ca57a9e8c31f..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/utils/test_cost.py +++ /dev/null @@ -1,168 +0,0 @@ -import os -import pytest -import torch -import numpy as np -from mmcv import Config - -from risk_biased.utils.cost import BaseCostTorch, TTCCostTorch, DistanceCostTorch -from risk_biased.utils.cost import BaseCostNumpy, TTCCostNumpy, DistanceCostNumpy -from risk_biased.utils.cost import ( - CostParams, - TTCCostParams, - DistanceCostParams, -) - - -@pytest.fixture(scope="module") -def params(): - torch.manual_seed(0) - working_dir = os.path.dirname(os.path.realpath(__file__)) - config_path = os.path.join( - working_dir, "..", "..", "..", "risk_biased", "config", "learning_config.py" - ) - cfg = Config.fromfile(config_path) - - cfg.cost_scale = 1 - cfg.cost_reduce = "mean" - cfg.ego_length = 4 - cfg.ego_width = 1.75 - cfg.distance_bandwidth = 2 - cfg.time_bandwidth = 2 - cfg.min_velocity_diff = 0.01 - - return cfg - - -def get_fake_input(batch_size, num_steps, is_torch, use_mask, num_agents=0): - if num_agents <= 0: - shape = [batch_size, num_steps, 2] - else: - shape = [batch_size, num_agents, num_steps, 2] - if is_torch: - x1 = torch.rand(shape) - x2 = torch.rand(shape) - v1 = torch.rand(shape) - v2 = torch.rand(shape) - if use_mask: - mask = torch.rand(shape[:-1]) > 0.1 - else: - mask = None - else: - x1 = np.random.uniform(size=shape) - x2 = np.random.uniform(size=shape) - v1 = np.random.uniform(size=shape) - v2 = np.random.uniform(size=shape) - if use_mask: - mask = np.random.uniform(size=shape[:-1]) > 0.1 - else: - mask = None - return x1, x2, v1, v2, mask - - -@pytest.mark.parametrize( - "reduce, batch_size, num_steps, is_torch, use_mask, num_agents", - [ - ("mean", 8, 5, True, True, 0), - ("min", 4, 2, False, True, 2), - ("max", 4, 2, True, False, 3), - ("now", 16, 1, False, False, 1), - ("final", 1, 4, True, True, 0), - ], -) -def test_base_cost( - params, - reduce: str, - batch_size: int, - num_steps: int, - is_torch: bool, - use_mask: bool, - num_agents: int, -): - params.cost_reduce = reduce - cost_params = CostParams.from_config(params) - if is_torch: - base_cost = BaseCostTorch(cost_params) - else: - base_cost = BaseCostNumpy(cost_params) - - x1, x2, v1, v2, mask = get_fake_input( - batch_size, num_steps, is_torch, use_mask, num_agents - ) - cost, _ = base_cost(x1, x2, v1, v2, mask) - if num_agents > 0: - assert cost.shape == ( - batch_size, - num_agents, - ) - else: - assert cost.shape == (batch_size,) - assert (cost == 0).all() - assert base_cost.scale == params.cost_scale - assert base_cost.distance_bandwidth == 1 - assert base_cost.time_bandwidth == 1 - - -@pytest.mark.parametrize( - 
"param_class, cost_class, reduce, batch_size, num_steps, is_torch, use_mask, num_agents", - [ - (DistanceCostParams, DistanceCostTorch, "max", 4, 2, True, True, 3), - (DistanceCostParams, DistanceCostNumpy, "now", 16, 1, False, True, 0), - (DistanceCostParams, DistanceCostTorch, "final", 1, 4, True, False, 2), - (TTCCostParams, TTCCostTorch, "max", 4, 2, True, False, 0), - (TTCCostParams, TTCCostNumpy, "now", 16, 1, False, True, 3), - (TTCCostParams, TTCCostNumpy, "final", 1, 4, False, True, 1), - ], -) -def test_generic_cost( - params, - param_class, - cost_class, - reduce: str, - batch_size: int, - num_steps: int, - is_torch: bool, - use_mask: bool, - num_agents: int, -): - params.cost_reduce = reduce - cost_params = param_class.from_config(params) - x1, x2, v1, v2, mask = get_fake_input( - batch_size, num_steps, is_torch, use_mask, num_agents - ) - - compute_cost = cost_class(cost_params) - - cost, _ = compute_cost(x1, x2, v1, v2, mask) - # Shaped is reduced - if num_agents > 0: - assert cost.shape == (batch_size, num_agents) - else: - assert cost.shape == (batch_size,) - assert (cost != 0).any() - assert compute_cost.scale == params.cost_scale - # Rescale the cost for comparison - compute_cost.scale = params.cost_scale + 10 - assert compute_cost.scale != params.cost_scale - rescaled_cost, _ = compute_cost(x1, x2, v1, v2, mask) - # all rescaled cost are larger but 0 cost is equal to rescaled cost - assert (rescaled_cost >= cost).all() - # at least some rescaled cost are strictly larger than normal scale cost - assert (rescaled_cost > cost).any() - - # Compute mean and min costs to compare - params.cost_reduce = "mean" - cost_params_mean = param_class.from_config(params) - cost_function_mean = cost_class(cost_params_mean) - cost_mean, _ = cost_function_mean(x1, x2, v1, v2) - - params.cost_reduce = "min" - cost_params_min = param_class.from_config(params) - cost_function_min = cost_class(cost_params_min) - cost_min, _ = cost_function_min(x1, x2, v1, v2) - - # max reduce is larger than mean - if reduce == "max": - assert (cost >= cost_mean).all() - # min reduce is lower than any othir - assert (cost_mean >= cost_min).all() - assert (cost >= cost_min).all() diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css deleted file mode 100644 index 6c511764cf4c1d55a227619a98e5ba6578619ad7..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright (c) Facebook, Inc. and its affiliates. 
- * some extra css to make markdown look similar between github/sphinx - */ - -/* - * Below is for install.md: - */ -.rst-content code { - white-space: pre; - border: 0px; -} - -.rst-content th { - border: 1px solid #e1e4e5; -} - -.rst-content th p { - /* otherwise will be default 24px for regular paragraph */ - margin-bottom: 0px; -} - -.rst-content .line-block { - /* otherwise will be 24px */ - margin-bottom: 0px; -} - -div.section > details { - padding-bottom: 1em; -} diff --git a/spaces/Theivaprakasham/yolov6/tools/quantization/mnn/README.md b/spaces/Theivaprakasham/yolov6/tools/quantization/mnn/README.md deleted file mode 100644 index 12c3c0415352060b7e5c8f437730a9c6e35dfbef..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/tools/quantization/mnn/README.md +++ /dev/null @@ -1 +0,0 @@ -# Coming soon \ No newline at end of file diff --git a/spaces/Thumas/DogCat/README.md b/spaces/Thumas/DogCat/README.md deleted file mode 100644 index c6e71cee0b941b70fa4b90334560a63bbe205709..0000000000000000000000000000000000000000 --- a/spaces/Thumas/DogCat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lec2 -emoji: 🦀 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VIPLab/Track-Anything/overleaf/Track Anything/neurips_2022.tex b/spaces/VIPLab/Track-Anything/overleaf/Track Anything/neurips_2022.tex deleted file mode 100644 index f14a483fc519ccd4312233d185747c6d0bbaf1c7..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/overleaf/Track Anything/neurips_2022.tex +++ /dev/null @@ -1,378 +0,0 @@ -\documentclass{article} - - -% if you need to pass options to natbib, use, e.g.: -% \PassOptionsToPackage{numbers, compress}{natbib} -% before loading neurips_2022 - - -% ready for submission -% \usepackage{neurips_2022} - - -% to compile a preprint version, e.g., for submission to arXiv, add add the -% [preprint] option: - \usepackage[preprint]{neurips_2022} - -% to compile a camera-ready version, add the [final] option, e.g.: -% \usepackage[final]{neurips_2022} - - -% to avoid loading the natbib package, add option nonatbib: -% \usepackage[nonatbib]{neurips_2022} -\usepackage{graphicx} -\usepackage[utf8]{inputenc} % allow utf-8 input -\usepackage[T1]{fontenc} % use 8-bit T1 fonts -\usepackage{hyperref} % hyperlinks -\usepackage{url} % simple URL typesetting -\usepackage{booktabs} % professional-quality tables -\usepackage{amsfonts} % blackboard math symbols -\usepackage{nicefrac} % compact symbols for 1/2, etc. -\usepackage{microtype} % microtypography -\usepackage{xcolor} % colors -% \usepackage{acmart} - -\title{Track Anything: High-performance Interactive Tracking and Segmentation} -\title{Track Anything: High-performance Object Tracking in Videos by Interactive Masks} -% \title{Track Anything: Interaction to Mask in Videos} -\title{Track Anything: Segment Anything Meets Videos} - -% \author{% -% David S.~Hippocampus\thanks{Use footnote for providing further information -% about author (webpage, alternative address)---\emph{not} for acknowledging -% funding agencies.} \\ -% SUSTech VIPG\\ - -% \author{Jinyu Yang} -% \authornote{equal} - -% \author{Mingqi Gao} -% \authornotemark[1] - -\author{% - Jinyu Yang\thanks{Equal contribution. 
Alphabetical order.},\enskip Mingqi Gao\footnotemark[1],\enskip Zhe Li\footnotemark[1],\enskip Shang Gao, Fangjing Wang, Feng Zheng \\ - SUSTech VIP Lab\\ - % Cranberry-Lemon University\\ - % Pittsburgh, PA 15213 \\ - % \texttt{hippo@cs.cranberry-lemon.edu} \\ - % \url{https://github.com/gaomingqi/Track-Anything}\\ - % examples of more authors - % \And - % Coauthor \\ - % Affiliation \\ - % Address \\ - % \texttt{email} \\ - % \AND - % Coauthor \\ - % Affiliation \\ - % Address \\ - % \texttt{email} \\ - % \And - % Coauthor \\ - % Affiliation \\ - % Address \\ - % \texttt{email} \\ - % \And - % Coauthor \\ - % Affiliation \\ - % Address \\ - % \texttt{email} \\ - % \thanks{these authors contributed equally} -} -% \affiliation{\institution{SUSTech VIP Lab}} -% \footnote{Equal contribution. Alphabetical order.} - -\begin{document} - - -\maketitle - - -\begin{abstract} - -Recently, the Segment Anything Model (SAM) gains lots of attention rapidly due to its impressive segmentation performance on images. -Regarding its strong ability on image segmentation and high interactivity with different prompts, we found that it performs poorly on consistent segmentation in videos. -Therefore, in this report, we propose Track Anything Model (TAM), which achieves high-performance interactive tracking and segmentation in videos. -To be detailed, given a video sequence, only with very little human participation, \textit{i.e.}, several clicks, people can track anything they are interested in, and get satisfactory results in one-pass inference. -Without additional training, such an interactive design performs impressively on video object tracking and segmentation. -% superior to prior works on video object tracking and segmentation. -All resources are available on \url{https://github.com/gaomingqi/Track-Anything}. -We hope this work can facilitate related research. - -\end{abstract} - -\section{Introduction} - -Tracking an arbitrary object in generic scenes is important, and Video Object Tracking (VOT) is a fundamental task in computer vision. -Similar to VOT, Video Object Segmentation (VOS) aims to separate the target (region of interest) from the background in a video sequence, which can be seen as a kind of more fine-grained object tracking. -We notice that current state-of-the-art video trackers/segmenters are trained on large-scale manually-annotated datasets and initialized by a bounding box or a segmentation mask. -On the one hand, the massive human labor force is hidden behind huge amounts of labeled data. -% Recently, interactive algorithms help to liberate users from labor-expensive initialization and annotation. -Moreover, current initialization settings, especially the semi-supervised VOS, need specific object mask groundtruth for model initialization. -How to liberate researchers from labor-expensive annotation and initialization is much of important. - - -Recently, Segment-Anything Model (SAM)~\cite{sam} has been proposed, which is a large foundation model for image segmentation. -It supports flexible prompts and computes masks in real-time, thus allowing interactive use. -We conclude that SAM has the following advantages that can assist interactive tracking: -\textbf{1) Strong image segmentation ability.} -Trained on 11 million images and 1.1 billion masks, SAM can produce high-quality masks and do zero-shot segmentation in generic scenarios. -\textbf{2) High interactivity with different kinds of prompts. 
} -With input user-friendly prompts of points, boxes, or language, SAM can give satisfactory segmentation masks on specific image areas. -However, using SAM in videos directly did not give us an impressive performance due to its deficiency in temporal correspondence. - -On the other hand, tracking or segmenting in videos faces challenges from scale variation, target deformation, motion blur, camera motion, similar objects, and so on~\cite{vos,vot6,vot7,vot8,vot9,vot10}. -Even the state-of-the-art models suffer from complex scenarios in the public datasets~\cite{xmem}, not to mention the real-world applications. -Therefore, a question is considered by us: -\textit{can we achieve high-performance tracking/segmentation in videos through the way of interaction?} - -In this technical report, we introduce our Track-Anything project, which develops an efficient toolkit for high-performance object tracking and segmentation in videos. -With a user-friendly interface, the Track Anything Model (TAM) can track and segment any objects in a given video with only one-pass inference. -Figure~\ref{fig:overview} shows the one-pass interactive process in the proposed TAM. -In detail, TAM combines SAM~\cite{sam}, a large segmentation model, and XMem~\cite{xmem}, an advanced VOS model. -As shown, we integrate them in an interactive way. -Firstly, users can interactively initialize the SAM, \textit{i.e.}, clicking on the object, to define a target object; -then, XMem is used to give a mask prediction of the object in the next frame according to both temporal and spatial correspondence; -next, SAM is utilized to give a more precise mask description; -during the tracking process, users can pause and correct as soon as they notice tracking failures. - -Our contributions can be concluded as follows: - -1) We promote the SAM applications to the video level to achieve interactive video object tracking and segmentation. -% We combine the SAM with VOS models to achieve interactive video object tracking and segmentation. -Rather than separately using SAM per frame, we integrate SAM into the process of temporal correspondence construction. - -2) We propose one-pass interactive tracking and segmentation for efficient annotation and a user-friendly tracking interface, which uses very small amounts of human participation to solve extreme difficulties in video object perception. - -3) Our proposed method shows superior performance and high usability in complex scenes and has many potential applications. - -% \section{Related Works} - -% \textbf{Video Object Tracking.} - - - -% \textbf{Video Object Segmentation.} -\section{Track Anything Task} - -Inspired by the Segment Anything task~\cite{sam}, we propose the Track Anything task, which aims to flexible object tracking in arbitrary videos. -Here we define that the target objects can be flexibly selected, added, or removed in any way according to the users' interests. -Also, the video length and types can be arbitrary rather than limited to trimmed or natural videos. -With such settings, diverse downstream tasks can be achieved, including single/multiple object tracking, short-/long-term object tracking, unsupervised VOS, semi-supervised VOS, referring VOS, interactive VOS, long-term VOS, and so on. - -\section{Methodology} - -\subsection{Preliminaries} - -\textbf{Segment Anything Model~\cite{sam}.} -Very recently, the Segment Anything Model (SAM) has been proposed by Meta AI Research and gets numerous attention. 
-As a foundation model for image segmentation, SAM is based on ViT~\cite{vit} and trained on the large-scale dataset SA-1B~\cite{sam}. -Obviously, SAM shows promising segmentation ability on images, especially on zero-shot segmentation tasks. -Unfortunately, SAM only shows superior performance on image segmentation, while it cannot deal with complex video segmentation. - - -\textbf{XMem~\cite{xmem}.} -Given the mask description of the target object at the first frame, XMem can track the object and generate corresponding masks in the subsequent frames. -Inspired by the Atkinson-Shiffrin memory model, it aims to solve the difficulties in long-term videos with unified feature memory stores. -The drawbacks of XMem are also obvious: 1) as a semi-supervised VOS model, it requires a precise mask to initialize; 2) for long videos, it is difficult for XMem to recover from tracking or segmentation failure. -In this paper, we solve both difficulties by importing interactive tracking with SAM. - - -\textbf{Interactive Video Object Segmentation.} -Interactive VOS~\cite{mivos} takes user interactions as inputs, \textit{e.g.}, scribbles. -Then, users can iteratively refine the segmentation results until they are satisfied with them. -Interactive VOS gains lots of attention as it is much easier to provide scribbles than to specify every pixel for an object mask. -However, we found that current interactive VOS methods require multiple rounds to refine the results, which impedes their efficiency in real-world applications. - -\begin{figure}[t] -\centering -\includegraphics[width=\linewidth]{figs/overview_4.pdf} -\caption{Pipeline of our proposed Track Anything Model (TAM). Only within one round of inference can the TAM obtain impressive tracking and segmentation performance on the human-selected target.} -\label{fig:overview} -\end{figure} - -\begin{table} - \caption{Results on DAVIS-2016-val and DAVIS-2017-test-dev datasets~\cite{davis}.} - \label{davis1617} - \centering - \small - \setlength\tabcolsep{4pt} - \begin{tabular}{l|c|c|c|ccc|ccc} - \toprule - & & & &\multicolumn{3}{c|}{DAVIS-2016-val} &\multicolumn{3}{c}{DAVIS-2017-test-dev} \\ - Method & Venue & Initialization & Evaluation& $J\&F$ & $J$ &$F$ &$J\&F$ & $J$ &$F$\\ - \midrule - STM~\cite{stm} & ICCV2019 &Mask & One Pass &89.3 &88.7 &89.9 & 72.2 & 69.3 & 75.2 \\ - AOT~\cite{aot} &NeurIPS2021 &Mask & One Pass & 91.1 & 90.1 & 92.1 & 79.6 & 75.9 & 83.3 \\ - XMem~\cite{xmem} & NeurIPS2022 &Mask & One Pass & 92.0 &90.7 &93.2 & 81.2 & 77.6 & 84.7\\ - \midrule - % SiamMask~\cite{siammask}& CVPR2019 &Box & One Pass & 69.8 &71.7 &67.8 &56.4 &54.3 &58.5 \\ - SiamMask~\cite{siammask}& CVPR2019 &Box & One Pass & 69.8 &71.7 &67.8 &- &- &- \\ - \midrule - % MiVOS~\cite{mivos} & CVPR2021 &Scribble &8 Rounds &91.0 &89.6 &92.4 & 84.5 &81.7 &87.4\\ - MiVOS~\cite{mivos} & CVPR2021 &Scribble &8 Rounds &91.0 &89.6 &92.4 &78.6 &74.9 &82.2\\ - % \midrule - % & ICIP2022 &Click & \\ - \midrule - TAM (Proposed) &- & Click & One Pass & 88.4 & 87.5 &89.4 & 73.1 & 69.8 & 76.4\\ - % Ours & & 5 Clicks & \\ - \bottomrule - \end{tabular} -\end{table} - - - -\subsection{Implementation}\label{implementation} - -Inspired by SAM, we consider tracking anything in videos. -We aim to define this task with high interactivity and ease of use. -It leads to ease of use and is able to obtain high performance with very little human interaction effort. -Figure~\ref{fig:overview} shows the pipeline of our Track Anything Model (TAM). 
-As shown, we divide our Track-Anything process into the following four steps: - -\textbf{Step 1: Initialization with SAM~\cite{sam}.} -As SAM provides us an opportunity to segment a region of interest with weak prompts, \textit{e.g.}, points, and bounding boxes, we use it to give an initial mask of the target object. -Following SAM, users can get a mask description of the interested object by a click or modify the object mask with several clicks to get a satisfactory initialization. - -\textbf{Step 2: Tracking with XMem~\cite{xmem}.} -Given the initialized mask, XMem performs semi-supervised VOS on the following frames. -Since XMem is an advanced VOS method that can output satisfactory results on simple scenarios, we output the predicted masks of XMem on most occasions. -When the mask quality is not such good, we save the XMem predictions and corresponding intermediate parameters, \textit{i.e.}, probes and affinities, and skip to step 3. -% Given the initialized mask and the whole sequence, XMem performs semi-supervised VOS, which aims to solve the performance decay in long-term prediction with memory potentiation. - - -\textbf{Step 3: Refinement with SAM~\cite{sam}.} -We notice that during the inference of VOS models, keep predicting consistent and precise masks are challenging. -In fact, most state-of-the-art VOS models tend to segment more and more coarsely over time during inference. -Therefore, we utilize SAM to refine the masks predicted by XMem when its quality assessment is not satisfactory. -Specifically, we project the probes and affinities to be point prompts for SAM, and the predicted mask from Step 2 is used as a mask prompt for SAM. -Then, with these prompts, SAM is able to produce a refined segmentation mask. -Such refined masks will also be added to the temporal correspondence of XMem to refine all subsequent object discrimination. - -\textbf{Step 4: Correction with human participation.} -% Long video annotation. -After the above three steps, the TAM can now successfully solve some common challenges and predict segmentation masks. -However, we notice that it is still difficult to accurately distinguish the objects in some extremely challenging scenarios, especially when processing long videos. -Therefore, we propose to add human correction during inference, which can bring a qualitative leap in performance with only very small human efforts. -In detail, users can compulsively stop the TAM process and correct the mask of the current frame with positive and negative clicks. - -\section{Experiments} - -\subsection{Quantitative Results} - - -To evaluate TAM, we utilize the validation set of DAVIS-2016 and test-development set of DAVIS-2017~\cite{davis}. -% The evaluation process follows the one we proposed in Section~\ref{implementation}. -Then, we execute the proposed TAM as demonstrated in Section~\ref{implementation}. -The results are given in Table~\ref{davis1617}. -As shown, our TAM obtains $J\&F$ scores of 88.4 and 73.1 on DAVIS-2016-val and DAVIS-2017-test-dev datasets, respectively. -Note that TAM is initialized by clicks and evaluated in one pass. -Notably, we found that TAM performs well when against difficult and complex scenarios. 
-% During the evaluation, - -% click-based interactive video object segmentation - -% CLICK-BASED INTERACTIVE VIDEO OBJECT -% SEGMENTATION - - -\begin{figure}[t] -\centering -\includegraphics[width=\linewidth]{figs/davisresults.pdf} -\caption{Qualitative results on video sequences from DAVIS-16 and DAVIS-17 datasets~\cite{davis}.} -\label{fig:davisresult} -\end{figure} - - -\begin{figure}[t] -\centering -\includegraphics[width=\linewidth]{figs/failedcases.pdf} -\caption{Failed cases.} -\label{fig:failedcases} -\end{figure} - -\subsection{Qualitative Results} - -% As we use a new one-pass interactive method to evaluation our TAM, here we only present some qualitative results. -We also give some qualitative results in Figure~\ref{fig:davisresult}. -As shown, TAM can handle multi-object separation, target deformation, scale change, and camera motion well, which demonstrates its superior tracking and segmentation abilities within only click initialization and one-round inference. - -\subsection{Failed Cases} -We here also analyze the failed cases, as shown in Figure~\ref{fig:failedcases}. -Overall, we notice that the failed cases typically appear on the following two occasions. -1) -% Separated masks of one object in a long video. -Current VOS models are mostly designed for short videos, which focus more on maintaining short-term memory rather than long-term memory. -This leads to mask shrinkage or lacking refinement in long-term videos, as shown in seq (a). -Essentially, we aim to solve them in step 3 by the refinement ability of SAM, while its effectiveness is lower than expected in realistic applications. -It indicates that the ability of SAM refinement based on multiple prompts can be further improved in the future. -On the other hand, human participation/interaction in TAM can be an approach to solving such difficulties, while too much interaction will also result in low efficiency. -Thus, the mechanism of long-term memory preserving and transient memory updating is still important. -% Limited refinement by SAM. Although SAM supports to refine previous predictions, via point and mask prompts, . How to . -2) When the object structure is complex, \textit{e.g.}, the bicycle wheels in seq (b) contain many cavities in groundtruth masks. We found it very difficult to get a fine-grained initialized mask by propagating the clicks. -Thus, the coarse initialized masks may have side effects on the subsequent frames and lead to poor predictions. -This also inspires us that SAM is still struggling with complex and precision structures. - - -\begin{figure}[t] -\centering -\includegraphics[width=\linewidth]{figs/avengers_1.pdf} -\caption{Raw frames, object masks, and inpainted results from the movie \textit{Captain America: Civil War (2016)}.} -\label{fig:captain} -\end{figure} - - - -\section{Applications} -The proposed Track Anything Model (TAM) provides many possibilities for flexible tracking and segmentation in videos. -Here, we demonstrate several applications enabled by our proposed method. -% Our method may be able to a variety of applications. -In such an interactive way, diverse downstream tasks can be easily achieved. -% \textbf{Demo.} -% It is able to solve diverse downstream tasks in such a interactive way. - -\textbf{Efficient video annotation.} -TAM has the ability to segment the regions of interest in videos and flexibly choose the objects users want to track. Thus, it can be used for video annotation for tasks like video object tracking and video object segmentation. 
-On the other hand, click-based interaction makes it easy to use, and the annotation process is of high efficiency. - - -\textbf{Long-term object tracking.} -The study of long-term tracking is gaining more and more attention because it is much closer to practical applications. -Current long-term object tracking task requires the tracker to have the ability to handle target disappearance and reappearance while it is still limited in the scope of trimmed videos. -Our TAM is more advanced in real-world applications which can handle the shot changes in long videos. - - -\textbf{User-friendly video editing.} -Track Anything Model provides us the opportunities to segment objects -With the object segmentation masks provided by TAM, we are then able to remove or alter any of the existing objects in a given video. -Here we combine E$^2$FGVI~\cite{e2fgvi} to evaluate its application value. - -\textbf{Visualized development toolkit for video tasks.} -For ease of use, we also provide visualized interfaces for multiple video tasks, \textit{e.g.}, VOS, VOT, video inpainting, and so on. -With the provided toolkit, users can apply their models on real-world videos and visualize the results instantaneously. -Corresponding demos are available in Hugging Face\footnote{\url{https://huggingface.co/spaces/watchtowerss/Track-Anything}}. - - -To show the effectiveness, we give a comprehensive test by applying TAM on the movie \textit{Captain America: Civil War (2016)}. -Some representative results are given in Figure \ref{fig:captain}. -As shown, TAM can present multiple object tracking precisely in videos with lots of shot changes and can further be helpful in video inpainting. - -% \section{Further work} - - -% \section*{Acknowledgements} - -% \appendix - -% \section{Appendix} - - -% Optionally include extra information (complete proofs, additional experiments and plots) in the appendix. -% This section will often be part of the supplemental material. - - - -\bibliographystyle{plain} -\bibliography{neurips_2022} - -\end{document} diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/common/registry.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/common/registry.py deleted file mode 100644 index 679467a7411eda19ed956b810c21234322f06779..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/common/registry.py +++ /dev/null @@ -1,329 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - - -class Registry: - mapping = { - "builder_name_mapping": {}, - "task_name_mapping": {}, - "processor_name_mapping": {}, - "model_name_mapping": {}, - "lr_scheduler_name_mapping": {}, - "runner_name_mapping": {}, - "state": {}, - "paths": {}, - } - - @classmethod - def register_builder(cls, name): - r"""Register a dataset builder to registry with key 'name' - - Args: - name: Key with which the builder will be registered. 
- - Usage: - - from minigpt4.common.registry import registry - from minigpt4.datasets.base_dataset_builder import BaseDatasetBuilder - """ - - def wrap(builder_cls): - from minigpt4.datasets.builders.base_dataset_builder import BaseDatasetBuilder - - assert issubclass( - builder_cls, BaseDatasetBuilder - ), "All builders must inherit BaseDatasetBuilder class, found {}".format( - builder_cls - ) - if name in cls.mapping["builder_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["builder_name_mapping"][name] - ) - ) - cls.mapping["builder_name_mapping"][name] = builder_cls - return builder_cls - - return wrap - - @classmethod - def register_task(cls, name): - r"""Register a task to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from minigpt4.common.registry import registry - """ - - def wrap(task_cls): - from minigpt4.tasks.base_task import BaseTask - - assert issubclass( - task_cls, BaseTask - ), "All tasks must inherit BaseTask class" - if name in cls.mapping["task_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["task_name_mapping"][name] - ) - ) - cls.mapping["task_name_mapping"][name] = task_cls - return task_cls - - return wrap - - @classmethod - def register_model(cls, name): - r"""Register a task to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from minigpt4.common.registry import registry - """ - - def wrap(model_cls): - from minigpt4.models import BaseModel - - assert issubclass( - model_cls, BaseModel - ), "All models must inherit BaseModel class" - if name in cls.mapping["model_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["model_name_mapping"][name] - ) - ) - cls.mapping["model_name_mapping"][name] = model_cls - return model_cls - - return wrap - - @classmethod - def register_processor(cls, name): - r"""Register a processor to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from minigpt4.common.registry import registry - """ - - def wrap(processor_cls): - from minigpt4.processors import BaseProcessor - - assert issubclass( - processor_cls, BaseProcessor - ), "All processors must inherit BaseProcessor class" - if name in cls.mapping["processor_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["processor_name_mapping"][name] - ) - ) - cls.mapping["processor_name_mapping"][name] = processor_cls - return processor_cls - - return wrap - - @classmethod - def register_lr_scheduler(cls, name): - r"""Register a model to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from minigpt4.common.registry import registry - """ - - def wrap(lr_sched_cls): - if name in cls.mapping["lr_scheduler_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["lr_scheduler_name_mapping"][name] - ) - ) - cls.mapping["lr_scheduler_name_mapping"][name] = lr_sched_cls - return lr_sched_cls - - return wrap - - @classmethod - def register_runner(cls, name): - r"""Register a model to registry with key 'name' - - Args: - name: Key with which the task will be registered. 
- - Usage: - - from minigpt4.common.registry import registry - """ - - def wrap(runner_cls): - if name in cls.mapping["runner_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["runner_name_mapping"][name] - ) - ) - cls.mapping["runner_name_mapping"][name] = runner_cls - return runner_cls - - return wrap - - @classmethod - def register_path(cls, name, path): - r"""Register a path to registry with key 'name' - - Args: - name: Key with which the path will be registered. - - Usage: - - from minigpt4.common.registry import registry - """ - assert isinstance(path, str), "All path must be str." - if name in cls.mapping["paths"]: - raise KeyError("Name '{}' already registered.".format(name)) - cls.mapping["paths"][name] = path - - @classmethod - def register(cls, name, obj): - r"""Register an item to registry with key 'name' - - Args: - name: Key with which the item will be registered. - - Usage:: - - from minigpt4.common.registry import registry - - registry.register("config", {}) - """ - path = name.split(".") - current = cls.mapping["state"] - - for part in path[:-1]: - if part not in current: - current[part] = {} - current = current[part] - - current[path[-1]] = obj - - # @classmethod - # def get_trainer_class(cls, name): - # return cls.mapping["trainer_name_mapping"].get(name, None) - - @classmethod - def get_builder_class(cls, name): - return cls.mapping["builder_name_mapping"].get(name, None) - - @classmethod - def get_model_class(cls, name): - return cls.mapping["model_name_mapping"].get(name, None) - - @classmethod - def get_task_class(cls, name): - return cls.mapping["task_name_mapping"].get(name, None) - - @classmethod - def get_processor_class(cls, name): - return cls.mapping["processor_name_mapping"].get(name, None) - - @classmethod - def get_lr_scheduler_class(cls, name): - return cls.mapping["lr_scheduler_name_mapping"].get(name, None) - - @classmethod - def get_runner_class(cls, name): - return cls.mapping["runner_name_mapping"].get(name, None) - - @classmethod - def list_runners(cls): - return sorted(cls.mapping["runner_name_mapping"].keys()) - - @classmethod - def list_models(cls): - return sorted(cls.mapping["model_name_mapping"].keys()) - - @classmethod - def list_tasks(cls): - return sorted(cls.mapping["task_name_mapping"].keys()) - - @classmethod - def list_processors(cls): - return sorted(cls.mapping["processor_name_mapping"].keys()) - - @classmethod - def list_lr_schedulers(cls): - return sorted(cls.mapping["lr_scheduler_name_mapping"].keys()) - - @classmethod - def list_datasets(cls): - return sorted(cls.mapping["builder_name_mapping"].keys()) - - @classmethod - def get_path(cls, name): - return cls.mapping["paths"].get(name, None) - - @classmethod - def get(cls, name, default=None, no_warning=False): - r"""Get an item from registry with key 'name' - - Args: - name (string): Key whose value needs to be retrieved. - default: If passed and key is not in registry, default value will - be returned with a warning. Default: None - no_warning (bool): If passed as True, warning when key doesn't exist - will not be generated. Useful for MMF's - internal operations. 
Default: False - """ - original_name = name - name = name.split(".") - value = cls.mapping["state"] - for subname in name: - value = value.get(subname, default) - if value is default: - break - - if ( - "writer" in cls.mapping["state"] - and value == default - and no_warning is False - ): - cls.mapping["state"]["writer"].warning( - "Key {} is not present in registry, returning default value " - "of {}".format(original_name, default) - ) - return value - - @classmethod - def unregister(cls, name): - r"""Remove an item from registry with key 'name' - - Args: - name: Key which needs to be removed. - Usage:: - - from mmf.common.registry import registry - - config = registry.unregister("config") - """ - return cls.mapping["state"].pop(name, None) - - -registry = Registry() diff --git a/spaces/XPMaster/Motor_Vehicle_Collisions_NY/README.md b/spaces/XPMaster/Motor_Vehicle_Collisions_NY/README.md deleted file mode 100644 index 06394568cb31d26fe5fdfa04f81172c6716e9edb..0000000000000000000000000000000000000000 --- a/spaces/XPMaster/Motor_Vehicle_Collisions_NY/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Motor Vehicle Collisions NY -emoji: 🌍 -colorFrom: red -colorTo: blue -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Xenova/next-example-app/_next/static/chunks/587.7590d497bcece81c.js b/spaces/Xenova/next-example-app/_next/static/chunks/587.7590d497bcece81c.js deleted file mode 100644 index 984429505d04ce2ee3b74ae06e524480e10f9838..0000000000000000000000000000000000000000 --- a/spaces/Xenova/next-example-app/_next/static/chunks/587.7590d497bcece81c.js +++ /dev/null @@ -1,6 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[587],{4699:function(e,t){"use strict";t.byteLength=function(e){var t=l(e),r=t[0],n=t[1];return(r+n)*3/4-n},t.toByteArray=function(e){var t,r,i=l(e),o=i[0],a=i[1],h=new s((o+a)*3/4-a),c=0,u=a>0?o-4:o;for(r=0;r>16&255,h[c++]=t>>8&255,h[c++]=255&t;return 2===a&&(t=n[e.charCodeAt(r)]<<2|n[e.charCodeAt(r+1)]>>4,h[c++]=255&t),1===a&&(t=n[e.charCodeAt(r)]<<10|n[e.charCodeAt(r+1)]<<4|n[e.charCodeAt(r+2)]>>2,h[c++]=t>>8&255,h[c++]=255&t),h},t.fromByteArray=function(e){for(var t,n=e.length,s=n%3,i=[],o=0,a=n-s;o>18&63]+r[s>>12&63]+r[s>>6&63]+r[63&s]);return i.join("")}(e,o,o+16383>a?a:o+16383));return 1===s?i.push(r[(t=e[n-1])>>2]+r[t<<4&63]+"=="):2===s&&i.push(r[(t=(e[n-2]<<8)+e[n-1])>>10]+r[t>>4&63]+r[t<<2&63]+"="),i.join("")};for(var r=[],n=[],s="undefined"!=typeof Uint8Array?Uint8Array:Array,i="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",o=0,a=i.length;o0)throw Error("Invalid string. Length must be a multiple of 4");var r=e.indexOf("=");-1===r&&(r=t);var n=r===t?0:4-r%4;return[r,n]}n["-".charCodeAt(0)]=62,n["_".charCodeAt(0)]=63},7133:function(e,t,r){"use strict";/*! - * The buffer module from node.js, for the browser. - * - * @author Feross Aboukhadijeh - * @license MIT - */var n=r(4699),s=r(9087),i="function"==typeof Symbol&&"function"==typeof Symbol.for?Symbol.for("nodejs.util.inspect.custom"):null;function o(e){if(e>2147483647)throw RangeError('The value "'+e+'" is invalid for option "size"');var t=new Uint8Array(e);return Object.setPrototypeOf(t,a.prototype),t}function a(e,t,r){if("number"==typeof e){if("string"==typeof t)throw TypeError('The "string" argument must be of type string. 
Received type number');return c(e)}return l(e,t,r)}function l(e,t,r){if("string"==typeof e)return function(e,t){if(("string"!=typeof t||""===t)&&(t="utf8"),!a.isEncoding(t))throw TypeError("Unknown encoding: "+t);var r=0|p(e,t),n=o(r),s=n.write(e,t);return s!==r&&(n=n.slice(0,s)),n}(e,t);if(ArrayBuffer.isView(e))return function(e){if(I(e,Uint8Array)){var t=new Uint8Array(e);return d(t.buffer,t.byteOffset,t.byteLength)}return u(e)}(e);if(null==e)throw TypeError("The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type "+typeof e);if(I(e,ArrayBuffer)||e&&I(e.buffer,ArrayBuffer)||"undefined"!=typeof SharedArrayBuffer&&(I(e,SharedArrayBuffer)||e&&I(e.buffer,SharedArrayBuffer)))return d(e,t,r);if("number"==typeof e)throw TypeError('The "value" argument must not be of type number. Received type number');var n=e.valueOf&&e.valueOf();if(null!=n&&n!==e)return a.from(n,t,r);var s=function(e){if(a.isBuffer(e)){var t,r=0|f(e.length),n=o(r);return 0===n.length||e.copy(n,0,0,r),n}return void 0!==e.length?"number"!=typeof e.length||(t=e.length)!=t?o(0):u(e):"Buffer"===e.type&&Array.isArray(e.data)?u(e.data):void 0}(e);if(s)return s;if("undefined"!=typeof Symbol&&null!=Symbol.toPrimitive&&"function"==typeof e[Symbol.toPrimitive])return a.from(e[Symbol.toPrimitive]("string"),t,r);throw TypeError("The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type "+typeof e)}function h(e){if("number"!=typeof e)throw TypeError('"size" argument must be of type number');if(e<0)throw RangeError('The value "'+e+'" is invalid for option "size"')}function c(e){return h(e),o(e<0?0:0|f(e))}function u(e){for(var t=e.length<0?0:0|f(e.length),r=o(t),n=0;n=2147483647)throw RangeError("Attempt to allocate Buffer larger than maximum size: 0x7fffffff bytes");return 0|e}function p(e,t){if(a.isBuffer(e))return e.length;if(ArrayBuffer.isView(e)||I(e,ArrayBuffer))return e.byteLength;if("string"!=typeof e)throw TypeError('The "string" argument must be one of type string, Buffer, or ArrayBuffer. 
Received type '+typeof e);var r=e.length,n=arguments.length>2&&!0===arguments[2];if(!n&&0===r)return 0;for(var s=!1;;)switch(t){case"ascii":case"latin1":case"binary":return r;case"utf8":case"utf-8":return z(e).length;case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return 2*r;case"hex":return r>>>1;case"base64":return T(e).length;default:if(s)return n?-1:z(e).length;t=(""+t).toLowerCase(),s=!0}}function _(e,t,r){var s,i,o=!1;if((void 0===t||t<0)&&(t=0),t>this.length||((void 0===r||r>this.length)&&(r=this.length),r<=0||(r>>>=0)<=(t>>>=0)))return"";for(e||(e="utf8");;)switch(e){case"hex":return function(e,t,r){var n=e.length;(!t||t<0)&&(t=0),(!r||r<0||r>n)&&(r=n);for(var s="",i=t;i2147483647?r=2147483647:r<-2147483648&&(r=-2147483648),(i=r=+r)!=i&&(r=s?0:e.length-1),r<0&&(r=e.length+r),r>=e.length){if(s)return -1;r=e.length-1}else if(r<0){if(!s)return -1;r=0}if("string"==typeof t&&(t=a.from(t,n)),a.isBuffer(t))return 0===t.length?-1:y(e,t,r,n,s);if("number"==typeof t)return(t&=255,"function"==typeof Uint8Array.prototype.indexOf)?s?Uint8Array.prototype.indexOf.call(e,t,r):Uint8Array.prototype.lastIndexOf.call(e,t,r):y(e,[t],r,n,s);throw TypeError("val must be string, number or Buffer")}function y(e,t,r,n,s){var i,o=1,a=e.length,l=t.length;if(void 0!==n&&("ucs2"===(n=String(n).toLowerCase())||"ucs-2"===n||"utf16le"===n||"utf-16le"===n)){if(e.length<2||t.length<2)return -1;o=2,a/=2,l/=2,r/=2}function h(e,t){return 1===o?e[t]:e.readUInt16BE(t*o)}if(s){var c=-1;for(i=r;ia&&(r=a-l),i=r;i>=0;i--){for(var u=!0,d=0;d239?4:h>223?3:h>191?2:1;if(s+u<=r)switch(u){case 1:h<128&&(c=h);break;case 2:(192&(i=e[s+1]))==128&&(l=(31&h)<<6|63&i)>127&&(c=l);break;case 3:i=e[s+1],o=e[s+2],(192&i)==128&&(192&o)==128&&(l=(15&h)<<12|(63&i)<<6|63&o)>2047&&(l<55296||l>57343)&&(c=l);break;case 4:i=e[s+1],o=e[s+2],a=e[s+3],(192&i)==128&&(192&o)==128&&(192&a)==128&&(l=(15&h)<<18|(63&i)<<12|(63&o)<<6|63&a)>65535&&l<1114112&&(c=l)}null===c?(c=65533,u=1):c>65535&&(c-=65536,n.push(c>>>10&1023|55296),c=56320|1023&c),n.push(c),s+=u}return function(e){var t=e.length;if(t<=4096)return String.fromCharCode.apply(String,e);for(var r="",n=0;nr)throw RangeError("Trying to access beyond buffer length")}function k(e,t,r,n,s,i){if(!a.isBuffer(e))throw TypeError('"buffer" argument must be a Buffer instance');if(t>s||te.length)throw RangeError("Index out of range")}function v(e,t,r,n,s,i){if(r+n>e.length||r<0)throw RangeError("Index out of range")}function x(e,t,r,n,i){return t=+t,r>>>=0,i||v(e,t,r,4,34028234663852886e22,-34028234663852886e22),s.write(e,t,r,n,23,4),r+4}function A(e,t,r,n,i){return t=+t,r>>>=0,i||v(e,t,r,8,17976931348623157e292,-17976931348623157e292),s.write(e,t,r,n,52,8),r+8}t.lW=a,t.h2=50,a.TYPED_ARRAY_SUPPORT=function(){try{var e=new Uint8Array(1),t={foo:function(){return 42}};return Object.setPrototypeOf(t,Uint8Array.prototype),Object.setPrototypeOf(e,t),42===e.foo()}catch(e){return!1}}(),a.TYPED_ARRAY_SUPPORT||"undefined"==typeof console||"function"!=typeof console.error||console.error("This browser lacks typed array (Uint8Array) support which is required by `buffer` v5.x. 
Use `buffer` v4.x if you require old browser support."),Object.defineProperty(a.prototype,"parent",{enumerable:!0,get:function(){if(a.isBuffer(this))return this.buffer}}),Object.defineProperty(a.prototype,"offset",{enumerable:!0,get:function(){if(a.isBuffer(this))return this.byteOffset}}),a.poolSize=8192,a.from=function(e,t,r){return l(e,t,r)},Object.setPrototypeOf(a.prototype,Uint8Array.prototype),Object.setPrototypeOf(a,Uint8Array),a.alloc=function(e,t,r){return(h(e),e<=0)?o(e):void 0!==t?"string"==typeof r?o(e).fill(t,r):o(e).fill(t):o(e)},a.allocUnsafe=function(e){return c(e)},a.allocUnsafeSlow=function(e){return c(e)},a.isBuffer=function(e){return null!=e&&!0===e._isBuffer&&e!==a.prototype},a.compare=function(e,t){if(I(e,Uint8Array)&&(e=a.from(e,e.offset,e.byteLength)),I(t,Uint8Array)&&(t=a.from(t,t.offset,t.byteLength)),!a.isBuffer(e)||!a.isBuffer(t))throw TypeError('The "buf1", "buf2" arguments must be one of type Buffer or Uint8Array');if(e===t)return 0;for(var r=e.length,n=t.length,s=0,i=Math.min(r,n);sn.length?a.from(i).copy(n,s):Uint8Array.prototype.set.call(n,i,s);else if(a.isBuffer(i))i.copy(n,s);else throw TypeError('"list" argument must be an Array of Buffers');s+=i.length}return n},a.byteLength=p,a.prototype._isBuffer=!0,a.prototype.swap16=function(){var e=this.length;if(e%2!=0)throw RangeError("Buffer size must be a multiple of 16-bits");for(var t=0;tr&&(e+=" ... "),""},i&&(a.prototype[i]=a.prototype.inspect),a.prototype.compare=function(e,t,r,n,s){if(I(e,Uint8Array)&&(e=a.from(e,e.offset,e.byteLength)),!a.isBuffer(e))throw TypeError('The "target" argument must be one of type Buffer or Uint8Array. Received type '+typeof e);if(void 0===t&&(t=0),void 0===r&&(r=e?e.length:0),void 0===n&&(n=0),void 0===s&&(s=this.length),t<0||r>e.length||n<0||s>this.length)throw RangeError("out of range index");if(n>=s&&t>=r)return 0;if(n>=s)return -1;if(t>=r)return 1;if(t>>>=0,r>>>=0,n>>>=0,s>>>=0,this===e)return 0;for(var i=s-n,o=r-t,l=Math.min(i,o),h=this.slice(n,s),c=e.slice(t,r),u=0;u>>=0,isFinite(r)?(r>>>=0,void 0===n&&(n="utf8")):(n=r,r=void 0);else throw Error("Buffer.write(string, encoding, offset[, length]) is no longer supported");var s,i,o,a,l,h,c,u,d=this.length-t;if((void 0===r||r>d)&&(r=d),e.length>0&&(r<0||t<0)||t>this.length)throw RangeError("Attempt to write outside buffer bounds");n||(n="utf8");for(var f=!1;;)switch(n){case"hex":return function(e,t,r,n){r=Number(r)||0;var s=e.length-r;n?(n=Number(n))>s&&(n=s):n=s;var i=t.length;n>i/2&&(n=i/2);for(var o=0;o>8,s.push(r%256),s.push(n);return s}(e,this.length-c),this,c,u);default:if(f)throw TypeError("Unknown encoding: "+n);n=(""+n).toLowerCase(),f=!0}},a.prototype.toJSON=function(){return{type:"Buffer",data:Array.prototype.slice.call(this._arr||this,0)}},a.prototype.slice=function(e,t){var r=this.length;e=~~e,t=void 0===t?r:~~t,e<0?(e+=r)<0&&(e=0):e>r&&(e=r),t<0?(t+=r)<0&&(t=0):t>r&&(t=r),t>>=0,t>>>=0,r||b(e,t,this.length);for(var n=this[e],s=1,i=0;++i>>=0,t>>>=0,r||b(e,t,this.length);for(var n=this[e+--t],s=1;t>0&&(s*=256);)n+=this[e+--t]*s;return n},a.prototype.readUint8=a.prototype.readUInt8=function(e,t){return e>>>=0,t||b(e,1,this.length),this[e]},a.prototype.readUint16LE=a.prototype.readUInt16LE=function(e,t){return e>>>=0,t||b(e,2,this.length),this[e]|this[e+1]<<8},a.prototype.readUint16BE=a.prototype.readUInt16BE=function(e,t){return e>>>=0,t||b(e,2,this.length),this[e]<<8|this[e+1]},a.prototype.readUint32LE=a.prototype.readUInt32LE=function(e,t){return 
e>>>=0,t||b(e,4,this.length),(this[e]|this[e+1]<<8|this[e+2]<<16)+16777216*this[e+3]},a.prototype.readUint32BE=a.prototype.readUInt32BE=function(e,t){return e>>>=0,t||b(e,4,this.length),16777216*this[e]+(this[e+1]<<16|this[e+2]<<8|this[e+3])},a.prototype.readIntLE=function(e,t,r){e>>>=0,t>>>=0,r||b(e,t,this.length);for(var n=this[e],s=1,i=0;++i=(s*=128)&&(n-=Math.pow(2,8*t)),n},a.prototype.readIntBE=function(e,t,r){e>>>=0,t>>>=0,r||b(e,t,this.length);for(var n=t,s=1,i=this[e+--n];n>0&&(s*=256);)i+=this[e+--n]*s;return i>=(s*=128)&&(i-=Math.pow(2,8*t)),i},a.prototype.readInt8=function(e,t){return(e>>>=0,t||b(e,1,this.length),128&this[e])?-((255-this[e]+1)*1):this[e]},a.prototype.readInt16LE=function(e,t){e>>>=0,t||b(e,2,this.length);var r=this[e]|this[e+1]<<8;return 32768&r?4294901760|r:r},a.prototype.readInt16BE=function(e,t){e>>>=0,t||b(e,2,this.length);var r=this[e+1]|this[e]<<8;return 32768&r?4294901760|r:r},a.prototype.readInt32LE=function(e,t){return e>>>=0,t||b(e,4,this.length),this[e]|this[e+1]<<8|this[e+2]<<16|this[e+3]<<24},a.prototype.readInt32BE=function(e,t){return e>>>=0,t||b(e,4,this.length),this[e]<<24|this[e+1]<<16|this[e+2]<<8|this[e+3]},a.prototype.readFloatLE=function(e,t){return e>>>=0,t||b(e,4,this.length),s.read(this,e,!0,23,4)},a.prototype.readFloatBE=function(e,t){return e>>>=0,t||b(e,4,this.length),s.read(this,e,!1,23,4)},a.prototype.readDoubleLE=function(e,t){return e>>>=0,t||b(e,8,this.length),s.read(this,e,!0,52,8)},a.prototype.readDoubleBE=function(e,t){return e>>>=0,t||b(e,8,this.length),s.read(this,e,!1,52,8)},a.prototype.writeUintLE=a.prototype.writeUIntLE=function(e,t,r,n){if(e=+e,t>>>=0,r>>>=0,!n){var s=Math.pow(2,8*r)-1;k(this,e,t,r,s,0)}var i=1,o=0;for(this[t]=255&e;++o>>=0,r>>>=0,!n){var s=Math.pow(2,8*r)-1;k(this,e,t,r,s,0)}var i=r-1,o=1;for(this[t+i]=255&e;--i>=0&&(o*=256);)this[t+i]=e/o&255;return t+r},a.prototype.writeUint8=a.prototype.writeUInt8=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,1,255,0),this[t]=255&e,t+1},a.prototype.writeUint16LE=a.prototype.writeUInt16LE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,2,65535,0),this[t]=255&e,this[t+1]=e>>>8,t+2},a.prototype.writeUint16BE=a.prototype.writeUInt16BE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,2,65535,0),this[t]=e>>>8,this[t+1]=255&e,t+2},a.prototype.writeUint32LE=a.prototype.writeUInt32LE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,4,4294967295,0),this[t+3]=e>>>24,this[t+2]=e>>>16,this[t+1]=e>>>8,this[t]=255&e,t+4},a.prototype.writeUint32BE=a.prototype.writeUInt32BE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,4,4294967295,0),this[t]=e>>>24,this[t+1]=e>>>16,this[t+2]=e>>>8,this[t+3]=255&e,t+4},a.prototype.writeIntLE=function(e,t,r,n){if(e=+e,t>>>=0,!n){var s=Math.pow(2,8*r-1);k(this,e,t,r,s-1,-s)}var i=0,o=1,a=0;for(this[t]=255&e;++i>0)-a&255;return t+r},a.prototype.writeIntBE=function(e,t,r,n){if(e=+e,t>>>=0,!n){var s=Math.pow(2,8*r-1);k(this,e,t,r,s-1,-s)}var i=r-1,o=1,a=0;for(this[t+i]=255&e;--i>=0&&(o*=256);)e<0&&0===a&&0!==this[t+i+1]&&(a=1),this[t+i]=(e/o>>0)-a&255;return t+r},a.prototype.writeInt8=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,1,127,-128),e<0&&(e=255+e+1),this[t]=255&e,t+1},a.prototype.writeInt16LE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,2,32767,-32768),this[t]=255&e,this[t+1]=e>>>8,t+2},a.prototype.writeInt16BE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,2,32767,-32768),this[t]=e>>>8,this[t+1]=255&e,t+2},a.prototype.writeInt32LE=function(e,t,r){return 
e=+e,t>>>=0,r||k(this,e,t,4,2147483647,-2147483648),this[t]=255&e,this[t+1]=e>>>8,this[t+2]=e>>>16,this[t+3]=e>>>24,t+4},a.prototype.writeInt32BE=function(e,t,r){return e=+e,t>>>=0,r||k(this,e,t,4,2147483647,-2147483648),e<0&&(e=4294967295+e+1),this[t]=e>>>24,this[t+1]=e>>>16,this[t+2]=e>>>8,this[t+3]=255&e,t+4},a.prototype.writeFloatLE=function(e,t,r){return x(this,e,t,!0,r)},a.prototype.writeFloatBE=function(e,t,r){return x(this,e,t,!1,r)},a.prototype.writeDoubleLE=function(e,t,r){return A(this,e,t,!0,r)},a.prototype.writeDoubleBE=function(e,t,r){return A(this,e,t,!1,r)},a.prototype.copy=function(e,t,r,n){if(!a.isBuffer(e))throw TypeError("argument should be a Buffer");if(r||(r=0),n||0===n||(n=this.length),t>=e.length&&(t=e.length),t||(t=0),n>0&&n=this.length)throw RangeError("Index out of range");if(n<0)throw RangeError("sourceEnd out of bounds");n>this.length&&(n=this.length),e.length-t>>=0,r=void 0===r?this.length:r>>>0,e||(e=0),"number"==typeof e)for(s=t;s55295&&r<57344){if(!s){if(r>56319||o+1===n){(t-=3)>-1&&i.push(239,191,189);continue}s=r;continue}if(r<56320){(t-=3)>-1&&i.push(239,191,189),s=r;continue}r=(s-55296<<10|r-56320)+65536}else s&&(t-=3)>-1&&i.push(239,191,189);if(s=null,r<128){if((t-=1)<0)break;i.push(r)}else if(r<2048){if((t-=2)<0)break;i.push(r>>6|192,63&r|128)}else if(r<65536){if((t-=3)<0)break;i.push(r>>12|224,r>>6&63|128,63&r|128)}else if(r<1114112){if((t-=4)<0)break;i.push(r>>18|240,r>>12&63|128,r>>6&63|128,63&r|128)}else throw Error("Invalid code point")}return i}function T(e){return n.toByteArray(function(e){if((e=(e=e.split("=")[0]).trim().replace(E,"")).length<2)return"";for(;e.length%4!=0;)e+="=";return e}(e))}function M(e,t,r,n){for(var s=0;s=t.length)&&!(s>=e.length);++s)t[s+r]=e[s];return s}function I(e,t){return e instanceof t||null!=e&&null!=e.constructor&&null!=e.constructor.name&&e.constructor.name===t.name}var B=function(){for(var e="0123456789abcdef",t=Array(256),r=0;r<16;++r)for(var n=16*r,s=0;s<16;++s)t[n+s]=e[r]+e[s];return t}()},9087:function(e,t){/*! ieee754. BSD-3-Clause License. 
Feross Aboukhadijeh */t.read=function(e,t,r,n,s){var i,o,a=8*s-n-1,l=(1<>1,c=-7,u=r?s-1:0,d=r?-1:1,f=e[t+u];for(u+=d,i=f&(1<<-c)-1,f>>=-c,c+=a;c>0;i=256*i+e[t+u],u+=d,c-=8);for(o=i&(1<<-c)-1,i>>=-c,c+=n;c>0;o=256*o+e[t+u],u+=d,c-=8);if(0===i)i=1-h;else{if(i===l)return o?NaN:(f?-1:1)*(1/0);o+=Math.pow(2,n),i-=h}return(f?-1:1)*o*Math.pow(2,i-n)},t.write=function(e,t,r,n,s,i){var o,a,l,h=8*i-s-1,c=(1<>1,d=23===s?5960464477539062e-23:0,f=n?0:i-1,p=n?1:-1,_=t<0||0===t&&1/t<0?1:0;for(isNaN(t=Math.abs(t))||t===1/0?(a=isNaN(t)?1:0,o=c):(o=Math.floor(Math.log(t)/Math.LN2),t*(l=Math.pow(2,-o))<1&&(o--,l*=2),o+u>=1?t+=d/l:t+=d*Math.pow(2,1-u),t*l>=2&&(o++,l/=2),o+u>=c?(a=0,o=c):o+u>=1?(a=(t*l-1)*Math.pow(2,s),o+=u):(a=t*Math.pow(2,u-1)*Math.pow(2,s),o=0));s>=8;e[r+f]=255&a,f+=p,a/=256,s-=8);for(o=o<0;e[r+f]=255&o,f+=p,o/=256,h-=8);e[r+f-p]|=128*_}},2601:function(e,t,r){"use strict";var n,s;e.exports=(null==(n=r.g.process)?void 0:n.env)&&"object"==typeof(null==(s=r.g.process)?void 0:s.env)?r.g.process:r(8960)},692:function(e,t,r){!function(){var t={452:function(e){"use strict";e.exports=r(9875)}},n={};function s(e){var r=n[e];if(void 0!==r)return r.exports;var i=n[e]={exports:{}},o=!0;try{t[e](i,i.exports,s),o=!1}finally{o&&delete n[e]}return i.exports}s.ab="//";var i={};!function(){var e,t=(e=s(452))&&"object"==typeof e&&"default"in e?e.default:e,r=/https?|ftp|gopher|file/;function n(e){"string"==typeof e&&(e=g(e));var n,s,i,o,a,l,h,c,u,d=(s=(n=e).auth,i=n.hostname,o=n.protocol||"",a=n.pathname||"",l=n.hash||"",h=n.query||"",c=!1,s=s?encodeURIComponent(s).replace(/%3A/i,":")+"@":"",n.host?c=s+n.host:i&&(c=s+(~i.indexOf(":")?"["+i+"]":i),n.port&&(c+=":"+n.port)),h&&"object"==typeof h&&(h=t.encode(h)),u=n.search||h&&"?"+h||"",o&&":"!==o.substr(-1)&&(o+=":"),n.slashes||(!o||r.test(o))&&!1!==c?(c="//"+(c||""),a&&"/"!==a[0]&&(a="/"+a)):c||(c=""),l&&"#"!==l[0]&&(l="#"+l),u&&"?"!==u[0]&&(u="?"+u),{protocol:o,host:c,pathname:a=a.replace(/[?#]/g,encodeURIComponent),search:u=u.replace("#","%23"),hash:l});return""+d.protocol+d.host+d.pathname+d.search+d.hash}var o="http://",a=o+"w.w",l=/^([a-z0-9.+-]*:\/\/\/)([a-z0-9.+-]:\/*)?/i,h=/https?|ftp|gopher|file/;function c(e,t){var r="string"==typeof e?g(e):e;e="object"==typeof e?n(e):e;var s=g(t),i="";r.protocol&&!r.slashes&&(i=r.protocol,e=e.replace(r.protocol,""),i+="/"===t[0]||"/"===e[0]?"/":""),i&&s.protocol&&(i="",s.slashes||(i=s.protocol,t=t.replace(s.protocol,"")));var c=e.match(l);c&&!s.protocol&&(e=e.substr((i=c[1]+(c[2]||"")).length),/^\/\/[^/]/.test(t)&&(i=i.slice(0,-1)));var u=new URL(e,a+"/"),d=new URL(t,u).toString().replace(a,""),f=s.protocol||r.protocol;return f+=r.slashes||s.slashes?"//":"",!i&&f?d=d.replace(o,f):i&&(d=d.replace(o,"")),h.test(d)||~t.indexOf(".")||"/"===e.slice(-1)||"/"===t.slice(-1)||"/"!==d.slice(-1)||(d=d.slice(0,-1)),i&&(d=i+("/"===d[0]?d.substr(1):d)),d}function u(){}u.prototype.parse=g,u.prototype.format=n,u.prototype.resolve=c,u.prototype.resolveObject=c;var d=/^https?|ftp|gopher|file/,f=/^(.*?)([#?].*)/,p=/^([a-z0-9.+-]*:)(\/{0,3})(.*)/i,_=/^([a-z0-9.+-]*:)?\/\/\/*/i,m=/^([a-z0-9.+-]*:)(\/{0,2})\[(.*)\]$/i;function g(e,r,s){if(void 0===r&&(r=!1),void 0===s&&(s=!1),e&&"object"==typeof e&&e instanceof u)return e;var i=(e=e.trim()).match(f);e=i?i[1].replace(/\\/g,"/")+i[2]:e.replace(/\\/g,"/"),m.test(e)&&"/"!==e.slice(-1)&&(e+="/");var 
o=!/(^javascript)/.test(e)&&e.match(p),l=_.test(e),h="";o&&(d.test(o[1])||(h=o[1].toLowerCase(),e=""+o[2]+o[3]),o[2]||(l=!1,d.test(o[1])?(h=o[1],e=""+o[3]):e="//"+o[3]),3!==o[2].length&&1!==o[2].length||(h=o[1],e="/"+o[3]));var c,g=(i?i[1]:e).match(/^https?:\/\/[^/]+(:[0-9]+)(?=\/|$)/),y=g&&g[1],w=new u,b="",k="";try{c=new URL(e)}catch(t){b=t,h||s||!/^\/\//.test(e)||/^\/\/.+[@.]/.test(e)||(k="/",e=e.substr(1));try{c=new URL(e,a)}catch(e){return w.protocol=h,w.href=h,w}}w.slashes=l&&!k,w.host="w.w"===c.host?"":c.host,w.hostname="w.w"===c.hostname?"":c.hostname.replace(/(\[|\])/g,""),w.protocol=b?h||null:c.protocol,w.search=c.search.replace(/\\/g,"%5C"),w.hash=c.hash.replace(/\\/g,"%5C");var v=e.split("#");!w.search&&~v[0].indexOf("?")&&(w.search="?"),w.hash||""!==v[1]||(w.hash="#"),w.query=r?t.decode(c.search.substr(1)):w.search.substr(1),w.pathname=k+(o?c.pathname.replace(/['^|`]/g,function(e){return"%"+e.charCodeAt().toString(16).toUpperCase()}).replace(/((?:%[0-9A-F]{2})+)/g,function(e,t){try{return decodeURIComponent(t).split("").map(function(e){var t=e.charCodeAt();return t>256||/^[a-z0-9]$/i.test(e)?e:"%"+t.toString(16).toUpperCase()}).join("")}catch(e){return t}}):c.pathname),"about:"===w.protocol&&"blank"===w.pathname&&(w.protocol="",w.pathname=""),b&&"/"!==e[0]&&(w.pathname=w.pathname.substr(1)),h&&!d.test(h)&&"/"!==e.slice(-1)&&"/"===w.pathname&&(w.pathname=""),w.path=w.pathname+w.search,w.auth=[c.username,c.password].map(decodeURIComponent).filter(Boolean).join(":"),w.port=c.port,y&&!w.host.endsWith(y)&&(w.host+=y,w.port=y.slice(1)),w.href=k?""+w.pathname+w.search+w.hash:n(w);var x=/^(file)/.test(w.href)?["host","hostname"]:[];return Object.keys(w).forEach(function(e){~x.indexOf(e)||(w[e]=w[e]||null)}),w}i.parse=g,i.format=n,i.resolve=c,i.resolveObject=function(e,t){return g(c(e,t))},i.Url=u}(),e.exports=i}()},8960:function(e){!function(){var t={229:function(e){var t,r,n,s=e.exports={};function i(){throw Error("setTimeout has not been defined")}function o(){throw Error("clearTimeout has not been defined")}function a(e){if(t===setTimeout)return setTimeout(e,0);if((t===i||!t)&&setTimeout)return t=setTimeout,setTimeout(e,0);try{return t(e,0)}catch(r){try{return t.call(null,e,0)}catch(r){return t.call(this,e,0)}}}!function(){try{t="function"==typeof setTimeout?setTimeout:i}catch(e){t=i}try{r="function"==typeof clearTimeout?clearTimeout:o}catch(e){r=o}}();var l=[],h=!1,c=-1;function u(){h&&n&&(h=!1,n.length?l=n.concat(l):c=-1,l.length&&d())}function d(){if(!h){var e=a(u);h=!0;for(var t=l.length;t;){for(n=l,l=[];++c1)for(var r=1;r0&&l>a&&(l=a);for(var h=0;h=0?(c=p.substr(0,_),u=p.substr(_+1)):(c=p,u=""),d=decodeURIComponent(c),f=decodeURIComponent(u),Object.prototype.hasOwnProperty.call(i,d))?t(i[d])?i[d].push(f):i[d]=[i[d],f]:i[d]=f}return i};var t=Array.isArray||function(e){return"[object Array]"===Object.prototype.toString.call(e)}},577:function(e){var t=function(e){switch(typeof e){case"string":return e;case"boolean":return e?"true":"false";case"number":return isFinite(e)?e:"";default:return""}};e.exports=function(e,i,o,a){return(i=i||"&",o=o||"=",null===e&&(e=void 0),"object"==typeof e)?n(s(e),function(s){var a=encodeURIComponent(t(s))+o;return r(e[s])?n(e[s],function(e){return a+encodeURIComponent(t(e))}).join(i):a+encodeURIComponent(t(e[s]))}).join(i):a?encodeURIComponent(t(a))+o+encodeURIComponent(t(e)):""};var r=Array.isArray||function(e){return"[object Array]"===Object.prototype.toString.call(e)};function n(e,t){if(e.map)return e.map(t);for(var 
r=[],n=0;n{if(t&&"function"==typeof t.init&&"function"==typeof t.createSessionHandler){let i=n[e];if(void 0===i)n[e]={backend:t,priority:r};else if(i.priority>r)return;else if(i.priority===r&&i.backend!==t)throw Error(`cannot register backend "${e}" using priority ${r}`);if(r>=0){let t=s.indexOf(e);-1!==t&&s.splice(t,1);for(let t=0;t{let t=0===e.length?s:e,r=[];for(let e of t){let t=n[e];if(t){if(t.initialized)return t.backend;if(t.aborted)continue;let n=!!t.initPromise;try{return n||(t.initPromise=t.backend.init()),await t.initPromise,t.initialized=!0,t.backend}catch(s){n||r.push({name:e,err:s}),t.aborted=!0}finally{delete t.initPromise}}}throw Error(`no available backend found. ERR: ${r.map(e=>`[${e.name}] ${e.err}`).join(", ")}`)},a=new class{constructor(){this.wasm={},this.webgl={},this.logLevelInternal="warning"}set logLevel(e){if(void 0!==e){if("string"!=typeof e||-1===["verbose","info","warning","error","fatal"].indexOf(e))throw Error(`Unsupported logging level: ${e}`);this.logLevelInternal=e}}get logLevel(){return this.logLevelInternal}},l="undefined"!=typeof BigInt64Array&&"function"==typeof BigInt64Array.from,h="undefined"!=typeof BigUint64Array&&"function"==typeof BigUint64Array.from,c=new Map([["float32",Float32Array],["uint8",Uint8Array],["int8",Int8Array],["uint16",Uint16Array],["int16",Int16Array],["int32",Int32Array],["bool",Uint8Array],["float64",Float64Array],["uint32",Uint32Array]]),u=new Map([[Float32Array,"float32"],[Uint8Array,"uint8"],[Int8Array,"int8"],[Uint16Array,"uint16"],[Int16Array,"int16"],[Int32Array,"int32"],[Float64Array,"float64"],[Uint32Array,"uint32"]]);l&&(c.set("int64",BigInt64Array),u.set(BigInt64Array,"int64")),h&&(c.set("uint64",BigUint64Array),u.set(BigUint64Array,"uint64"));let d=e=>{let t=1;for(let r=0;r{let s=document.createElement("canvas"),i=s.getContext("2d");if(!e||!i)return n();let o=new Image;o.crossOrigin="Anonymous",o.src=e,o.onload=()=>{s.width=o.width,s.height=o.height,i.drawImage(o,0,0,s.width,s.height);let e=i.getImageData(0,0,s.width,s.height);if(void 0!==t){if(void 0!==t.height&&t.height!==s.height)throw Error("Image input config height doesn't match ImageBitmap height");if(a.height=s.height,void 0!==t.width&&t.width!==s.width)throw Error("Image input config width doesn't match ImageBitmap width");a.width=s.width}else a.height=s.height,a.width=s.width;r(f.bufferToTensor(e.data,a))}});else throw Error("Input data provided is not supported - aborted tensor creation");if(void 0!==r)return f.bufferToTensor(r,a);throw Error("Input data provided is not supported - aborted tensor creation")}toImageData(e){var t,r;let n;let s=document.createElement("canvas").getContext("2d");if(null!=s){let i=this.dims[3],o=this.dims[2],a=this.dims[1],l=void 0!==e&&void 0!==e.format?e.format:"RGB",h=void 0!==e&&(null===(t=e.norm)||void 0===t?void 0:t.mean)!==void 0?e.norm.mean:255,c=void 0!==e&&(null===(r=e.norm)||void 0===r?void 0:r.bias)!==void 0?e.norm.bias:0,u=o*i;if(void 0!==e){if(void 0!==e.height&&e.height!==o)throw Error("Image output config height doesn't match tensor height");if(void 0!==e.width&&e.width!==i)throw Error("Image output config width doesn't match tensor width");if(void 0!==e.format&&4===a&&"RGBA"!==e.format||3===a&&"RGB"!==e.format&&"BGR"!==e.format)throw Error("Tensor format doesn't match input tensor dims")}let d=0,f=1,p=2,_=3,m=0,g=u,y=2*u,w=-1;"RGBA"===l?(m=0,g=u,y=2*u,w=3*u):"RGB"===l?(m=0,g=u,y=2*u):"RBG"===l&&(m=0,y=u,g=2*u),n=s.createImageData(i,o);for(let e=0;e=e.byteLength)throw RangeError(`'byteOffset' is out of range [0, 
${e.byteLength}).`);if(a=e.byteLength-o,"number"==typeof r){if(!Number.isSafeInteger(a=r))throw RangeError("'byteLength' must be an integer.");if(a<=0||o+a>e.byteLength)throw RangeError(`'byteLength' is out of range (0, ${e.byteLength-o}].`);if("object"==typeof n&&null!==n)i=n;else if(void 0!==n)throw TypeError("'options' must be an object.")}else if(void 0!==r)throw TypeError("'byteLength' must be a number.")}else if(void 0!==t)throw TypeError("'options' must be an object.");s=new Uint8Array(e,o,a)}else throw TypeError("Unexpected argument[0]: must be 'path' or 'buffer'.");let a=i.executionProviders||[],l=a.map(e=>"string"==typeof e?e:e.name),h=await o(l),c=await h.createSessionHandler(s,i);return new _(c)}startProfiling(){this.handler.startProfiling()}endProfiling(){this.handler.endProfiling()}get inputNames(){return this.handler.inputNames}get outputNames(){return this.handler.outputNames}}let m=_},4975:function(e,t,r){"use strict";let n,s,i,o;function a(e,t){null!==e&&e(t)}function l(e){return e.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}r.d(t,{OBj:function(){return O},qCb:function(){return ee},EUT:function(){return nn}});let h=class{constructor(){let e=function(...t){return e._call(...t)};return Object.setPrototypeOf(e,new.target.prototype)}_call(...e){throw Error("Must implement _call method in subclass")}};function c(e){return"string"==typeof e||e instanceof String}function u(e){return Number.isInteger(e)||"bigint"==typeof e}function d(e,t,r){let n=e[t];if(void 0!==n)return delete e[t],n;if(void 0===r)throw Error(`Key ${t} does not exist in object.`);return r}function f(...e){return Array.prototype.concat.apply([],e)}var p=r(7147),_=r(1418),m=r(319),g=r(8386),y=r(3342),w=r(692),b=r(495),k=r.t(b,2),v=r(2018),x=r.t(v,2),A=r(2601);let E=["wasm"];if(void 0!==A&&A?.release?.name==="node")n=b??k,E.unshift("cpu");else{n=v??x;let e="undefined"!=typeof navigator&&/iP(hone|od|ad).+16_4.+AppleWebKit/.test(navigator.userAgent);e&&(n.env.wasm.simd=!1)}let{env:z}=n,T="2.4.2",M="undefined"!=typeof self&&"caches"in self,I=!R(g),B=!R(y),S=I&&B,C=S?y.dirname(y.dirname(w.fileURLToPath("file:///workspaces/transformers.js/examples/next/node_modules/@xenova/transformers/src/env.js"))):"./",P=S?y.join(C,"/.cache/"):null,U="/models/",L=S?y.join(C,U):U;z.wasm.wasmPaths=S?y.join(C,"/dist/"):`https://cdn.jsdelivr.net/npm/@xenova/transformers@${T}/dist/`;let O={backends:{onnx:z,tfjs:{}},__dirname:C,version:T,allowRemoteModels:!0,remoteHost:"https://huggingface.co/",remotePathTemplate:"{model}/resolve/{revision}/",allowLocalModels:!0,localModelPath:L,useFS:I,useBrowserCache:M,useFSCache:I,cacheDir:P};function R(e){return 0===Object.keys(e).length}var j=r(2601),N=r(7133).lW;globalThis.ReadableStream||(globalThis.ReadableStream=m.ReadableStream);class ${_CONTENT_TYPE_MAP={txt:"text/plain",html:"text/html",css:"text/css",js:"text/javascript",json:"application/json",png:"image/png",jpg:"image/jpeg",jpeg:"image/jpeg",gif:"image/gif"};constructor(e){if(this.filePath=e,this.headers=new Headers,this.exists=p.existsSync(e),this.exists){this.status=200,this.statusText="OK";let t=p.statSync(e);this.headers.set("content-length",t.size.toString()),this.updateContentType();let r=this;this.body=new ReadableStream({start(e){r.arrayBuffer().then(t=>{e.enqueue(new Uint8Array(t)),e.close()})}})}else this.status=404,this.statusText="Not Found",this.body=null}updateContentType(){let e=this.filePath.toString().split(".").pop().toLowerCase();this.headers.set("content-type",this._CONTENT_TYPE_MAP[e]??"application/octet-stream")}clone(){let 
e=new $(this.filePath);return e.exists=this.exists,e.status=this.status,e.statusText=this.statusText,e.headers=new Headers(this.headers),e}async arrayBuffer(){let e=await p.promises.readFile(this.filePath);return e.buffer}async blob(){let e=await p.promises.readFile(this.filePath);return new Blob([e],{type:this.headers.get("content-type")})}async text(){let e=await p.promises.readFile(this.filePath,"utf8");return e}async json(){return JSON.parse(await this.text())}}function F(e,t=null){let r;try{r=new URL(e)}catch(e){return!1}return(!t||!!t.includes(r.hostname))&&("http:"===r.protocol||"https:"===r.protocol)}async function G(e){if(O.useFS&&!F(e))return new $(e);if(void 0===j||j?.release?.name!=="node")return fetch(e);{let t=!!j.env?.TESTING_REMOTELY,r=O.version,n=new Headers;n.set("User-Agent",`transformers.js/${r}; is_ci/${t};`);let s=F(e,["huggingface.co","hf.co"]);if(s){let e=j.env?.HF_ACCESS_TOKEN;e&&n.set("Authorization",`Bearer ${e}`)}return fetch(e,{headers:n})}}let q={400:"Bad request error occurred while trying to load file",401:"Unauthorized access to file",403:"Forbidden access to file",404:"Could not locate file",408:"Request timeout error occurred while trying to load file",500:"Internal server error error occurred while trying to load file",502:"Bad gateway error occurred while trying to load file",503:"Service unavailable error occurred while trying to load file",504:"Gateway timeout error occurred while trying to load file"};class D{constructor(e){this.path=e}async match(e){let t=_.join(this.path,e),r=new $(t);return r.exists?r:void 0}async put(e,t){let r=N.from(await t.arrayBuffer()),n=_.join(this.path,e);try{await p.promises.mkdir(_.dirname(n),{recursive:!0}),await p.promises.writeFile(n,r)}catch(e){console.warn("An error occurred while writing the file to cache:",e)}}}async function W(e,...t){for(let r of t)try{let t=await e.match(r);if(t)return t}catch(e){continue}}async function K(e,t,r=!0,n={}){let s,i,o,l;if(!O.allowLocalModels){if(n.local_files_only)throw Error("Invalid configuration detected: local models are disabled (`env.allowLocalModels=false`) but you have requested to only use local models (`local_files_only=true`).");if(!O.allowRemoteModels)throw Error("Invalid configuration detected: both local and remote models are disabled. 
Fix by setting `env.allowLocalModels` or `env.allowRemoteModels` to `true`.")}if(a(n.progress_callback,{status:"initiate",name:e,file:t}),!s&&O.useBrowserCache){if("undefined"==typeof caches)throw Error("Browser cache is not available in this environment.");try{s=await caches.open("transformers-cache")}catch(e){console.warn("An error occurred while opening the browser cache:",e)}}!s&&O.useFSCache&&(s=new D(n.cache_dir??O.cacheDir));let h=n.revision??"main",c=V(e,t),u=V(O.localModelPath,c),d=V(O.remoteHost,O.remotePathTemplate.replaceAll("{model}",e).replaceAll("{revision}",h),t),f="main"===h?c:V(e,h,t),p=s instanceof D?f:d;if(s&&(l=await W(s,u,p)),void 0===l){if(O.allowLocalModels){let e=F(c);if(e){if(n.local_files_only)throw Error(`\`local_files_only=true\`, but attempted to load a remote file from: ${c}.`);if(!O.allowRemoteModels)throw Error(`\`env.allowRemoteModels=false\`, but attempted to load a remote file from: ${c}.`)}else try{l=await G(u),i=u}catch(e){console.warn(`Unable to load from local path "${u}": "${e}"`)}}if(void 0===l||404===l.status){if(n.local_files_only||!O.allowRemoteModels){if(!r)return null;throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${u}".`)}if(200!==(l=await G(d)).status)return function(e,t,r){if(!r)return null;let n=q[e]??`Error (${e}) occurred while trying to load file`;throw Error(`${n}: "${t}".`)}(l.status,d,r);i=p}s&&l instanceof Response&&200===l.status&&(o=l.clone())}a(n.progress_callback,{status:"download",name:e,file:t});let _=await X(l,r=>{a(n.progress_callback,{status:"progress",...r,name:e,file:t})});return o&&i&&await s.match(i)===void 0&&await s.put(i,o).catch(e=>{console.warn(`Unable to add response to browser cache: ${e}.`)}),a(n.progress_callback,{status:"done",name:e,file:t}),_}async function H(e,t,r=!0,n={}){let s=await K(e,t,r,n);return null===s?{}:JSON.parse(new TextDecoder("utf-8").decode(s))}async function X(e,t){let r=e.headers.get("Content-Length");null===r&&console.warn("Unable to determine content-length from response headers. 
Will expand buffer when needed.");let n=parseInt(r??"0"),s=new Uint8Array(n),i=0,o=e.body.getReader();async function a(){let{done:e,value:r}=await o.read();if(e)return;let l=i+r.length;if(l>n){n=l;let e=new Uint8Array(n);e.set(s),s=e}s.set(r,i),i=l;let h=i/n*100;return t({progress:h,loaded:i,total:n}),a()}return await a(),s}function V(...e){return(e=e.map((t,r)=>(r&&(t=t.replace(RegExp("^/"),"")),r!==e.length-1&&(t=t.replace(RegExp("/$"),"")),t))).join("/")}function J(e){let t=Z(e)[0],r=e.map(e=>Math.exp(e-t)),n=r.reduce((e,t)=>e+t,0),s=r.map(e=>e/n);return s}function Y(e,t=0){return e=Array.from(e).map((e,t)=>[t,e]).sort((e,t)=>t[1]-e[1]),t>0&&(e=e.slice(0,t)),e}function Z(e){if(0===e.length)throw Error("Array must not be empty");let t=e[0],r=0;for(let n=1;nt&&(t=e[n],r=n);return[t,r]}class Q{constructor(e){if(this.size=0|e,this.size<=1||(this.size&this.size-1)!=0)throw Error("FFT size must be a power of two and bigger than 1");this._csize=e<<1,this.table=new Float32Array(2*this.size);for(let e=0;ee;e<<=1)++t;this._width=t%2==0?t-1:t,this._bitrev=new Int32Array(1<>>t&3)<>>1);for(let t=0;t>>1]=e[t];return r}toComplexArray(e,t){let r=t||this.createComplexArray();for(let t=0;t>>1],r[t+1]=0;return r}completeSpectrum(e){let t=this._csize,r=t>>>1;for(let n=2;n>=2;a>=2;a>>=2){let t=(l=i/a<<1)>>>2;for(n=0;n>>1,i>>>1)}else for(a=0,l=0;a>>1,i>>>1,r)}for(i>>=2;i>=2;i>>=2){o=n/i<<1;let t=o>>>1,s=t>>>1,l=s>>>1;for(a=0;a=e.length&&(s=2*(e.length-1)-s),n[i++]=e[s]}n.sort(),r[t]=n[s]}return r}function et(e,t){let r=Math.pow(10,t);return Math.round(e*r)/r}let er=n.Tensor;class en extends er{constructor(...e){return e[0]instanceof n.Tensor?super(e[0].type,e[0].data,e[0].dims):super(...e),new Proxy(this,{get:(e,t)=>{if("string"==typeof t){let r=Number(t);if(Number.isInteger(r))return e._getitem(r)}return e[t]},set:(e,t,r)=>e[t]=r})}*[Symbol.iterator](){let[e,...t]=this.dims;if(t.length>0){let r=t.reduce((e,t)=>e*t);for(let n=0;n0))return new en(this.type,[this.data[e]],r);{let t=r.reduce((e,t)=>e*t);return this._subarray(e,t,r)}}indexOf(e){for(let t=0;te*t);if(r!==n)throw Error(`cannot reshape array of size ${r} into shape (${t})`);let s=e;for(let e=t.length-1;e>=0;e--)s=s.reduce((r,n)=>{let s=r[r.length-1];return s.lengths[1])throw Error(`Invalid slice: ${s}`);let e=[Math.max(s[0],0),Math.min(s[1],this.dims[n])];r.push(e),t.push(e[1]-e[0])}else throw Error(`Invalid slice: ${s}`)}let n=r.map(([e,t])=>t-e),s=n.reduce((e,t)=>e*t),i=new this.data.constructor(s),o=this.stride();for(let e=0;e=0;--s){let e=n[s];t+=(i%e+r[s][0])*o[s],i=Math.floor(i/e)}i[e]=this.data[t]}return new en(this.type,i,t)}transpose(...e){return es(this,e)}sum(e=null,t=!1){return this.norm(1,e,t)}norm(e="fro",t=null,r=!1){if("fro"===e)e=2;else if("string"==typeof e)throw Error(`Unsupported norm: ${e}`);if(null===t){let t=this.data.reduce((t,r)=>t+r**e,0)**(1/e);return new en(this.type,[t],[])}t=el(t,this.dims.length);let n=this.dims.slice();n[t]=1;let s=new this.data.constructor(this.data.length/this.dims[t]);for(let r=0;r=0;--e){let r=this.dims[e];if(e!==t){let t=s%r;i+=t*o,o*=n[e]}s=Math.floor(s/r)}s[i]+=this.data[r]**e}if(1!==e)for(let t=0;t=0;--r){let e=this.dims[r];if(r!==t){let t=s%e;n+=t*i,i*=this.dims[r]}s=Math.floor(s/e)}this.data[e]/=r.data[n]}return this}normalize(e=2,t=1){return this.clone().normalize_(e,t)}stride(){return function(e){let t=Array(e.length);for(let r=e.length-1,n=1;r>=0;--r)t[r]=n,n*=e[r];return t}(this.dims)}squeeze(e=null){return new en(this.type,this.data,eo(this.dims,e))}squeeze_(e=null){return 
this.dims=eo(this.dims,e),this}unsqueeze(e=null){return new en(this.type,this.data,ea(this.dims,e))}unsqueeze_(e=null){return this.dims=ea(this.dims,e),this}flatten_(e=0,t=-1){t=(t+this.dims.length)%this.dims.length;let r=this.dims.slice(0,e),n=this.dims.slice(e,t+1),s=this.dims.slice(t+1);return this.dims=[...r,n.reduce((e,t)=>e*t,1),...s],this}flatten(e=0,t=-1){return this.clone().flatten_(e,t)}view(...e){let t=-1;for(let r=0;rn!==t?e*r:e,1);e[t]=this.data.length/r}return new en(this.type,this.data,e)}neg_(){for(let e=0;e=0;--e)s[e]=i,n[e]=t[r[e]],i*=n[e];let i=r.map((e,t)=>s[r.indexOf(t)]),o=new e.constructor(e.length);for(let r=0;r=0;--e)n+=s%t[e]*i[e],s=Math.floor(s/t[e]);o[n]=e[r]}return[o,n]}(e.data,e.dims,t);return new en(e.type,r,n)}function ei(e,[t,r],n="bilinear",s=!1){let i=e.dims.at(-3)??1,o=e.dims.at(-2),a=e.dims.at(-1),l=function(e,[t,r,n],[s,i],o="bilinear",a=!1){let l=i/n,h=s/r,c=new e.constructor(s*i*t),u=r*n,d=s*i;for(let o=0;o1!==e):"number"==typeof t?1===e[t]&&e.splice(t,1):Array.isArray(t)&&(e=e.filter((e,r)=>1!==e||!t.includes(r))),e}function ea(e,t){return t=el(t,e.length+1),(e=e.slice()).splice(t,0,1),e}function el(e,t,r=null){if(e<-t||e>=t)throw Error(`IndexError: index ${e} is out of bounds for dimension${null===r?"":" "+r} with size ${t}`);return e<0&&(e=(e%t+t)%t),e}function eh(e,t=0){t=el(t,e[0].dims.length);let r=e[0].dims.slice();r[t]=e.reduce((e,r)=>e+r.dims[t],0);let n=r.reduce((e,t)=>e*t,1),s=new e[0].data.constructor(n),i=e[0].type;if(0===t){let t=0;for(let r of e)s.set(r.data,t),t+=r.data.length}else{let n=0;for(let i=0;i=0;--s){let e=o.dims[s],h=a%e;s===t&&(h+=n),i+=h*l,l*=r[s],a=Math.floor(a/e)}s[i]=o.data[e]}n+=o.dims[t]}}return new en(i,s,r)}function ec(e,t=null,r=!1){if(null===t){let t=e.data.reduce((e,t)=>e+t,0);return new en(e.type,[t/e.data.length],[])}t=el(t,e.dims.length);let n=e.dims.slice();n[t]=1;let s=new e.data.constructor(e.data.length/e.dims[t]);for(let r=0;r=0;--s){let r=e.dims[s];if(s!==t){let e=o%r;i+=e*a,a*=n[s]}o=Math.floor(o/r)}s[i]+=e.data[r]}if(1!==e.dims[t])for(let r=0;rthis.tokens_to_ids.get(e)??this.unk_token_id);return this.fuse_unk&&(t=function(e,t){let r=[],n=0;for(;nthis.vocab[e]??this.unk_token)}}class em extends e_{constructor(e){for(let[t,r]of(super(e),this.tokens_to_ids=e.vocab,this.unk_token_id=this.tokens_to_ids.get(e.unk_token),this.unk_token=e.unk_token,this.vocab=Array(this.tokens_to_ids.size),this.tokens_to_ids))this.vocab[r]=t}encode(e){let t=[];for(let r of e){let e=[...r],n=!1,s=0,i=[];for(;s0&&(n=this.config.continuing_subword_prefix+n),this.tokens_to_ids.has(n)){r=n;break}--t}if(null===r){n=!0;break}i.push(r),s=t}n?t.push(this.unk_token):t.push(...i)}return t}}class eg extends e_{constructor(e,t){super(e),this.vocab=Array(e.vocab.size),this.scores=Array(e.vocab.size);let r=0;e.vocab.forEach((e,t)=>{this.vocab[r]=t,this.scores[r]=e,++r}),this.unk_token_id=e.unk_id,this.unk_token=this.vocab[e.unk_id],this.tokens_to_ids=new Map(this.vocab.map((e,t)=>[e,t])),this.bosToken=" ",this.bosTokenId=this.tokens_to_ids.get(this.bosToken),this.eosToken=t.eos_token,this.eosTokenId=this.tokens_to_ids.get(this.eosToken),this.unkToken=this.vocab[this.unk_token_id],this.minScore=function(e){if(0===e.length)throw Error("Array must not be empty");let t=e[0],r=0;for(let n=1;n{let e=[...Array.from({length:94},(e,t)=>t+33),...Array.from({length:12},(e,t)=>t+161),...Array.from({length:82},(e,t)=>t+174)],t=e.slice(),r=0;for(let n=0;n<256;++n)e.includes(n)||(e.push(n),t.push(256+r),r+=1);let n=t.map(e=>String.fromCharCode(e));return 
Object.fromEntries(e.map((e,t)=>[e,n[t]]))})(),ew=Object.fromEntries(Object.entries(ey).map(([e,t])=>[t,e]));class eb extends e_{constructor(e){for(let[t,r]of(super(e),this.BPE_SPLIT_TOKEN=" ",this.tokens_to_ids=e.vocab,this.unk_token_id=this.tokens_to_ids.get(e.unk_token),this.unk_token=e.unk_token,this.vocab=Array(this.tokens_to_ids.size),this.tokens_to_ids))this.vocab[r]=t;this.bpe_ranks=Object.fromEntries(e.merges.map((e,t)=>[e,t])),this.merges=e.merges.map(e=>e.split(this.BPE_SPLIT_TOKEN)),this.end_of_word_suffix=e.end_of_word_suffix,this.byte_fallback=this.config.byte_fallback??!1,this.byte_fallback&&(this.text_encoder=new TextEncoder),this.cache=Object.create(null),this.fuse_unk??=this.config.fuse_unk}get_pairs(e){let t=new Set,r=e[0];for(let n=1;n(this.bpe_ranks[e]??1/0)<=(this.bpe_ranks[t]??1/0)?e:t);if(!(e in this.bpe_ranks))break;let[n,s]=e.split(this.BPE_SPLIT_TOKEN),i=[],o=0,a=-1;for(;o`<0x${e.toString(16).toUpperCase().padStart(2,"0")}>`)):t.push(this.unk_token);return t}}class ek extends h{constructor(e){super(),this.config=e}static fromConfig(e){if(null===e)return null;switch(e.type){case"BertNormalizer":return new eI(e);case"Precompiled":return new eJ(e);case"Sequence":return new eM(e);case"Replace":return new ev(e);case"NFC":return new ex(e);case"NFKD":return new eA(e);case"StripAccents":return new eE(e);case"Lowercase":return new ez(e);case"Prepend":return new eT(e);default:throw Error(`Unknown Normalizer type: ${e.type}`)}}normalize(e){throw Error("normalize should be implemented in subclass.")}_call(e){return this.normalize(e)}}class ev extends ek{normalize(e){let t=ed(this.config.pattern);return null===t?e:e=e.replaceAll(t,this.config.content)}}class ex extends ek{normalize(e){return e=e.normalize("NFC")}}class eA extends ek{normalize(e){return e=e.normalize("NFKD")}}class eE extends ek{normalize(e){return e=e.replace(/[\u0300-\u036f]/g,"")}}class ez extends ek{normalize(e){return e=e.toLowerCase()}}class eT extends ek{normalize(e){return e=this.config.prepend+e}}class eM extends ek{constructor(e){super(e),this.normalizers=e.normalizers.map(e=>ek.fromConfig(e))}normalize(e){return this.normalizers.reduce((e,t)=>t.normalize(e),e)}}class eI extends ek{_tokenize_chinese_chars(e){let t=[];for(let r=0;r=19968&&e<=40959||e>=13312&&e<=19903||e>=131072&&e<=173791||e>=173824&&e<=177983||e>=177984&&e<=178207||e>=178208&&e<=183983||e>=63744&&e<=64255||e>=194560&&e<=195103}stripAccents(e){return e.normalize("NFD").replace(/[\u0300-\u036f]/g,"")}normalize(e){return this.config.handle_chinese_chars&&(e=this._tokenize_chinese_chars(e)),this.config.lowercase?(e=e.toLowerCase(),!1!==this.config.strip_accents&&(e=this.stripAccents(e))):this.config.strip_accents&&(e=this.stripAccents(e)),e}}class eB extends h{static fromConfig(e){if(null===e)return null;switch(e.type){case"BertPreTokenizer":return new eS(e);case"Sequence":return new eY(e);case"WhitespaceSplit":return new eZ(e);case"Metaspace":return new eX(e);case"ByteLevel":return new eC(e);case"Split":return new eP(e);case"Punctuation":return new eU(e);case"Digits":return new eL(e);default:throw Error(`Unknown PreTokenizer type: ${e.type}`)}}pre_tokenize_text(e){throw Error("pre_tokenize_text should be implemented in subclass.")}pre_tokenize(e){return(Array.isArray(e)?e.map(e=>this.pre_tokenize_text(e)):this.pre_tokenize_text(e)).flat()}_call(e){return this.pre_tokenize(e)}}class eS extends eB{constructor(e){super(),this.pattern=RegExp(`[^\\s${ep}]+|[${ep}]`,"gu")}pre_tokenize_text(e){return e.trim().match(this.pattern)||[]}}class eC 
extends eB{constructor(e){super(),this.config=e,this.add_prefix_space=this.config.add_prefix_space,this.trim_offsets=this.config.trim_offsets,this.use_regex=this.config.use_regex??!0,this.pattern=/'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+/gu,this.byte_encoder=ey,this.text_encoder=new TextEncoder}pre_tokenize_text(e){return(this.use_regex?e.match(this.pattern)||[]:[e]).map(e=>(this.add_prefix_space&&!e.startsWith(" ")&&(e=" "+e),e=Array.from(this.text_encoder.encode(e),e=>this.byte_encoder[e]).join("")))}}class eP extends eB{constructor(e){super(),this.config=e,this.pattern=ed(this.config.pattern,this.config.invert)}pre_tokenize_text(e){return null===this.pattern?[]:this.config.invert?e.match(this.pattern)||[]:e.split(this.pattern).filter(e=>e)}}class eU extends eB{constructor(e){super(),this.config=e,this.pattern=RegExp(`[^${ep}]+|[${ep}]+`,"gu")}pre_tokenize_text(e){return e.match(this.pattern)||[]}}class eL extends eB{constructor(e){super(),this.config=e;let t=`[^\\d]+|\\d${this.config.individual_digits?"":"+"}`;this.pattern=RegExp(t,"gu")}pre_tokenize_text(e){return e.match(this.pattern)||[]}}class eO extends h{constructor(e){super(),this.config=e}static fromConfig(e){if(null===e)return null;switch(e.type){case"TemplateProcessing":return new ej(e);case"ByteLevel":return new eN(e);case"RobertaProcessing":return new eR(e);default:throw Error(`Unknown PostProcessor type: ${e.type}`)}}post_process(e,...t){throw Error("post_process should be implemented in subclass.")}_call(e,...t){return this.post_process(e,...t)}}class eR extends eO{constructor(e){super(e),this.cls=e.cls[0],this.sep=e.sep[0]}post_process(e,t=null){return e=f([this.cls],e,[this.sep]),null!==t&&(e=f(e,[this.sep],t,[this.sep])),e}}class ej extends eO{constructor(e){super(e),this.single=e.single,this.pair=e.pair}post_process(e,t=null){let r=null===t?this.single:this.pair,n=[];for(let s of r)"SpecialToken"in s?n.push(s.SpecialToken.id):"Sequence"in s&&("A"===s.Sequence.id?n=f(n,e):"B"===s.Sequence.id&&(n=f(n,t)));return n}}class eN extends eO{post_process(e){return e}}class e$ extends h{constructor(e){super(),this.config=e,this.added_tokens=[],this.end_of_word_suffix=null,this.trim_offsets=e.trim_offsets}static fromConfig(e){switch(e.type){case"WordPiece":return new eW(e);case"Metaspace":return new eV(e);case"ByteLevel":return new eK(e);case"Replace":return new eF(e);case"ByteFallback":return new eG(e);case"Fuse":return new eq(e);case"Strip":return new eD(e);case"Sequence":return new eH(e);default:throw Error(`Unknown Decoder type: ${e.type}`)}}_call(e){return this.decode(e)}decode(e){return this.decode_chain(e).join("")}decode_chain(e){throw Error("`decode_chain` should be implemented in subclass.")}}class eF extends e${constructor(e){super(e)}decode_chain(e){let t=ed(this.config.pattern);return null===t?e:e.map(e=>e.replaceAll(t,this.config.content))}}class eG extends e${constructor(e){super(e),this.text_decoder=new TextDecoder}decode_chain(e){let t=[],r=[];for(let n of e){let e=null;if(6===n.length&&n.startsWith("<0x")&&n.endsWith(">")){let t=parseInt(n.slice(3,5),16);isNaN(t)||(e=t)}if(null!==e)r.push(e);else{if(r.length>0){let e=this.text_decoder.decode(Uint8Array.from(r));t.push(e),r=[]}t.push(n)}}if(r.length>0){let e=this.text_decoder.decode(Uint8Array.from(r));t.push(e),r=[]}return t}}class eq extends e${constructor(e){super(e)}decode_chain(e){return[e.join("")]}}class eD extends 
e${constructor(e){super(e),this.content=this.config.content,this.start=this.config.start,this.stop=this.config.stop}decode_chain(e){return e.map(e=>{let t=0;for(let r=0;r(0!==t&&(e=e.startsWith(this.config.prefix)?e.replace(this.config.prefix,""):" "+e),this.cleanup&&(e=ef(e)),e))}}class eK extends e${constructor(e){super(e),this.byte_decoder=ew,this.text_decoder=new TextDecoder("utf-8",{fatal:!1,ignoreBOM:!0}),this.end_of_word_suffix=null}convert_tokens_to_string(e){let t=e.join(""),r=new Uint8Array([...t].map(e=>this.byte_decoder[e]));return this.text_decoder.decode(r)}decode_chain(e){let t=[],r=[];for(let n of e)this.added_tokens.includes(n)?(r.length>0&&(t.push(this.convert_tokens_to_string(r)),r=[]),t.push(n)):r.push(n);return r.length>0&&t.push(this.convert_tokens_to_string(r)),t}}class eH extends e${constructor(e){super(e),this.decoders=e.decoders.map(e=>e$.fromConfig(e))}decode_chain(e){return this.decoders.reduce((e,t)=>t.decode_chain(e),e)}}class eX extends eB{constructor(e){super(),this.addPrefixSpace=e.add_prefix_space,this.replacement=e.replacement,this.strRep=e.str_rep||this.replacement}pre_tokenize(e){"string"==typeof e&&(e=e.trimStart().split(/\s+/));let t=[];for(let r of e){let e=r.replaceAll(" ",this.strRep);this.addPrefixSpace&&!e.startsWith(this.replacement)&&(e=this.strRep+e),t.push(e)}return t}}class eV extends e${constructor(e){super(e),this.addPrefixSpace=e.add_prefix_space,this.replacement=e.replacement}decode_chain(e){let t=[];for(let r=0;reB.fromConfig(e))}pre_tokenize_text(e){return"string"==typeof e&&(e=[e]),this.tokenizers.reduce((e,t)=>t.pre_tokenize(e),e)}}class eZ extends eB{constructor(e){super()}pre_tokenize_text(e){return e.match(/\S+/g)||[]}}class eQ extends h{constructor(e,t){for(let r of(super(),this.normalizer=ek.fromConfig(e.normalizer),this.pre_tokenizer=eB.fromConfig(e.pre_tokenizer),e.model.vocab&&(Array.isArray(e.model.vocab)||(e.model.vocab=Object.entries(e.model.vocab)),e.model.vocab=new Map(e.model.vocab)),this.model=e_.fromConfig(e.model,t),this.post_processor=eO.fromConfig(e.post_processor),this.decoder=e$.fromConfig(e.decoder),this.decoder.end_of_word_suffix=this.model.end_of_word_suffix,this.special_tokens=[],this.all_special_ids=[],this.added_tokens=[],e.added_tokens)){let e=r.id,t=r.content;this.added_tokens.push(t),this.model.tokens_to_ids.set(t,e),this.model.vocab[e]=t,r.special&&(this.special_tokens.push(t),this.all_special_ids.push(e))}this.decoder.added_tokens=this.added_tokens,this.added_tokens_regex=RegExp("("+this.added_tokens.map(l).join("|")+")"),this.mask_token=this.getToken(t,"mask_token"),this.mask_token_id=this.model.tokens_to_ids.get(this.mask_token),this.pad_token=this.getToken(t,"pad_token","eos_token"),this.pad_token_id=this.model.tokens_to_ids.get(this.pad_token),this.sep_token=this.getToken(t,"sep_token"),this.sep_token_id=this.model.tokens_to_ids.get(this.sep_token),this.model_max_length=t.model_max_length,this.remove_space=t.remove_space,this.clean_up_tokenization_spaces=t.clean_up_tokenization_spaces??!0,this.padding_side="right"}getToken(e,...t){for(let r of t){let t=e[r];if(t){if("object"!=typeof t)return t;if("AddedToken"===t.__type)return t.content;throw Error(`Unknown token: ${t}`)}}return null}static async from_pretrained(e,{progress_callback:t=null,config:r=null,cache_dir:n=null,local_files_only:s=!1,revision:i="main"}={}){let o=await eu(e,{progress_callback:t,config:r,cache_dir:n,local_files_only:s,revision:i});return new this(...o)}prepare_model_inputs(e){return 
e}_call(e,{text_pair:t=null,padding:r=!1,truncation:n=null,max_length:s=null,return_tensor:i=!0}={}){let o;if(Array.isArray(e)){if(0===e.length)throw Error("text array must be non-empty");if(null!==t){if(Array.isArray(t)){if(e.length!==t.length)throw Error("text and text_pair must have the same length")}else throw Error("text_pair must also be an array");o=e.map((e,r)=>this.encode(e,t[r]))}else o=e.map(e=>this.encode(e))}else{if(null===e)throw Error("text may not be null");if(Array.isArray(t))throw Error("When specifying `text_pair`, since `text` is a string, `text_pair` must also be a string (i.e., not an array).");o=[this.encode(e,t)]}let a=Z(o.map(e=>e.length))[0];null===s&&(s=a),s=Math.min(s,this.model_max_length);let l=[];if(r||n)for(let e=0;es)n&&(o[e]=o[e].slice(0,s)),l.push(Array(o[e].length).fill(1));else if(r){let t=s-o[e].length;"right"===this.padding_side?(l.push(Array(o[e].length).fill(1).concat(Array(t).fill(0))),o[e].push(...Array(t).fill(this.pad_token_id))):(l.push(Array(t).fill(0).concat(Array(o[e].length).fill(1))),o[e].unshift(...Array(t).fill(this.pad_token_id)))}else l.push(Array(o[e].length).fill(1))}else l=o.map(e=>Array(e.length).fill(1));if(i){if(!(r&&n)&&o.some(e=>e.length!==o[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=true' and 'truncation=true' to have batched tensors with the same length.");let e=[o.length,o[0].length];o=new en("int64",BigInt64Array.from(o.flat().map(BigInt)),e),l=new en("int64",BigInt64Array.from(l.flat().map(BigInt)),e)}else Array.isArray(e)||(o=o[0],l=l[0]);let h={input_ids:o,attention_mask:l};return this.prepare_model_inputs(h)}_encode_text(e){if(null===e)return null;let t=e.split(this.added_tokens_regex).filter(e=>e);return t.map(e=>{if(this.added_tokens.includes(e))return e;{!0===this.remove_space&&(e=e.trim().split(/\s+/).join(" ")),null!==this.normalizer&&(e=this.normalizer(e));let t=null!==this.pre_tokenizer?this.pre_tokenizer(e):[e];return this.model(t)}}).flat()}encode(e,t=null){let r=this._encode_text(e),n=this._encode_text(t),s=null!==this.post_processor?this.post_processor(r,n):f(r??[],n??[]);return this.model.convert_tokens_to_ids(s)}batch_decode(e,t={}){return e.map(e=>this.decode(e,t))}decode(e,t={}){if(!Array.isArray(e)||0===e.length||!u(e[0]))throw Error("token_ids must be a non-empty array of integers.");return this.decode_single(e,t)}decode_single(e,{skip_special_tokens:t=!1,clean_up_tokenization_spaces:r=null}){let n=this.model.convert_ids_to_tokens(e);t&&(n=n.filter(e=>!this.special_tokens.includes(e)));let s=this.decoder(n);return this.decoder.end_of_word_suffix&&(s=s.replaceAll(this.decoder.end_of_word_suffix," "),t&&(s=s.trim())),(r??this.clean_up_tokenization_spaces)&&(s=ef(s)),s}}function e0(e){if(e.input_ids instanceof en)e.token_type_ids=new en("int64",new BigInt64Array(e.input_ids.data.length),e.input_ids.dims);else if(Array.isArray(e.input_ids))Array.isArray(e.input_ids[0])?e.token_type_ids=e.input_ids.map(e=>Array(e.length).fill(0)):e.token_type_ids=Array(e.input_ids.length).fill(0);else throw Error("Input ids must be a Tensor or an Array");return e}class e1 extends eQ{prepare_model_inputs(e){return e0(e)}}class e2 extends eQ{prepare_model_inputs(e){return e0(e)}}class e3 extends eQ{prepare_model_inputs(e){return e0(e)}}class e6 extends eQ{prepare_model_inputs(e){return e0(e)}}class e4 extends eQ{}class e5 extends eQ{}class e8 extends eQ{}class e7 extends eQ{}class e9 extends eQ{}class te extends eQ{}class tt extends eQ{}class tr extends 
eQ{}class tn extends eQ{}class ts extends eQ{prepare_model_inputs(e){return e0(e)}}class ti extends eQ{}class to extends eQ{constructor(e,t){super(e,t),this.languageRegex=/^[a-z]{3}_[A-Z][a-z]{3}$/,this.language_codes=this.special_tokens.filter(e=>this.languageRegex.test(e))}_build_translation_inputs(e,t,r){if(!this.language_codes.includes(r.tgt_lang))throw Error(`Target language code "${r.tgt_lang}" is not valid. Must be one of: {${this.language_codes.join(", ")}}`);if(void 0!==r.src_lang){if(!this.language_codes.includes(r.src_lang))throw Error(`Source language code "${r.src_lang}" is not valid. Must be one of: {${this.language_codes.join(", ")}}`);for(let e of this.post_processor.config.single)if("SpecialToken"in e&&this.languageRegex.test(e.SpecialToken.id)){e.SpecialToken.id=r.src_lang;break}}return r.forced_bos_token_id=this.model.convert_tokens_to_ids([r.tgt_lang])[0],this._call(e,t)}}let ta=[["en","english"],["zh","chinese"],["de","german"],["es","spanish"],["ru","russian"],["ko","korean"],["fr","french"],["ja","japanese"],["pt","portuguese"],["tr","turkish"],["pl","polish"],["ca","catalan"],["nl","dutch"],["ar","arabic"],["sv","swedish"],["it","italian"],["id","indonesian"],["hi","hindi"],["fi","finnish"],["vi","vietnamese"],["he","hebrew"],["uk","ukrainian"],["el","greek"],["ms","malay"],["cs","czech"],["ro","romanian"],["da","danish"],["hu","hungarian"],["ta","tamil"],["no","norwegian"],["th","thai"],["ur","urdu"],["hr","croatian"],["bg","bulgarian"],["lt","lithuanian"],["la","latin"],["mi","maori"],["ml","malayalam"],["cy","welsh"],["sk","slovak"],["te","telugu"],["fa","persian"],["lv","latvian"],["bn","bengali"],["sr","serbian"],["az","azerbaijani"],["sl","slovenian"],["kn","kannada"],["et","estonian"],["mk","macedonian"],["br","breton"],["eu","basque"],["is","icelandic"],["hy","armenian"],["ne","nepali"],["mn","mongolian"],["bs","bosnian"],["kk","kazakh"],["sq","albanian"],["sw","swahili"],["gl","galician"],["mr","marathi"],["pa","punjabi"],["si","sinhala"],["km","khmer"],["sn","shona"],["yo","yoruba"],["so","somali"],["af","afrikaans"],["oc","occitan"],["ka","georgian"],["be","belarusian"],["tg","tajik"],["sd","sindhi"],["gu","gujarati"],["am","amharic"],["yi","yiddish"],["lo","lao"],["uz","uzbek"],["fo","faroese"],["ht","haitian creole"],["ps","pashto"],["tk","turkmen"],["nn","nynorsk"],["mt","maltese"],["sa","sanskrit"],["lb","luxembourgish"],["my","myanmar"],["bo","tibetan"],["tl","tagalog"],["mg","malagasy"],["as","assamese"],["tt","tatar"],["haw","hawaiian"],["ln","lingala"],["ha","hausa"],["ba","bashkir"],["jw","javanese"],["su","sundanese"]],tl=new Map(ta),th=new Map([...ta.map(([e,t])=>[t,e]),["burmese","my"],["valencian","ca"],["flemish","nl"],["haitian","ht"],["letzeburgesch","lb"],["pushto","ps"],["panjabi","pa"],["moldavian","ro"],["moldovan","ro"],["sinhalese","si"],["castilian","es"]]);class tc extends eQ{_decode_asr(e,{return_timestamps:t=!1,return_language:r=!1,time_precision:n=null,force_full_sequences:s=!0}={}){if(null===n)throw Error("Must specify time_precision");let i=null,o="word"===t;function a(){return{language:i,timestamp:[null,null],text:""}}let l=[],h=a(),c=0,u=this.model.convert_tokens_to_ids(["<|notimestamps|>"])[0]+1,d=[],f=[],p=!1,_=null,m=new Set(this.all_special_ids);for(let r of e){let e=r.tokens,s=o?r.token_timestamps:null,g=null,y=u;if("stride"in r){let[t,s,i]=r.stride;if(c-=s,_=t-i,s&&(y=s/n+u),i)for(let t=e.length-1;t>=0;--t){let r=e[t];if(r>=u){if(null!==g&&(r-u)*n<_)break;g=r}}}let w=[],b=[];for(let r=0;r=u){let 
e=(_-u)*n+c,t=et(e,2);if(null!==g&&_>=g)p=!0;else if(p||d.length>0&&_0?(d.push(w),o&&f.push(b)):d.every(e=>0===e.length)&&(h=a(),d=[],w=[],f=[],b=[])}if(d.length>0){if(s&&t)throw Error("Whisper did not predict an ending timestamp, which can happen if audio is cut off in the middle of a word. Also make sure WhisperTimeStampLogitsProcessor was used during generation.");let[e,r]=this.findLongestCommonSequence(d,f),n=this.decode(e);h.text=n,o&&(h.words=this.collateWordTimestamps(e,r,i)),l.push(h)}let g=Object.create(null),y=l.map(e=>e.text).join("");if(t||r){for(let e=0;e0,o=i?[]:null,a=i?t[0]:null;for(let l=1;le===f[t]).length,_=p/e+t;p>1&&_>c&&(c=_,u=[s,i,a,l])}let[f,p,_,m]=u,g=Math.floor((p+f)/2),y=Math.floor((m+_)/2);s.push(...r.slice(0,g)),n=(r=h.slice(y)).length,i&&(o.push(...a.slice(0,g)),a=t[l].slice(y))}return(s.push(...r),i)?(o.push(...a),[s,o]):[s,[]]}collateWordTimestamps(e,t,r){let[n,s,i]=this.combineTokensIntoWords(e,r),o=[];for(let e=0;e=n){let e=(t-n)*r;e=et(e,2),s.push(`<|${e}|>`),s.push([])}else s[s.length-1].push(t);return(s=s.map(e=>"string"==typeof e?e:super.decode(e,t))).join("")}splitTokensOnUnicode(e){let t=this.decode(e,{decode_with_timestamps:!0}),r=[],n=[],s=[],i=[],o=[],a=0;for(let l=0;l=this.model.tokens_to_ids.get("<|endoftext|>"),d=l.startsWith(" "),f=l.trim(),p=a.test(f);if(u||d||p||0===s.length)s.push(l),i.push(h),o.push(c);else{let e=s.length-1;s[e]+=l,i[e].push(...h),o[e].push(...c)}}return[s,i,o]}mergePunctuations(e,t,r,n,s){let i=structuredClone(e),o=structuredClone(t),a=structuredClone(r),l=i.length-2,h=i.length-1;for(;l>=0;)i[l].startsWith(" ")&&n.includes(i[l].trim())?(i[h]=i[l]+i[h],o[h]=f(o[l],o[h]),a[h]=f(a[l],a[h]),i[l]="",o[l]=[],a[l]=[]):h=l,--l;for(l=0,h=1;he),o.filter(e=>e.length>0),a.filter(e=>e.length>0)]}get_decoder_prompt_ids({language:e=null,task:t=null,no_timestamps:r=!0}={}){let n=[];if(e){e=e.toLowerCase();let t=th.get(e);if(void 0===t){if(tl.has(e))t=e;else{let t=2===e.length,r=t?tl.keys():tl.values();throw Error(`Language "${e}" is not supported. Must be one of: ${JSON.stringify(r)}`)}}let r=this.model.tokens_to_ids.get(`<|${t}|>`);if(void 0===r)throw Error(`Unable to find language "${t}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);n.push(r)}else n.push(null);if(t){if("transcribe"!==(t=t.toLowerCase())&&"translate"!==t)throw Error(`Task "${t}" is not supported. Must be one of: ["transcribe", "translate"]`);let e=this.model.tokens_to_ids.get(`<|${t}|>`);if(void 0===e)throw Error(`Unable to find task "${t}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);n.push(e)}else n.push(null);if(r){let e=this.model.tokens_to_ids.get("<|notimestamps|>");if(void 0===e)throw Error('Unable to find "<|notimestamps|>" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.');n.push(e)}return n.map((e,t)=>[t+1,e]).filter(e=>null!==e[1])}}class tu extends eQ{}class td extends eQ{}class tf extends eQ{constructor(e,t){super(e,t),this.languageRegex=/^(>>\w+<<)\s*/g,this.supported_language_codes=this.model.vocab.filter(e=>this.languageRegex.test(e)),console.warn('WARNING: `MarianTokenizer` is not yet supported by Hugging Face\'s "fast" tokenizers library. 
Therefore, you may experience slightly inaccurate results.')}_encode_text(e){if(null===e)return null;let[t,...r]=e.trim().split(this.languageRegex);if(0===r.length)return super._encode_text(t);if(2===r.length){let[e,t]=r;return this.supported_language_codes.includes(e)||console.warn(`Unsupported language code "${e}" detected, which may lead to unexpected behavior. Should be one of: ${JSON.stringify(this.supported_language_codes)}`),f([e],super._encode_text(t))}}}class tp{constructor(){this.root=t_.default()}extend(e){for(let t of e)this.push(t)}push(e){let t=this.root;for(let r of e){let e=t.children.get(r);void 0===e&&(e=t_.default(),t.children.set(r,e)),t=e}t.isLeaf=!0}*commonPrefixSearch(e){let t=this.root,r="";for(let n=0;nr)&&(n=s.clone(),r=t)}if(null===n)return[];e.prev=n,e.backtraceScore=r}++t}let r=[],n=this.beginNodes[e][0],s=n.prev;if(null===s)return[];let i=s.clone();for(;null!==i.prev;){r.push(i.clone());let e=i.clone();i=e.prev.clone()}return r.reverse(),r}piece(e){return this.sentence.slice(e.pos,e.pos+e.length)}tokens(){let e=this.viterbi();return e.map(e=>this.piece(e))}tokenIds(){let e=this.viterbi();return e.map(e=>e.tokenId)}}class tg{constructor(e,t,r,n,s){this.tokenId=e,this.nodeId=t,this.pos=r,this.length=n,this.score=s,this.prev=null,this.backtraceScore=0}clone(){let e=new tg(this.tokenId,this.nodeId,this.pos,this.length,this.score);return e.prev=this.prev,e.backtraceScore=this.backtraceScore,e}}class ty{static TOKENIZER_CLASS_MAPPING={T5Tokenizer:e5,DistilBertTokenizer:e4,BertTokenizer:e1,MobileBertTokenizer:e3,SqueezeBertTokenizer:e6,AlbertTokenizer:e2,GPT2Tokenizer:e8,BartTokenizer:e7,RobertaTokenizer:e9,WhisperTokenizer:tc,CodeGenTokenizer:tu,CLIPTokenizer:td,MarianTokenizer:tf,BloomTokenizer:te,NllbTokenizer:to,LlamaTokenizer:tt,XLMRobertaTokenizer:tr,MPNetTokenizer:tn,FalconTokenizer:ts,GPTNeoXTokenizer:ti,PreTrainedTokenizer:eQ};static async from_pretrained(e,{quantized:t=!0,progress_callback:r=null,config:n=null,cache_dir:s=null,local_files_only:i=!1,revision:o="main"}={}){let[a,l]=await eu(e,{quantized:t,progress_callback:r,config:n,cache_dir:s,local_files_only:i,revision:o}),h=l.tokenizer_class.replace(/Fast$/,""),c=this.TOKENIZER_CLASS_MAPPING[h];return c||(console.warn(`Unknown tokenizer class "${h}", attempting to construct from base class.`),c=eQ),new c(a,l)}}async function tw(e,t){return await H(e,"config.json",!0,t)}class tb{constructor(e){this.model_type=null,this.is_encoder_decoder=!1,Object.assign(this,e)}static async from_pretrained(e,{progress_callback:t=null,config:r=null,cache_dir:n=null,local_files_only:s=!1,revision:i="main"}={}){let o=r??await tw(e,{progress_callback:t,config:r,cache_dir:n,local_files_only:s,revision:i});return new this(o)}}class tk{static async from_pretrained(...e){return tb.from_pretrained(...e)}}class tv extends h{constructor(){super(),this.processors=[]}push(e){this.processors.push(e)}extend(e){this.processors.push(...e)}_call(e,t){for(let r of t)this.processors.forEach(t=>t(e,r))}[Symbol.iterator](){return this.processors.values()}}class tx extends h{_call(e,t){throw Error("`_call` should be implemented in a subclass")}}class tA extends tx{constructor(e){super(),this.force_token_map=Object.fromEntries(e??[])}_call(e,t){let r=this.force_token_map[e.length];return null!=r&&(t.data.fill(-1/0),t.data[r]=0),t}}class tE extends tx{constructor(e){super(),this.bos_token_id=e}_call(e,t){return 1===e.length&&(t.data.fill(-1/0),t.data[this.bos_token_id]=0),t}}class tz extends 
tx{constructor(e,t){super(),this.max_length=e,this.forced_eos_token_id=t}_call(e,t){}}class tT extends tx{constructor(e,t){super(),this.begin_suppress_tokens=e,this.begin_index=t}_call(e,t){if(e.length===this.begin_index)for(let e of this.begin_suppress_tokens)t.data[e]=-1/0;return t}}class tM extends tx{constructor(e){super(),this.eos_token_id=e.eos_token_id,this.no_timestamps_token_id=e.no_timestamps_token_id,this.timestamp_begin=this.no_timestamps_token_id+1,this.begin_index=(e.forced_decoder_ids||[]).length+2,e.forced_decoder_ids.slice(-1)[0][1]===this.no_timestamps_token_id&&(this.begin_index-=1),this.max_initial_timestamp_index=e.max_initial_timestamp_index}_call(e,t){if(t.data[this.no_timestamps_token_id]=-1/0,e.length===this.begin_index-1)return t.data.fill(-1/0),t.data[this.timestamp_begin]=0,t;let r=e.slice(this.begin_index),n=r.length>=1&&r[r.length-1]>=this.timestamp_begin,s=r.length<2||r[r.length-2]>=this.timestamp_begin;if(n&&(s?t.data.subarray(this.timestamp_begin).fill(-1/0):t.data.subarray(0,this.eos_token_id).fill(-1/0)),e.length===this.begin_index&&null!==this.max_initial_timestamp_index){let e=this.timestamp_begin+this.max_initial_timestamp_index;t.data.subarray(e+1).fill(-1/0)}let i=function(e){let t=J(e),r=t.map(e=>Math.log(e));return r}(t.data),o=Math.log(i.subarray(this.timestamp_begin).map(Math.exp).reduce((e,t)=>e+t)),a=Z(i.subarray(0,this.timestamp_begin))[0];return o>a&&t.data.subarray(0,this.timestamp_begin).fill(-1/0),t}}class tI extends tx{constructor(e){super(),this.no_repeat_ngram_size=e}getNgrams(e){let t=e.length,r=[];for(let n=0;n0&&(n=n.map(e=>e/this.generation_config.temperature)),n}randomSelect(e){let t=Math.random()*e.reduce((e,t)=>e+t,0);for(let r=0;r1)return new tL(e);if(e.num_return_sequences>1)throw Error(`num_return_sequences has to be 1 when doing greedy search, but is ${e.num_return_sequences}.`);return new tP(e)}}class tP extends tC{sample(e,t=-1){return[[Z(this.getLogits(e,t))[1],0]]}}class tU extends tC{sample(e,t=-1){let r=e.dims.at(-1);this.generation_config.top_k>0&&(r=Math.min(this.generation_config.top_k,r));let n=this.getLogits(e,t),s=Y(n,r),i=J(s.map(e=>e[1]));return Array.from({length:this.generation_config.num_beams},()=>{let e=this.randomSelect(i);return[s[e][0],Math.log(i[e])]})}}class tL extends tC{sample(e,t=-1){let r=e.dims.at(-1);this.generation_config.top_k>0&&(r=Math.min(this.generation_config.top_k,r));let n=this.getLogits(e,t),s=Y(n,r),i=J(s.map(e=>e[1]));return Array.from({length:this.generation_config.num_beams},(e,t)=>[s[t][0],Math.log(i[t])])}}let{InferenceSession:tO,Tensor:tR}=n;class tj{}class tN extends tj{}class t$ extends tj{}class tF extends t${}class tG extends tj{}let tq=new Map,tD=new Map;async function tW(e,t){return tq.get(e.constructor.name)===tG?await t1(e,t):await t0(e,t)}async function tK(e,t,r){let n=`onnx/${t}${r.quantized?"_quantized":""}.onnx`,s=await K(e,n,!0,r);try{return await tO.create(s,{executionProviders:E})}catch(e){if(1===E.length&&"wasm"===E[0])throw e;return console.warn(e),console.warn("Something went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback. "),await tO.create(s,{executionProviders:["wasm"]})}}async function tH(e,t){let r={},n=[];for(let s of e.inputNames)void 0===t[s]?n.push(s):r[s]=t[s];if(n.length>0)throw Error(`An error occurred during model execution: "Missing the following inputs: ${n.join(", ")}.`);let s=Object.keys(t).length,i=e.inputNames.length;return s>i&&console.warn(`WARNING: Too many inputs were provided (${s} > ${i}). 
The following inputs will be ignored: "${Object.keys(t).filter(t=>!e.inputNames.includes(t)).join(", ")}".`),r}async function tX(e,t){let r=await tH(e,t);try{let t=await e.run(r);return t=function e(t){for(let r in t)t[r]instanceof tR?t[r]=new en(t[r]):"object"==typeof t[r]&&e(t[r]);return t}(t)}catch(e){throw console.error(`An error occurred during model execution: "${e}".`),console.error("Inputs given to model:",r),e}}function tV(e,t){let r=e.config.pad_token_id??null,n=e.config.eos_token_id??null;u(n)&&(n=[n]);let s=-1!==t.indexOf(r),i=null===n||!n.includes(r);if(!s||!i)return new en("int64",new BigInt64Array(t.data.length).fill(1n),t.dims);{let e=BigInt64Array.from(t.data.map(e=>e!=r));return new en("int64",e,t.dims)}}function tJ(e){return new en("bool",[e],[1])}async function tY(e,t,{add_decoder_pkv:r=!0}={}){let{encoder_outputs:n,past_key_values:s}=t;n||(n=(await t0(e,t)).last_hidden_state);let i={input_ids:t.decoder_input_ids,encoder_hidden_states:n,use_cache_branch:tJ(!!s)};e.decoder_merged_session.inputNames.includes("encoder_attention_mask")&&(i.encoder_attention_mask=t.attention_mask),e.addPastKeyValues(i,s,r);let o=await tX(e.decoder_merged_session,i),a=o.logits;s=e.getPastKeyValues(o,s);let l=e.getAttentions(o);return new r$({logits:a,past_key_values:s,encoder_outputs:n,...l})}function tZ(e,t,r,n=!0){let s=[],i=0,o=e.config.decoder_start_token_id;for(let r of(Array.isArray(o)||(o=[o]),t)){r.dims=[1,...r.dims];let t={inputs:r,encoder_outputs:null,prev_model_outputs:null,output_token_ids:o,done:!1,score:0,id:i++};n&&(t.attention_mask=tV(e,r)),s.push(t)}return s}async function tQ(e,t,{input_name:r="input_ids"}={}){let n={[r]:t.inputs,decoder_input_ids:function(e){if(e instanceof en)return e;if(0===e.length)throw Error("items must be non-empty");if(!Array.isArray(e[0]))return new en("int64",BigInt64Array.from(e.map(e=>BigInt(e))),[1,e.length]);if(e.some(t=>t.length!==e[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' and/or 'truncation=True' to have batched tensors with the same length.");return new en("int64",BigInt64Array.from(e.flat().map(e=>BigInt(e))),[e.length,e[0].length])}(t.output_token_ids.slice(-1)),encoder_outputs:t.encoder_outputs,past_key_values:t.prev_model_outputs?.past_key_values};t.attention_mask&&(n.attention_mask=t.attention_mask);let s=await e.forward(n);return t.prev_model_outputs=s,t.encoder_outputs=s.encoder_outputs,s}async function t0(e,t){let r={};for(let n of e.session.inputNames)r[n]=t[n];return await tX(e.session,r)}async function t1(e,t){let{input_ids:r,past_key_values:n,attention_mask:s}=t,i={input_ids:r,attention_mask:s??tV(e,r),use_cache_branch:tJ(null!==n)};e.addPastKeyValues(i,n);let o=await tX(e.session,i);return{logits:o.logits,past_key_values:n=e.getPastKeyValues(o,n)}}function t2(e,t,r,n){let s=[],i=0;for(let o of t){let t,a=o.tolist().map(Number);o.dims=[1,...o.dims],n?(t=n[i]).dims=[1,...t.dims]:t=tV(e,o);let l={input:o,model_input_ids:o,attention_mask:t,prev_model_outputs:null,output_token_ids:a,num_output_tokens:r,done:!1,score:0,id:i++};s.push(l)}return s}async function t3(e,t){let r=new BigInt64Array(t.output_token_ids.length).fill(1n),n={input_ids:t.model_input_ids,attention_mask:new en("int64",r,[1,r.length]),past_key_values:t.prev_model_outputs?.past_key_values},s=await e.forward(n);return t.prev_model_outputs=s,s}function t6(e,t){e.output_token_ids=[...e.output_token_ids,t],e.model_input_ids=new en("int64",[BigInt(t)],[1,1])}class t4 extends 
h{constructor(e,t){super(),this.config=e,this.session=t}async dispose(){let e=[];for(let t of Object.keys(this)){let r=this[t];r instanceof tO&&e.push(r.handler.dispose())}return await Promise.all(e)}static async from_pretrained(e,{quantized:t=!0,progress_callback:r=null,config:n=null,cache_dir:s=null,local_files_only:i=!1,revision:o="main"}={}){let a,l={quantized:t,progress_callback:r,config:n,cache_dir:s,local_files_only:i,revision:o},h=tq.get(this.name);if(h===tG)a=await Promise.all([tk.from_pretrained(e,l),tK(e,"decoder_model_merged",l)]);else if(h===tF)a=await Promise.all([tk.from_pretrained(e,l),tK(e,"encoder_model",l),tK(e,"decoder_model_merged",l),H(e,"generation_config.json",!1,l)]);else if(h===t$)a=await Promise.all([tk.from_pretrained(e,l),tK(e,"encoder_model",l),tK(e,"decoder_model_merged",l)]);else if(h===tN)a=await Promise.all([tk.from_pretrained(e,l),tK(e,"model",l)]);else throw console.warn("Malformed class definition.",this),Error(`Unable to load model: ${e}. Please report this bug at https://github.com/xenova/transformers.js/issues/new/choose.`);return new this(...a)}async _call(e){return await this.forward(e)}async forward(e){return await tW(this,e)}_get_logits_processor(e,t,r=null){let n=new tv;if(null!==e.repetition_penalty&&1!==e.repetition_penalty&&n.push(new tB(e.repetition_penalty)),null!==e.no_repeat_ngram_size&&e.no_repeat_ngram_size>0&&n.push(new tI(e.no_repeat_ngram_size)),null!==e.forced_bos_token_id&&n.push(new tE(e.forced_bos_token_id)),null!==e.forced_eos_token_id&&n.push(new tz(e.max_length,e.forced_eos_token_id)),null!==e.begin_suppress_tokens){let r=t>1||null===e.forced_bos_token_id?t:t+1;null!==e.forced_decoder_ids&&(r+=e.forced_decoder_ids[e.forced_decoder_ids.length-1][0]),n.push(new tT(e.begin_suppress_tokens,r))}return null!==e.forced_decoder_ids&&n.push(new tA(e.forced_decoder_ids)),null!==r&&n.extend(r),n}_get_generation_config(e){let t=new tS;return"generation_config"in this&&Object.assign(t,this.generation_config),null!==e&&Object.assign(t,e),t}async generate(e,t=null,r=null,{inputs_attention_mask:n=null}={}){let s;if(!(e instanceof en)&&e?.prototype?.__proto__?.constructor?.name!=="TypedArray"&&!Array.isArray(e))throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${e.constructor.name}".`);if(this.config.is_encoder_decoder)s=0;else if(0===(s=e instanceof en?e.dims[0]:e.length))throw Error("Must supply a non-empty array of input token ids.");t=this._get_generation_config(t),r=r??new tv,r=this._get_logits_processor(t,s,r);let i=1,o=i+(t.max_new_tokens??1/0),a=Number.isInteger(t.max_length)&&(t.max_new_tokens??null)===null,l=tC.getSampler(t),h=this.getStartBeams(e,i,n);for(;h.some(e=>!e.done)&&i=t.max_length){n.done=!0,e.push(n);continue}let s=await this.runBeam(n);t.output_attentions&&this.addAttentionsToBeam(n,s),t.output_scores;let i=s.logits.slice(null,-1,null);for(let[t,s]of(r(n.output_token_ids,i),l(i))){let r={...n};this.updateBeam(r,t),r.score+=s,t===this.config.eos_token_id&&(r.done=!0),e.push(r)}}++i,h=(e=this.groupBeams(e).map(e=>e.sort((e,t)=>t.score-e.score).slice(0,t.num_beams))).flat(),t.callback_function&&t.callback_function(h)}let c=this.groupBeams(h),u=e=>c.map(r=>t.num_return_sequences>1?r.slice(0,t.num_return_sequences).map(t=>t[e]):[r[0][e]]).flat(),d=u("output_token_ids");if(!t.return_dict_in_generate)return d;{let 
e=u("decoder_attentions"),t=u("cross_attentions");return{sequences:d,decoder_attentions:e,cross_attentions:t}}}addAttentionsToBeam(e,t){if(this.config.is_encoder_decoder){if(!t.cross_attentions||0===t.cross_attentions.length)throw Error("`output_attentions` is true, but the model did not produce cross-attentions. This is most likely because the model was not exported with `output_attentions=True`.");e.cross_attentions||(e.cross_attentions=[]),e.cross_attentions.push(t.cross_attentions)}if(!t.decoder_attentions||0===t.decoder_attentions.length)throw Error("`output_attentions` is true, but the model did not produce decoder-attentions. This is most likely because the model was not exported with `output_attentions=True`.");e.decoder_attentions||(e.decoder_attentions=[]),e.decoder_attentions.push(t.decoder_attentions)}groupBeams(e){let t=Object.create(null);for(let r of e)void 0===t[r.id]?t[r.id]=[r]:t[r.id].push(r);return Object.values(t)}getPastKeyValues(e,t){let r=Object.create(null);for(let n in e)if(n.startsWith("present")){let s=n.replace("present","past_key_values");t&&n.includes("encoder")?r[s]=t[s]:r[s]=e[n]}return r}getAttentions(e){let t=Object.create(null);for(let r of["cross_attentions","decoder_attentions"]){let n=[];for(let t in e)if(t.startsWith(r)){let r=t.split(".").pop();n[r]=e[t]}t[r]=n}return t}addPastKeyValues(e,t,r=!1){if(t)Object.assign(e,t);else if(r){let t=[1,this.num_encoder_heads,0,this.encoder_dim_kv];for(let r=0;r{let r=Array.from({length:this.config.decoder_layers},(t,r)=>eh(e.map(e=>e[r]),2)),s=function(e,t=0){return eh(e.map(e=>e.unsqueeze(t)),t)}(t.map(([e,t])=>r[e].slice(null,t))),[i,o]=function(e,t=null,r=1,n=!1){if(null===t){let t=e.data.reduce((e,t)=>e+t,0),n=t/e.data.length,s=Math.sqrt(e.data.reduce((e,t)=>e+(t-n)**2,0)/(e.data.length-r)),i=new en(e.type,[n],[]),o=new en(e.type,[s],[]);return[o,i]}t=el(t,e.dims.length);let s=ec(e,t,n),i=e.dims.slice();i[t]=1;let o=new e.data.constructor(e.data.length/e.dims[t]);for(let r=0;r=0;--s){let r=e.dims[s];if(s!==t){let e=o%r;n+=e*a,a*=i[s]}o=Math.floor(o/r)}o[n]+=(e.data[r]-s.data[n])**2}for(let n=0;n0||a>0;){l.push(o-1),h.push(a-1);let e=i[o][a].item();switch(e){case 0:--o,--a;break;case 1:--o;break;case 2:--a;break;default:throw Error(`Internal error in dynamic time warping. Unexpected trace[${o}, ${a}]. 
Please file a bug report.`)}}return l.reverse(),h.reverse(),[l,h]}(t),a=f([1],Array.from({length:n.length-1},(e,t)=>n[t+1]-n[t])).map(e=>!!e),l=[];for(let e=0;e{if(!self.OffscreenCanvas)throw Error("OffscreenCanvas not supported by this browser.");return new self.OffscreenCanvas(e,t)},o=self.createImageBitmap,i=self.ImageData;else if(rW)o=async e=>{let t=await e.metadata(),r=t.channels,{data:n,info:s}=await e.raw().toBuffer({resolveWithObject:!0}),i=new rX(new Uint8ClampedArray(n),s.width,s.height,s.channels);return void 0!==r&&r!==s.channels&&i.convert(r),i};else throw Error("Unable to load image processing library.");let rH={0:"nearest",1:"lanczos",2:"bilinear",3:"bicubic",4:"box",5:"hamming"};class rX{constructor(e,t,r,n){this._update(e,t,r,n)}static async read(e){if(e instanceof rX)return e;if(c(e)||e instanceof URL)return await this.fromURL(e);throw Error(`Unsupported input type: ${typeof e}`)}static async fromURL(e){let t=await G(e),r=await t.blob();return this.fromBlob(r)}static async fromBlob(e){if(rK){let t=await o(e),r=s(t.width,t.height).getContext("2d");return r.drawImage(t,0,0),new this(r.getImageData(0,0,t.width,t.height).data,t.width,t.height,4)}{let t=rW(await e.arrayBuffer());return await o(t)}}grayscale(){if(1===this.channels)return this;let e=new Uint8ClampedArray(this.width*this.height*1);switch(this.channels){case 3:case 4:for(let t=0,r=0;t=0?l=r:c=-r,n>=0?h=n:u=-n,a.drawImage(o,l,h,e,t,c,u,e,t),new rX(a.getImageData(0,0,e,t).data,e,t,4).convert(i)}{let s=rW(this.data,{raw:{width:this.width,height:this.height,channels:this.channels}});if(r>=0&&n>=0)s=s.extract({left:Math.floor(r),top:Math.floor(n),width:e,height:t});else if(r<=0&&n<=0){let i=Math.floor(-n),o=Math.floor(-r);s=s.extend({top:i,left:o,right:e-this.width-o,bottom:t-this.height-i})}else{let i=[0,0],o=0;n<0?(i[0]=Math.floor(-n),i[1]=t-this.height-i[0]):o=Math.floor(n);let a=[0,0],l=0;r<0?(a[0]=Math.floor(-r),a[1]=e-this.width-a[0]):l=Math.floor(r),s=s.extend({top:i[0],bottom:i[1],left:a[0],right:a[1]}).extract({left:l,top:o,width:e,height:t})}return await o(s)}}toCanvas(){let e=this.clone().rgba(),t=s(e.width,e.height),r=new i(e.data,e.width,e.height);return t.getContext("2d").putImageData(r,0,0),t}_update(e,t,r,n=null){return this.data=e,this.width=t,this.height=r,null!==n&&(this.channels=n),this}clone(){return new rX(this.data.slice(),this.width,this.height,this.channels)}convert(e){if(this.channels===e)return this;switch(e){case 1:this.grayscale();break;case 3:this.rgb();break;case 4:this.rgba();break;default:throw Error(`Conversion failed due to unsupported number of channels: ${this.channels}`)}return this}save(e,t="image/png"){if(!O.useFS)throw Error("Unable to save the image because filesystem is disabled in this environment.");let r=this.toCanvas(),n=r.toBuffer(t);p.writeFileSync(e,n)}}async function rV(e,t){let r;if("undefined"==typeof AudioContext)throw Error("Unable to load audio from path/URL since `AudioContext` is not available in your environment. Instead, audio data should be passed directly to the pipeline/processor. 
For more information and some example code, see https://huggingface.co/docs/transformers.js/tutorials/node-audio-processing.");let n=await (await G(e)).arrayBuffer(),s=new AudioContext({sampleRate:t});void 0===t&&console.warn(`No sampling rate provided, using default of ${s.sampleRate}Hz.`);let i=await s.decodeAudioData(n);if(2===i.numberOfChannels){let e=Math.sqrt(2),t=i.getChannelData(0),n=i.getChannelData(1);r=new Float32Array(t.length);for(let s=0;sthis.preprocess(e)));return t.forEach(e=>e.pixel_values.dims=[1,...e.pixel_values.dims]),{pixel_values:eh(t.map(e=>e.pixel_values)),original_sizes:t.map(e=>e.original_size),reshaped_input_sizes:t.map(e=>e.reshaped_input_size)}}}class rZ extends rY{}class rQ extends rY{}class r0 extends rY{async _call(e){let t=await super._call(e),r=[t.pixel_values.dims[0],64,64];return t.pixel_mask=new en("int64",new BigInt64Array(r.reduce((e,t)=>e*t)).fill(1n),r),t}center_to_corners_format([e,t,r,n]){return[e-r/2,t-n/2,e+r/2,t+n/2]}post_process_object_detection(e,t=.5,r=null){let n=e.logits,s=e.pred_boxes,[i,o,a]=n.dims;if(null!==r&&r.length!==i)throw Error("Make sure that you pass in as many target sizes as the batch dimension of the logits");let l=[];for(let e=0;et){let t=u[e].data;t=this.center_to_corners_format(t),null!==i&&(t=t.map((e,t)=>e*i[(t+1)%2])),h.boxes.push(t),h.classes.push(n),h.scores.push(s)}}l.push(h)}return l}remove_low_and_no_objects(e,t,r,n){let s=[],i=[],o=[];for(let a=0;ar&&(s.push(h),i.push(u),o.push(c))}return[s,i,o]}check_segment_validity(e,t,r,n=.5,s=.8){let i=[],o=0,a=0;for(let s=0;s=n&&++a;let l=o>0&&a>0;return l&&(l=o/a>s),[l,i]}compute_segments(e,t,r,n,s,i=null,o=null){let[a,l]=o??e[0].dims,h=new en("int32",new Int32Array(a*l),[a,l]),c=[];if(null!==o)for(let t=0;td[t]&&(u[t]=r,d[t]=e[r].data[t])}let f=0;for(let i=0;iBigInt(Math.round(e)))),i)}}post_process_masks(e,t,r,{mask_threshold:n=0,binarize:s=!0,pad_size:i=null}={}){let o=[],a=[(i=i??this.pad_size).height,i.width];for(let i=0;ie>n),t.dims)),t.dims=[1,...t.dims],u.push(t)}let d=eh(u);o.push(d)}return o}}class r2 extends rJ{constructor(e){super(e),this.config.mel_filters??=function(e,t,r=128){r=Math.floor(r);let n=Math.floor(1+t/2),s=Array(r),i=function(e,t=1){if(!Number.isInteger(e))throw TypeError(`n should be an integer, but ${e} given.`);let r=1/(e*t),n=Math.floor(e/2)+1,s=Array(n);for(let e=0;e=h?l[e]=1e3*Math.exp(c*(t-h)):l[e]=0+a*t,u[e]=i.map(t=>l[e]-t)}let d=l.slice(1).map((e,t)=>1/(e-l[t]));for(let e=0;e>1;++e){let t=(e+1-r)**2/2,n=Math.sqrt(m**2+g**2)**t,s=t*Math.atan2(g,m),i=2*e;l[i]=n*Math.cos(s),l[i+1]=n*Math.sin(s),h[i]=l[i],h[i+1]=-l[i+1]}let y=l.subarray(n,s),w=new Q(i>>1);w.transform(d,h);for(let r=0;r>1,i=s[n]*t[n];c[e]=i*y[e],c[r]=i*y[r]}w.transform(f,c);for(let e=0;en?i-n:0,r=i>1,l=new Float32Array(o*a);for(let e=0;ethis.config.n_samples&&console.warn("Attempting to extract features for audio longer than 30 seconds. 
If using a pipeline to extract transcript from a long audio clip, remember to specify `chunk_length_s` and/or `stride_length_s`.");let t=e.slice(0,this.config.n_samples),r=this._extract_fbank_features(t);return{input_features:new en("float32",r.data,[1,...r.dims])}}}class r3 extends h{constructor(e){super(),this.feature_extractor=e}async _call(e){return await this.feature_extractor(e)}}class r6 extends r3{async _call(e,t){return await this.feature_extractor(e,t)}post_process_masks(...e){return this.feature_extractor.post_process_masks(...e)}}class r4 extends r3{async _call(e){return await this.feature_extractor(e)}}class r5{static FEATURE_EXTRACTOR_CLASS_MAPPING={WhisperFeatureExtractor:r2,ViTFeatureExtractor:rZ,MobileViTFeatureExtractor:rQ,DetrFeatureExtractor:r0,SamImageProcessor:r1};static PROCESSOR_CLASS_MAPPING={WhisperProcessor:r4,SamProcessor:r6};static async from_pretrained(e,{progress_callback:t=null,config:r=null,cache_dir:n=null,local_files_only:s=!1,revision:i="main"}={}){let o=r??await H(e,"preprocessor_config.json",!0,{progress_callback:t,config:r,cache_dir:n,local_files_only:s,revision:i}),a=o.feature_extractor_type??o.image_processor_type,l=this.FEATURE_EXTRACTOR_CLASS_MAPPING[a];if(!l){if(void 0!==o.size)console.warn("Feature extractor type not specified, assuming ImageFeatureExtractor due to size parameter in config."),l=rY;else throw Error(`Unknown Feature Extractor type: ${o.feature_extractor_type}`)}let h=this.PROCESSOR_CLASS_MAPPING[o.processor_class]??r3,c=new l(o);return new h(c)}}async function r8(e){return Array.isArray(e)||(e=[e]),e=await Promise.all(e.map(e=>rX.read(e)))}class r7 extends h{constructor(e,t,r){super(),this.task=e,this.tokenizer=t,this.model=r}async dispose(){await this.model.dispose()}async _call(e){let t=this.tokenizer(e,{padding:!0,truncation:!0}),r=await this.model(t);return[t,r]}}class r9 extends r7{_key=null;async _call(e,t={}){let r;Array.isArray(e)||(e=[e]),this.model.config.prefix&&(e=e.map(e=>this.model.config.prefix+e));let n=this.model.config.task_specific_params;n&&n[this.task]&&n[this.task].prefix&&(e=e.map(e=>n[this.task].prefix+e));let s={padding:!0,truncation:!0};r=this instanceof ne&&"_build_translation_inputs"in this.tokenizer?this.tokenizer._build_translation_inputs(e,s,t).input_ids:this.tokenizer(e,s).input_ids;let i=await this.model.generate(r,t),o=this.tokenizer.batch_decode(i,{skip_special_tokens:!0});return null!==this._key&&(o=o.map(e=>null===this._key?e:{[this._key]:e})),o}}class ne extends r9{_key="translation_text"}let nt={"text-classification":{tokenizer:ty,pipeline:class extends r7{async _call(e,{topk:t=1}={}){let[r,n]=await super._call(e),s=this.model.config.id2label,i=[];for(let e of n.logits){let r=Y(J(e.data),t).map(function(e){return{label:s[e[0]],score:e[1]}});1===t?i.push(...r):i.push(r)}return Array.isArray(e)||1===t?i:i[0]}},model:rj,default:{model:"Xenova/distilbert-base-uncased-finetuned-sst-2-english"},type:"text"},"token-classification":{tokenizer:ty,pipeline:class extends r7{async _call(e,{ignore_labels:t=["O"]}={}){let r=Array.isArray(e);r||(e=[e]);let n=this.tokenizer,[s,i]=await super._call(e),o=i.logits,a=this.model.config.id2label,l=[];for(let e=0;ee.flatMap(e=>t.map(t=>[e,t])))})(Array.from(J(s.start_logits[e].data)).map((e,t)=>[e,t]).filter(e=>e[1]>o),Array.from(J(s.end_logits[e].data)).map((e,t)=>[e,t]).filter(e=>e[1]>o)).filter(e=>e[0][1]<=e[1][1]).map(e=>[e[0][1],e[1][1],e[0][0]*e[1][0]]).sort((e,t)=>t[2]-e[2]);for(let e=0;e{let t=[...o];return 
t[a]=e[0],{score:e[1],token:e[0],token_str:s.model.vocab[e[0]],sequence:s.decode(t,{skip_special_tokens:!0})}}))}return Array.isArray(e)?i:i[0]}},model:class extends rv{static MODEL_CLASS_MAPPINGS=[rB]},default:{model:"Xenova/bert-base-uncased"},type:"text"},summarization:{tokenizer:ty,pipeline:class extends r9{_key="summary_text"},model:rN,default:{model:"Xenova/distilbart-cnn-6-6"},type:"text"},translation:{tokenizer:ty,pipeline:ne,model:rN,default:{model:"Xenova/t5-small"},type:"text"},"text2text-generation":{tokenizer:ty,pipeline:r9,model:rN,default:{model:"Xenova/flan-t5-small"},type:"text"},"text-generation":{tokenizer:ty,pipeline:class extends r7{async _call(e,t={}){let r="string"==typeof e||e instanceof String;r&&(e=[e]),this.tokenizer.padding_side="left";let n=this.tokenizer(e,{padding:!0,truncation:!0}),s=n.input_ids,i=n.attention_mask,o=await this.model.generate(s,t,null,{inputs_attention_mask:i}),a=this.tokenizer.batch_decode(o,{skip_special_tokens:!0}),l=Array.from({length:e.length},e=>[]);for(let t=0;t[e.toLowerCase(),t])),this.entailment_id=this.label2id.entailment,void 0===this.entailment_id&&(console.warn("Could not find 'entailment' in label2id mapping. Using 2 as entailment_id."),this.entailment_id=2),this.contradiction_id=this.label2id.contradiction,void 0===this.contradiction_id&&(console.warn("Could not find 'contradiction' in label2id mapping. Using 0 as contradiction_id."),this.contradiction_id=0)}async _call(e,t,{hypothesis_template:r="This example is {}.",multi_label:n=!1}={}){let s=Array.isArray(e);s||(e=[e]),Array.isArray(t)||(t=[t]);let i=t.map(e=>r.replace("{}",e)),o=n||1===t.length,a=[];for(let r of e){let e=[];for(let t of i){let n=this.tokenizer(r,{text_pair:t,padding:!0,truncation:!0}),s=await this.model(n);o?e.push([s.logits.data[this.contradiction_id],s.logits.data[this.entailment_id]]):e.push(s.logits.data[this.entailment_id])}let n=(o?e.map(e=>J(e)[1]):J(e)).map((e,t)=>[e,t]).sort((e,t)=>t[0]-e[0]);a.push({sequence:r,labels:n.map(e=>t[e[1]]),scores:n.map(e=>e[0])})}return s?a:a[0]}},model:rj,default:{model:"Xenova/distilbert-base-uncased-mnli"},type:"text"},"automatic-speech-recognition":{tokenizer:ty,pipeline:class extends r7{constructor(e,t,r,n){super(e,t,r),this.processor=n}async _preprocess(e,t){return c(e)&&(e=await rV(e,t)),e}async _call(e,t={}){let r=t.return_timestamps??!1,n=t.chunk_length_s??0,s=t.stride_length_s??null,i=t.chunk_callback??null,o=t.force_full_sequences??!1;"word"===r&&(t.return_token_timestamps=!0);let a=d(t,"language",null),l=d(t,"task",null);if(a||l||r){if(t.forced_decoder_ids)throw Error("Cannot specify `language`/`task`/`return_timestamps` and `forced_decoder_ids` at the same time.");let e=this.tokenizer.get_decoder_prompt_ids({language:a,task:l,no_timestamps:!r});e.length>0&&(t.forced_decoder_ids=e)}let h=!Array.isArray(e);h&&(e=[e]);let c=this.processor.feature_extractor.config.sampling_rate,u=this.processor.feature_extractor.config.chunk_length/this.model.config.max_source_positions,f=[];for(let a of e){a=await this._preprocess(a,c);let e=[];if(n>0){if(null===s)s=n/6;else if(n<=s)throw Error("`chunk_length_s` must be larger than `stride_length_s`.");let t=c*n,r=c*s,i=t-2*r,o=0;for(;o=a.length;e.push({stride:[n.length,l?0:r,h?0:r],input_features:s.input_features,is_last:h}),o+=i}}else e=[{stride:[a.length,0,0],input_features:(await this.processor(a)).input_features,is_last:!0}];for(let n of e){let e=await 
this.model.generate(n.input_features,t);"word"===r?(n.tokens=e.sequences[0],n.token_timestamps=e.token_timestamps.tolist()[0].map(e=>et(e,2))):n.tokens=e[0],n.stride=n.stride.map(e=>e/c),null!==i&&i(n)}let[l,h]=this.tokenizer._decode_asr(e,{time_precision:u,return_timestamps:r,force_full_sequences:o});f.push({text:l,...h})}return h?f[0]:f}},model:rN,processor:r5,default:{model:"Xenova/whisper-tiny.en"},type:"multimodal"},"image-to-text":{tokenizer:ty,pipeline:class extends r7{constructor(e,t,r,n){super(e,t,r),this.processor=n}async _call(e,t={}){let r=Array.isArray(e);e=await r8(e);let{pixel_values:n}=await this.processor(e),s=[];for(let e of n){e.dims=[1,...e.dims];let r=await this.model.generate(e,t),n=this.tokenizer.batch_decode(r,{skip_special_tokens:!0}).map(e=>({generated_text:e.trim()}));s.push(n)}return r?s:s[0]}},model:class extends rv{static MODEL_CLASS_MAPPINGS=[rC]},processor:r5,default:{model:"Xenova/vit-gpt2-image-captioning"},type:"multimodal"},"image-classification":{pipeline:class extends r7{constructor(e,t,r){super(e,null,t),this.processor=r}async _call(e,{topk:t=1}={}){let r=Array.isArray(e);e=await r8(e);let{pixel_values:n}=await this.processor(e),s=await this.model({pixel_values:n}),i=this.model.config.id2label,o=[];for(let e of s.logits){let r=Y(J(e.data),t).map(function(e){return{label:i[e[0]],score:e[1]}});1===t?o.push(...r):o.push(r)}return r||1===t?o:o[0]}},model:class extends rv{static MODEL_CLASS_MAPPINGS=[rP]},processor:r5,default:{model:"Xenova/vit-base-patch16-224"},type:"multimodal"},"image-segmentation":{pipeline:class extends r7{constructor(e,t,r){super(e,null,t),this.processor=r,this.subtasks_mapping={panoptic:"post_process_panoptic_segmentation",instance:"post_process_instance_segmentation",semantic:"post_process_semantic_segmentation"}}async _call(e,{threshold:t=.5,mask_threshold:r=.5,overlap_mask_area_threshold:n=.8,label_ids_to_fuse:s=null,target_sizes:i=null,subtask:o=null}={}){if(Array.isArray(e)&&1!==e.length)throw Error("Image segmentation pipeline currently only supports a batch size of 1.");let a=(e=await r8(e)).map(e=>[e.height,e.width]),{pixel_values:l,pixel_mask:h}=await this.processor(e),c=await this.model({pixel_values:l,pixel_mask:h}),u=null;if(null!==o)u=this.subtasks_mapping[o];else for(let[e,t]of Object.entries(this.subtasks_mapping))if(t in this.processor.feature_extractor){u=this.processor.feature_extractor[t].bind(this.processor.feature_extractor),o=e;break}let d=[];if("panoptic"===o||"instance"===o){let e=u(c,t,r,n,s,i??a)[0],o=e.segmentation,l=this.model.config.id2label;for(let t of e.segments_info){let e=new Uint8ClampedArray(o.data.length);for(let r=0;rr.replace("{}",e)),i=this.tokenizer(s,{padding:!0,truncation:!0}),{pixel_values:o}=await this.processor(e),a=await this.model({...i,pixel_values:o}),l=[];for(let e of a.logits_per_image){let r=J(e.data);l.push([...r].map((e,r)=>({score:e,label:t[r]})))}return n?l:l[0]}},model:rR,processor:r5,default:{model:"Xenova/clip-vit-base-patch32"},type:"multimodal"},"object-detection":{pipeline:class extends r7{constructor(e,t,r){super(e,null,t),this.processor=r}async _call(e,{threshold:t=.9,percentage:r=!1}={}){let n=Array.isArray(e);if(n&&1!==e.length)throw Error("Object detection pipeline currently only supports a batch size of 1.");e=await r8(e);let s=r?null:e.map(e=>[e.height,e.width]),{pixel_values:i,pixel_mask:o}=await this.processor(e),a=await 
this.model({pixel_values:i,pixel_mask:o}),l=this.processor.feature_extractor.post_process_object_detection(a,t,s),h=this.model.config.id2label,c=l.map(e=>e.boxes.map((t,n)=>({score:e.scores[n],label:h[e.classes[n]],box:this._get_bounding_box(t,!r)})));return n?c:c[0]}_get_bounding_box(e,t){t&&(e=e.map(e=>0|e));let[r,n,s,i]=e;return{xmin:r,ymin:n,xmax:s,ymax:i}}},model:class extends rv{static MODEL_CLASS_MAPPINGS=[rU]},processor:r5,default:{model:"Xenova/detr-resnet-50"},type:"multimodal"},"feature-extraction":{tokenizer:ty,pipeline:class extends r7{async _call(e,{pooling:t="none",normalize:r=!1}={}){let[n,s]=await super._call(e),i=s.last_hidden_state??s.logits;if("none"===t);else if("mean"===t)i=function(e,t){let r=[e.dims[0],e.dims[2]],n=new e.data.constructor(r[0]*r[1]),[s,i,o]=e.dims,a=0;for(let r=0;r - -# {{ model_name | default("Diffusion Model") }} - -## Model description - -This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library -on the `{{ dataset_name }}` dataset. - -## Intended uses & limitations - -#### How to use - -```python -# TODO: add an example code snippet for running this diffusion pipeline -``` - -#### Limitations and bias - -[TODO: provide examples of latent issues and potential remediations] - -## Training data - -[TODO: describe the data used to train the model] - -### Training hyperparameters - -The following hyperparameters were used during training: -- learning_rate: {{ learning_rate }} -- train_batch_size: {{ train_batch_size }} -- eval_batch_size: {{ eval_batch_size }} -- gradient_accumulation_steps: {{ gradient_accumulation_steps }} -- optimizer: AdamW with betas=({{ adam_beta1 }}, {{ adam_beta2 }}), weight_decay={{ adam_weight_decay }} and epsilon={{ adam_epsilon }} -- lr_scheduler: {{ lr_scheduler }} -- lr_warmup_steps: {{ lr_warmup_steps }} -- ema_inv_gamma: {{ ema_inv_gamma }} -- ema_inv_gamma: {{ ema_power }} -- ema_inv_gamma: {{ ema_max_decay }} -- mixed_precision: {{ mixed_precision }} - -### Training results - -📈 [TensorBoard logs](https://huggingface.co/{{ repo_name }}/tensorboard?#scalars) - - diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/README.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/README.md deleted file mode 100644 index 9fb3e4f7afec17137c95c78be6ef06d520ec8032..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/README.md +++ /dev/null @@ -1,9 +0,0 @@ - - -### Common Datasets - -The dataset implemented here do not need to load the data into the final format. -It should provide the minimal data structure needed to use the dataset, so it can be very efficient. - -For example, for an image dataset, just provide the file names and labels, but don't read the images. -Let the downstream decide how to read. diff --git a/spaces/Yusin/ChatGPT-Speech/text/__init__.py b/spaces/Yusin/ChatGPT-Speech/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/Yusin/ChatGPT-Speech/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/abby711/FaceRestoration/experiments/pretrained_models/README.md b/spaces/abby711/FaceRestoration/experiments/pretrained_models/README.md deleted file mode 100644 index 3401a5ca9b393e0033f58c5af8905961565826d9..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/experiments/pretrained_models/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# Pre-trained Models and Other Data - -Download pre-trained models and other data. Put them in this folder. - -1. [Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth) -1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth) -1. [A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py deleted file mode 100644 index be777123a886503172a95fe0719e956a147bbd68..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py +++ /dev/null @@ -1,48 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='EncHead', - in_channels=[512, 1024, 2048], - in_index=(1, 2, 3), - channels=512, - num_codes=32, - use_se_loss=True, - add_lateral=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_se_decode=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.2)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/evaluation.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/evaluation.py deleted file mode 100644 index 
4d00999ce5665c53bded8de9e084943eee2d230d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/evaluation.py +++ /dev/null @@ -1,509 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import warnings -from math import inf - -import torch.distributed as dist -from torch.nn.modules.batchnorm import _BatchNorm -from torch.utils.data import DataLoader - -from annotator.uniformer.mmcv.fileio import FileClient -from annotator.uniformer.mmcv.utils import is_seq_of -from .hook import Hook -from .logger import LoggerHook - - -class EvalHook(Hook): - """Non-Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in non-distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader, and return the test results. If ``None``, the default - test function ``mmcv.engine.single_gpu_test`` will be used. - (default: ``None``) - greater_keys (List[str] | None, optional): Metric keys that will be - inferred by 'greater' comparison rule. If ``None``, - _default_greater_keys will be used. (default: ``None``) - less_keys (List[str] | None, optional): Metric keys that will be - inferred by 'less' comparison rule. If ``None``, _default_less_keys - will be used. (default: ``None``) - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - `New in version 1.3.16.` - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - `New in version 1.3.16.` - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. - - Notes: - If new arguments are added for EvalHook, tools/test.py, - tools/eval_metric.py may be affected. 
- """ - - # Since the key for determine greater or less is related to the downstream - # tasks, downstream repos may need to overwrite the following inner - # variable accordingly. - - rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y} - init_value_map = {'greater': -inf, 'less': inf} - _default_greater_keys = [ - 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU', - 'mAcc', 'aAcc' - ] - _default_less_keys = ['loss'] - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - out_dir=None, - file_client_args=None, - **eval_kwargs): - if not isinstance(dataloader, DataLoader): - raise TypeError(f'dataloader must be a pytorch DataLoader, ' - f'but got {type(dataloader)}') - - if interval <= 0: - raise ValueError(f'interval must be a positive number, ' - f'but got {interval}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean' - - if start is not None and start < 0: - raise ValueError(f'The evaluation start epoch {start} is smaller ' - f'than 0') - - self.dataloader = dataloader - self.interval = interval - self.start = start - self.by_epoch = by_epoch - - assert isinstance(save_best, str) or save_best is None, \ - '""save_best"" should be a str or None ' \ - f'rather than {type(save_best)}' - self.save_best = save_best - self.eval_kwargs = eval_kwargs - self.initial_flag = True - - if test_fn is None: - from annotator.uniformer.mmcv.engine import single_gpu_test - self.test_fn = single_gpu_test - else: - self.test_fn = test_fn - - if greater_keys is None: - self.greater_keys = self._default_greater_keys - else: - if not isinstance(greater_keys, (list, tuple)): - greater_keys = (greater_keys, ) - assert is_seq_of(greater_keys, str) - self.greater_keys = greater_keys - - if less_keys is None: - self.less_keys = self._default_less_keys - else: - if not isinstance(less_keys, (list, tuple)): - less_keys = (less_keys, ) - assert is_seq_of(less_keys, str) - self.less_keys = less_keys - - if self.save_best is not None: - self.best_ckpt_path = None - self._init_rule(rule, self.save_best) - - self.out_dir = out_dir - self.file_client_args = file_client_args - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific (note that the key indicator matching - is case-insensitive): - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if the key indicator is equal to the substring in any one item - in ``self.greater_keys``, the rule will be specified as 'greater'. - 4. Or if the key indicator is equal to the substring in any one item - in ``self.less_keys``, the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. 
- """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f'rule must be greater, less or None, ' - f'but got {rule}.') - - if rule is None: - if key_indicator != 'auto': - # `_lc` here means we use the lower case of keys for - # case-insensitive matching - key_indicator_lc = key_indicator.lower() - greater_keys = [key.lower() for key in self.greater_keys] - less_keys = [key.lower() for key in self.less_keys] - - if key_indicator_lc in greater_keys: - rule = 'greater' - elif key_indicator_lc in less_keys: - rule = 'less' - elif any(key in key_indicator_lc for key in greater_keys): - rule = 'greater' - elif any(key in key_indicator_lc for key in less_keys): - rule = 'less' - else: - raise ValueError(f'Cannot infer the rule for key ' - f'{key_indicator}, thus a specific rule ' - f'must be specified.') - self.rule = rule - self.key_indicator = key_indicator - if self.rule is not None: - self.compare_func = self.rule_map[self.rule] - - def before_run(self, runner): - if not self.out_dir: - self.out_dir = runner.work_dir - - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - - # if `self.out_dir` is not equal to `runner.work_dir`, it means that - # `self.out_dir` is set so the final `self.out_dir` is the - # concatenation of `self.out_dir` and the last level directory of - # `runner.work_dir` - if self.out_dir != runner.work_dir: - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'The best checkpoint will be saved to {self.out_dir} by ' - f'{self.file_client.name}')) - - if self.save_best is not None: - if runner.meta is None: - warnings.warn('runner.meta is None. Creating an empty one.') - runner.meta = dict() - runner.meta.setdefault('hook_msgs', dict()) - self.best_ckpt_path = runner.meta['hook_msgs'].get( - 'best_ckpt', None) - - def before_train_iter(self, runner): - """Evaluate the model only at the start of training by iteration.""" - if self.by_epoch or not self.initial_flag: - return - if self.start is not None and runner.iter >= self.start: - self.after_train_iter(runner) - self.initial_flag = False - - def before_train_epoch(self, runner): - """Evaluate the model only at the start of training by epoch.""" - if not (self.by_epoch and self.initial_flag): - return - if self.start is not None and runner.epoch >= self.start: - self.after_train_epoch(runner) - self.initial_flag = False - - def after_train_iter(self, runner): - """Called after every training iter to evaluate the results.""" - if not self.by_epoch and self._should_evaluate(runner): - # Because the priority of EvalHook is higher than LoggerHook, the - # training log and the evaluating log are mixed. Therefore, - # we need to dump the training log and clear it before evaluating - # log is generated. In addition, this problem will only appear in - # `IterBasedRunner` whose `self.by_epoch` is False, because - # `EpochBasedRunner` whose `self.by_epoch` is True calls - # `_do_evaluate` in `after_train_epoch` stage, and at this stage - # the training log has been printed, so it will not cause any - # problem. 
more details at - # https://github.com/open-mmlab/mmsegmentation/issues/694 - for hook in runner._hooks: - if isinstance(hook, LoggerHook): - hook.after_train_iter(runner) - runner.log_buffer.clear() - - self._do_evaluate(runner) - - def after_train_epoch(self, runner): - """Called after every training epoch to evaluate the results.""" - if self.by_epoch and self._should_evaluate(runner): - self._do_evaluate(runner) - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - results = self.test_fn(runner.model, self.dataloader) - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to save - # the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) - - def _should_evaluate(self, runner): - """Judge whether to perform evaluation. - - Here is the rule to judge whether to perform evaluation: - 1. It will not perform evaluation during the epoch/iteration interval, - which is determined by ``self.interval``. - 2. It will not perform evaluation if the start time is larger than - current time. - 3. It will not perform evaluation when current time is larger than - the start time but during epoch/iteration interval. - - Returns: - bool: The flag indicating whether to perform evaluation. - """ - if self.by_epoch: - current = runner.epoch - check_time = self.every_n_epochs - else: - current = runner.iter - check_time = self.every_n_iters - - if self.start is None: - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - elif (current + 1) < self.start: - # No evaluation if start is larger than the current time. - return False - else: - # Evaluation only at epochs/iters 3, 5, 7... - # if start==3 and interval==2 - if (current + 1 - self.start) % self.interval: - return False - return True - - def _save_ckpt(self, runner, key_score): - """Save the best checkpoint. - - It will compare the score according to the compare function, write - related information (best score, best checkpoint path) and save the - best checkpoint into ``work_dir``. - """ - if self.by_epoch: - current = f'epoch_{runner.epoch + 1}' - cur_type, cur_time = 'epoch', runner.epoch + 1 - else: - current = f'iter_{runner.iter + 1}' - cur_type, cur_time = 'iter', runner.iter + 1 - - best_score = runner.meta['hook_msgs'].get( - 'best_score', self.init_value_map[self.rule]) - if self.compare_func(key_score, best_score): - best_score = key_score - runner.meta['hook_msgs']['best_score'] = best_score - - if self.best_ckpt_path and self.file_client.isfile( - self.best_ckpt_path): - self.file_client.remove(self.best_ckpt_path) - runner.logger.info( - (f'The previous best checkpoint {self.best_ckpt_path} was ' - 'removed')) - - best_ckpt_name = f'best_{self.key_indicator}_{current}.pth' - self.best_ckpt_path = self.file_client.join_path( - self.out_dir, best_ckpt_name) - runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path - - runner.save_checkpoint( - self.out_dir, best_ckpt_name, create_symlink=False) - runner.logger.info( - f'Now best checkpoint is saved as {best_ckpt_name}.') - runner.logger.info( - f'Best {self.key_indicator} is {best_score:0.4f} ' - f'at {cur_time} {cur_type}.') - - def evaluate(self, runner, results): - """Evaluate the results. - - Args: - runner (:obj:`mmcv.Runner`): The underlined training runner. - results (list): Output results. 
- """ - eval_res = self.dataloader.dataset.evaluate( - results, logger=runner.logger, **self.eval_kwargs) - - for name, val in eval_res.items(): - runner.log_buffer.output[name] = val - runner.log_buffer.ready = True - - if self.save_best is not None: - # If the performance of model is pool, the `eval_res` may be an - # empty dict and it will raise exception when `self.save_best` is - # not None. More details at - # https://github.com/open-mmlab/mmdetection/issues/6265. - if not eval_res: - warnings.warn( - 'Since `eval_res` is an empty dict, the behavior to save ' - 'the best checkpoint will be skipped in this evaluation.') - return None - - if self.key_indicator == 'auto': - # infer from eval_results - self._init_rule(self.rule, list(eval_res.keys())[0]) - return eval_res[self.key_indicator] - - return None - - -class DistEvalHook(EvalHook): - """Distributed evaluation hook. - - This hook will regularly perform evaluation in a given interval when - performing in distributed environment. - - Args: - dataloader (DataLoader): A PyTorch dataloader, whose dataset has - implemented ``evaluate`` function. - start (int | None, optional): Evaluation starting epoch. It enables - evaluation before the training starts if ``start`` <= the resuming - epoch. If None, whether to evaluate is merely decided by - ``interval``. Default: None. - interval (int): Evaluation interval. Default: 1. - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - default: True. - save_best (str, optional): If a metric is specified, it would measure - the best checkpoint during evaluation. The information about best - checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep - best score value and best checkpoint path, which will be also - loaded when resume checkpoint. Options are the evaluation metrics - on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox - detection and instance segmentation. ``AR@100`` for proposal - recall. If ``save_best`` is ``auto``, the first key of the returned - ``OrderedDict`` result will be used. Default: None. - rule (str | None, optional): Comparison rule for best score. If set to - None, it will infer a reasonable rule. Keys such as 'acc', 'top' - .etc will be inferred by 'greater' rule. Keys contain 'loss' will - be inferred by 'less' rule. Options are 'greater', 'less', None. - Default: None. - test_fn (callable, optional): test a model with samples from a - dataloader in a multi-gpu manner, and return the test results. If - ``None``, the default test function ``mmcv.engine.multi_gpu_test`` - will be used. (default: ``None``) - tmpdir (str | None): Temporary directory to save the results of all - processes. Default: None. - gpu_collect (bool): Whether to use gpu or cpu to collect results. - Default: False. - broadcast_bn_buffer (bool): Whether to broadcast the - buffer(running_mean and running_var) of rank 0 to other rank - before evaluation. Default: True. - out_dir (str, optional): The root directory to save checkpoints. If not - specified, `runner.work_dir` will be used by default. If specified, - the `out_dir` will be the concatenation of `out_dir` and the last - level directory of `runner.work_dir`. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. Default: None. - **eval_kwargs: Evaluation arguments fed into the evaluate function of - the dataset. 
- """ - - def __init__(self, - dataloader, - start=None, - interval=1, - by_epoch=True, - save_best=None, - rule=None, - test_fn=None, - greater_keys=None, - less_keys=None, - broadcast_bn_buffer=True, - tmpdir=None, - gpu_collect=False, - out_dir=None, - file_client_args=None, - **eval_kwargs): - - if test_fn is None: - from annotator.uniformer.mmcv.engine import multi_gpu_test - test_fn = multi_gpu_test - - super().__init__( - dataloader, - start=start, - interval=interval, - by_epoch=by_epoch, - save_best=save_best, - rule=rule, - test_fn=test_fn, - greater_keys=greater_keys, - less_keys=less_keys, - out_dir=out_dir, - file_client_args=file_client_args, - **eval_kwargs) - - self.broadcast_bn_buffer = broadcast_bn_buffer - self.tmpdir = tmpdir - self.gpu_collect = gpu_collect - - def _do_evaluate(self, runner): - """perform evaluation and save ckpt.""" - # Synchronization of BatchNorm's buffer (running_mean - # and running_var) is not supported in the DDP of pytorch, - # which may cause the inconsistent performance of models in - # different ranks, so we broadcast BatchNorm's buffers - # of rank 0 to other ranks to avoid this. - if self.broadcast_bn_buffer: - model = runner.model - for name, module in model.named_modules(): - if isinstance(module, - _BatchNorm) and module.track_running_stats: - dist.broadcast(module.running_var, 0) - dist.broadcast(module.running_mean, 0) - - tmpdir = self.tmpdir - if tmpdir is None: - tmpdir = osp.join(runner.work_dir, '.eval_hook') - - results = self.test_fn( - runner.model, - self.dataloader, - tmpdir=tmpdir, - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - runner.log_buffer.output['eval_iter_num'] = len(self.dataloader) - key_score = self.evaluate(runner, results) - # the key_score may be `None` so it needs to skip the action to - # save the best checkpoint - if self.save_best and key_score: - self._save_ckpt(runner, key_score) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/profiler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/profiler.py deleted file mode 100644 index b70236997eec59c2209ef351ae38863b4112d0ec..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. - If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. - if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. 
- on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' - self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 
'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/__init__.py deleted file mode 100644 index 9b18b30a258c32283cbfc03ba01781a19fd993c1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset -from .cityscapes import CityscapesDataset -from .coco import CocoDataset -from .custom import CustomDataset -from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset, - RepeatDataset) -from .deepfashion import DeepFashionDataset -from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler -from .utils import (NumClassCheckHook, get_loading_pipeline, - replace_ImageToTensor) -from .voc import VOCDataset -from .wider_face import WIDERFaceDataset -from .xml_style import XMLDataset - -__all__ = [ - 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset', - 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset', - 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler', - 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset', - 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES', - 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline', - 'NumClassCheckHook' -] diff --git a/spaces/abidlabs/Acapellify-Frontend/README.md b/spaces/abidlabs/Acapellify-Frontend/README.md deleted file mode 100644 index df4c688d4c4de4db45cac7bc4ac5700c74c41790..0000000000000000000000000000000000000000 --- 
a/spaces/abidlabs/Acapellify-Frontend/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Acapellify Frontend -emoji: 📈 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/model/codecs/obj.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/model/codecs/obj.py deleted file mode 100644 index 6b33bed63de081cb0e4a695de4a03ccbc86891df..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/model/codecs/obj.py +++ /dev/null @@ -1,238 +0,0 @@ -import os - -import pyglet - -from pyglet.gl import GL_TRIANGLES -from pyglet.util import asstr - -from .. import Model, Material, MaterialGroup, TexturedMaterialGroup -from . import ModelDecodeException, ModelDecoder - - -class Mesh: - def __init__(self, name): - self.name = name - self.material = None - - self.indices = [] - self.vertices = [] - self.normals = [] - self.tex_coords = [] - self.colors = [] - - -def load_material_library(filename): - file = open(filename, 'r') - - name = None - diffuse = [1.0, 1.0, 1.0] - ambient = [1.0, 1.0, 1.0] - specular = [1.0, 1.0, 1.0] - emission = [0.0, 0.0, 0.0] - shininess = 100.0 - opacity = 1.0 - texture_name = None - - matlib = {} - - for line in file: - if line.startswith('#'): - continue - values = line.split() - if not values: - continue - - if values[0] == 'newmtl': - if name is not None: - # save previous material - for item in (diffuse, ambient, specular, emission): - item.append(opacity) - matlib[name] = Material(name, diffuse, ambient, specular, emission, shininess, texture_name) - name = values[1] - - elif name is None: - raise ModelDecodeException(f'Expected "newmtl" in {filename}') - - try: - if values[0] == 'Kd': - diffuse = list(map(float, values[1:])) - elif values[0] == 'Ka': - ambient = list(map(float, values[1:])) - elif values[0] == 'Ks': - specular = list(map(float, values[1:])) - elif values[0] == 'Ke': - emission = list(map(float, values[1:])) - elif values[0] == 'Ns': - shininess = float(values[1]) # Blender exports 1~1000 - shininess = (shininess * 128) / 1000 # Normalize to 1~128 for OpenGL - elif values[0] == 'd': - opacity = float(values[1]) - elif values[0] == 'map_Kd': - texture_name = values[1] - - except BaseException as ex: - raise ModelDecodeException('Parsing error in {0}.'.format((filename, ex))) - - file.close() - - for item in (diffuse, ambient, specular, emission): - item.append(opacity) - - matlib[name] = Material(name, diffuse, ambient, specular, emission, shininess, texture_name) - - return matlib - - -def parse_obj_file(filename, file=None): - materials = {} - mesh_list = [] - - location = os.path.dirname(filename) - - try: - if file is None: - with open(filename, 'r') as f: - file_contents = f.read() - else: - file_contents = asstr(file.read()) - except (UnicodeDecodeError, OSError): - raise ModelDecodeException - - material = None - mesh = None - - vertices = [[0., 0., 0.]] - normals = [[0., 0., 0.]] - tex_coords = [[0., 0.]] - - diffuse = [1.0, 1.0, 1.0, 1.0] - ambient = [1.0, 1.0, 1.0, 1.0] - specular = [1.0, 1.0, 1.0, 1.0] - emission = [0.0, 0.0, 0.0, 1.0] - shininess = 100.0 - - default_material = Material("Default", diffuse, ambient, specular, emission, shininess) - - for line in 
file_contents.splitlines(): - - if line.startswith('#'): - continue - values = line.split() - if not values: - continue - - if values[0] == 'v': - vertices.append(list(map(float, values[1:4]))) - elif values[0] == 'vn': - normals.append(list(map(float, values[1:4]))) - elif values[0] == 'vt': - tex_coords.append(list(map(float, values[1:3]))) - - elif values[0] == 'mtllib': - material_abspath = os.path.join(location, values[1]) - materials = load_material_library(filename=material_abspath) - - elif values[0] in ('usemtl', 'usemat'): - material = materials.get(values[1]) - if mesh is not None: - mesh.material = material - - elif values[0] == 'o': - mesh = Mesh(name=values[1]) - mesh_list.append(mesh) - - elif values[0] == 'f': - if mesh is None: - mesh = Mesh(name='') - mesh_list.append(mesh) - if material is None: - material = default_material - if mesh.material is None: - mesh.material = material - - # For fan triangulation, remember first and latest vertices - n1 = None - nlast = None - t1 = None - tlast = None - v1 = None - vlast = None - - for i, v in enumerate(values[1:]): - v_i, t_i, n_i = (list(map(int, [j or 0 for j in v.split('/')])) + [0, 0])[:3] - if v_i < 0: - v_i += len(vertices) - 1 - if t_i < 0: - t_i += len(tex_coords) - 1 - if n_i < 0: - n_i += len(normals) - 1 - - mesh.normals += normals[n_i] - mesh.tex_coords += tex_coords[t_i] - mesh.vertices += vertices[v_i] - - if i >= 3: - # Triangulate - mesh.normals += n1 + nlast - mesh.tex_coords += t1 + tlast - mesh.vertices += v1 + vlast - - if i == 0: - n1 = normals[n_i] - t1 = tex_coords[t_i] - v1 = vertices[v_i] - nlast = normals[n_i] - tlast = tex_coords[t_i] - vlast = vertices[v_i] - - return mesh_list - - -################################################### -# Decoder definitions start here: -################################################### - -class OBJModelDecoder(ModelDecoder): - def get_file_extensions(self): - return ['.obj'] - - def decode(self, filename, file, batch, group=None): - - if not batch: - batch = pyglet.graphics.Batch() - - mesh_list = parse_obj_file(filename=filename, file=file) - - vertex_lists = [] - groups = [] - - for mesh in mesh_list: - material = mesh.material - count = len(mesh.vertices) // 3 - if material.texture_name: - program = pyglet.model.get_default_textured_shader() - texture = pyglet.resource.texture(material.texture_name) - matgroup = TexturedMaterialGroup(material, program, texture, parent=group) - vertex_lists.append(program.vertex_list(count, GL_TRIANGLES, batch, matgroup, - vertices=('f', mesh.vertices), - normals=('f', mesh.normals), - tex_coords=('f', mesh.tex_coords), - colors=('f', material.diffuse * count))) - else: - program = pyglet.model.get_default_shader() - matgroup = MaterialGroup(material, program, parent=group) - vertex_lists.append(program.vertex_list(count, GL_TRIANGLES, batch, matgroup, - vertices=('f', mesh.vertices), - normals=('f', mesh.normals), - colors=('f', material.diffuse * count))) - groups.append(matgroup) - - return Model(vertex_lists=vertex_lists, groups=groups, batch=batch) - - -def get_decoders(): - return [OBJModelDecoder()] - - -def get_encoders(): - return [] diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/node.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/node.py deleted file mode 100644 index 1f37f7856cc732a37dc58253022a7c331489493e..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/node.py +++ /dev/null @@ -1,263 +0,0 
@@ -"""Nodes, conforming to the glTF 2.0 standards as specified in -https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-node - -Author: Matthew Matl -""" -import numpy as np - -import trimesh.transformations as transformations - -from .camera import Camera -from .mesh import Mesh -from .light import Light - - -class Node(object): - """A node in the node hierarchy. - - Parameters - ---------- - name : str, optional - The user-defined name of this object. - camera : :class:`Camera`, optional - The camera in this node. - children : list of :class:`Node` - The children of this node. - skin : int, optional - The index of the skin referenced by this node. - matrix : (4,4) float, optional - A floating-point 4x4 transformation matrix. - mesh : :class:`Mesh`, optional - The mesh in this node. - rotation : (4,) float, optional - The node's unit quaternion in the order (x, y, z, w), where - w is the scalar. - scale : (3,) float, optional - The node's non-uniform scale, given as the scaling factors along the x, - y, and z axes. - translation : (3,) float, optional - The node's translation along the x, y, and z axes. - weights : (n,) float - The weights of the instantiated Morph Target. Number of elements must - match number of Morph Targets of used mesh. - light : :class:`Light`, optional - The light in this node. - """ - - def __init__(self, - name=None, - camera=None, - children=None, - skin=None, - matrix=None, - mesh=None, - rotation=None, - scale=None, - translation=None, - weights=None, - light=None): - # Set defaults - if children is None: - children = [] - - self._matrix = None - self._scale = None - self._rotation = None - self._translation = None - if matrix is None: - if rotation is None: - rotation = np.array([0.0, 0.0, 0.0, 1.0]) - if translation is None: - translation = np.zeros(3) - if scale is None: - scale = np.ones(3) - self.rotation = rotation - self.translation = translation - self.scale = scale - else: - self.matrix = matrix - - self.name = name - self.camera = camera - self.children = children - self.skin = skin - self.mesh = mesh - self.weights = weights - self.light = light - - @property - def name(self): - """str : The user-defined name of this object. - """ - return self._name - - @name.setter - def name(self, value): - if value is not None: - value = str(value) - self._name = value - - @property - def camera(self): - """:class:`Camera` : The camera in this node. - """ - return self._camera - - @camera.setter - def camera(self, value): - if value is not None and not isinstance(value, Camera): - raise TypeError('Value must be a camera') - self._camera = value - - @property - def children(self): - """list of :class:`Node` : The children of this node. - """ - return self._children - - @children.setter - def children(self, value): - self._children = value - - @property - def skin(self): - """int : The skin index for this node. - """ - return self._skin - - @skin.setter - def skin(self, value): - self._skin = value - - @property - def mesh(self): - """:class:`Mesh` : The mesh in this node. - """ - return self._mesh - - @mesh.setter - def mesh(self, value): - if value is not None and not isinstance(value, Mesh): - raise TypeError('Value must be a mesh') - self._mesh = value - - @property - def light(self): - """:class:`Light` : The light in this node. 
- """ - return self._light - - @light.setter - def light(self, value): - if value is not None and not isinstance(value, Light): - raise TypeError('Value must be a light') - self._light = value - - @property - def rotation(self): - """(4,) float : The xyzw quaternion for this node. - """ - return self._rotation - - @rotation.setter - def rotation(self, value): - value = np.asanyarray(value) - if value.shape != (4,): - raise ValueError('Quaternion must be a (4,) vector') - if np.abs(np.linalg.norm(value) - 1.0) > 1e-3: - raise ValueError('Quaternion must have norm == 1.0') - self._rotation = value - self._matrix = None - - @property - def translation(self): - """(3,) float : The translation for this node. - """ - return self._translation - - @translation.setter - def translation(self, value): - value = np.asanyarray(value) - if value.shape != (3,): - raise ValueError('Translation must be a (3,) vector') - self._translation = value - self._matrix = None - - @property - def scale(self): - """(3,) float : The scale for this node. - """ - return self._scale - - @scale.setter - def scale(self, value): - value = np.asanyarray(value) - if value.shape != (3,): - raise ValueError('Scale must be a (3,) vector') - self._scale = value - self._matrix = None - - @property - def matrix(self): - """(4,4) float : The homogenous transform matrix for this node. - - Note that this matrix's elements are not settable, - it's just a copy of the internal matrix. You can set the whole - matrix, but not an individual element. - """ - if self._matrix is None: - self._matrix = self._m_from_tqs( - self.translation, self.rotation, self.scale - ) - return self._matrix.copy() - - @matrix.setter - def matrix(self, value): - value = np.asanyarray(value) - if value.shape != (4,4): - raise ValueError('Matrix must be a 4x4 numpy ndarray') - if not np.allclose(value[3,:], np.array([0.0, 0.0, 0.0, 1.0])): - raise ValueError('Bottom row of matrix must be [0,0,0,1]') - self.rotation = Node._q_from_m(value) - self.scale = Node._s_from_m(value) - self.translation = Node._t_from_m(value) - self._matrix = value - - @staticmethod - def _t_from_m(m): - return m[:3,3] - - @staticmethod - def _r_from_m(m): - U = m[:3,:3] - norms = np.linalg.norm(U.T, axis=1) - return U / norms - - @staticmethod - def _q_from_m(m): - M = np.eye(4) - M[:3,:3] = Node._r_from_m(m) - q_wxyz = transformations.quaternion_from_matrix(M) - return np.roll(q_wxyz, -1) - - @staticmethod - def _s_from_m(m): - return np.linalg.norm(m[:3,:3].T, axis=1) - - @staticmethod - def _r_from_q(q): - q_wxyz = np.roll(q, 1) - return transformations.quaternion_matrix(q_wxyz)[:3,:3] - - @staticmethod - def _m_from_tqs(t, q, s): - S = np.eye(4) - S[:3,:3] = np.diag(s) - - R = np.eye(4) - R[:3,:3] = Node._r_from_q(q) - - T = np.eye(4) - T[:3,3] = t - - return T.dot(R.dot(S)) diff --git a/spaces/ahdsoft/Persian-Topic-Modeling/topic_modeling.py b/spaces/ahdsoft/Persian-Topic-Modeling/topic_modeling.py deleted file mode 100644 index 05ce7cecb0b4a75cb8ffa36cb2ba448bca89f689..0000000000000000000000000000000000000000 --- a/spaces/ahdsoft/Persian-Topic-Modeling/topic_modeling.py +++ /dev/null @@ -1,100 +0,0 @@ -from bertopic import BERTopic -from scipy.cluster import hierarchy as sch -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.datasets import fetch_20newsgroups -from bertopic import BERTopic -# from wordcloud import WordCloud -import matplotlib.pyplot as plt -from wordcloud_fa import WordCloudFa -import os - -import utils - -embed_model = 
os.environ.get("EMBED_MODEL") - -class TopicModeling: - def __init__(self, stopwords_path='./assets/stopwords.txt', specific_stopwords_path='./assets/shahrara_stopwords.txt', embedding_model= embed_model) -> None: - stopwords = open(stopwords_path).read().splitlines() - specific_stopwords = open(specific_stopwords_path).read().splitlines() - stopwords = stopwords + specific_stopwords - vectorizer_model = CountVectorizer(stop_words=stopwords) - self.topic_model = BERTopic(embedding_model=embedding_model, vectorizer_model=vectorizer_model, verbose=True) - - - def add_data(self, df): - print('add data') - # df = df.dropna() - df['FINAL_CONCATED_TEXT_FOR_TOPIC'] = df.apply(lambda x: '. '.join(x), axis=1) - df['FINAL_CONCATED_TEXT_FOR_TOPIC'] = df['FINAL_CONCATED_TEXT_FOR_TOPIC'].apply(utils.normalize) - docs = list(set(df['FINAL_CONCATED_TEXT_FOR_TOPIC'].tolist())) - docs = [d for d in docs if d and type(d) == str and len(d.split())>3] - print('len docs ', len(docs)) - return docs - - - def fit(self, docs): - print('self docs : ', len(docs)) - print(docs[:5]) - self.topics, self.probs = self.topic_model.fit_transform(docs) - - def get_barchart(self): - return self.topic_model.visualize_barchart() - - - def get_vis_topics(self): - return self.topic_model.visualize_topics() - - - def get_h_topics(self): - linkage_function = lambda x: sch.linkage(x, 'single', optimal_ordering=True) - hierarchical_topics = self.topic_model.hierarchical_topics(self.docs, linkage_function=linkage_function) - return self.topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics) - - def topic_over_tome(self): - # # Create topics over time - # model = BERTopic(verbose=True) - topics_over_time = self.topic_model.topics_over_time(self.docs, self.timestamps, datetime_format="%m-%d") - return self.topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=5) - - - def visualize_documents(self, docs): - self.topic_model.visualize_documents(docs, embeddings=embeddings) - reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings) - topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings) - - - def get_topic_info(self): - return self.topic_model.get_topic_info() - - - def get_wordcloud(self): - all_plts = [] - topic_counts = len(self.topic_model.get_topic_info()) - if topic_counts > 30: - topic_counts = 30 - print('topic count ', topic_counts) - for topic_index in range(topic_counts): - print(topic_index) - top_n_words = self.topic_model.get_topic(topic_index) - if type(top_n_words) != bool: - text = {word: value for word, value in top_n_words} - wc = WordCloudFa(background_color="white", max_words=1000, no_reshape=True) - wc.generate_from_frequencies(text) - plt.imshow(wc, interpolation="bilinear") - plt.axis("off") - fig = plt.figure() - all_plts.append(fig) - # plt.show() - return all_plts - - def get_wordcloud_by_topic(self, topic_index): - top_n_words = self.topic_model.get_topic(topic_index) - if type(top_n_words) != bool: - text = {word: value for word, value in top_n_words} - wc = WordCloudFa(background_color="white", max_words=1000, no_reshape=True) - wc.generate_from_frequencies(text) - plt.imshow(wc, interpolation="bilinear") - plt.axis("off") - fig = plt.figure() - return fig - return None \ No newline at end of file diff --git a/spaces/aidiary/tts-ljspeech-demo/README.md b/spaces/aidiary/tts-ljspeech-demo/README.md deleted file mode 100644 index 
97e7012e83053bfec7805320e9cb5c0e18e4848f..0000000000000000000000000000000000000000 --- a/spaces/aidiary/tts-ljspeech-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TTS LJSpeech Demo -emoji: 🌖 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/BlendGAN/gen_video.py b/spaces/akhaliq/BlendGAN/gen_video.py deleted file mode 100644 index 1602c6c1326032f6ebce2f176aae10050fc5afb8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/BlendGAN/gen_video.py +++ /dev/null @@ -1,210 +0,0 @@ -import argparse -import os - -import cv2 -import numpy as np -import torch - -from model import Generator -from psp_encoder.psp_encoders import PSPEncoder -from utils import ten2cv, cv2ten - -import glob -from tqdm import tqdm -import random - - -seed = 0 - -random.seed(seed) -np.random.seed(seed) -torch.manual_seed(seed) -torch.cuda.manual_seed_all(seed) - - -def sigmoid(x, w=1): - return 1. / (1 + np.exp(-w * x)) - - -def get_alphas(start=-5, end=5, step=0.5, len_tail=10): - return [0] + [sigmoid(alpha) for alpha in np.arange(start, end, step)] + [1] * len_tail - - -def slide(entries, margin=32): - """Returns a sliding reference window. - Args: - entries: a list containing two reference images, x_prev and x_next, - both of which has a shape (1, 3, H, W) - Returns: - canvas: output slide of shape (num_frames, 3, H*2, W+margin) - """ - _, C, H, W = entries[0].shape - alphas = get_alphas() - T = len(alphas) # number of frames - - canvas = - torch.ones((T, C, H*2, W + margin)) - merged = torch.cat(entries, dim=2) # (1, 3, H*2, W) - for t, alpha in enumerate(alphas): - top = int(H * (1 - alpha)) # top, bottom for canvas - bottom = H * 2 - m_top = 0 # top, bottom for merged - m_bottom = 2 * H - top - canvas[t, :, top:bottom, :W] = merged[:, :, m_top:m_bottom, :] - return canvas - - -def slide_one_window(entries, margin=32): - """Returns a sliding reference window. 
- Args: - entries: a list containing two reference images, x_prev and x_next, - both of which has a shape (1, 3, H, W) - Returns: - canvas: output slide of shape (num_frames, 3, H, W+margin) - """ - _, C, H, W = entries[0].shape - device = entries[0].device - alphas = get_alphas() - T = len(alphas) # number of frames - - canvas = - torch.ones((T, C, H, W + margin)).to(device) - merged = torch.cat(entries, dim=2) # (1, 3, H*2, W) - for t, alpha in enumerate(alphas): - m_top = int(H * alpha) # top, bottom for merged - m_bottom = m_top + H - canvas[t, :, :, :W] = merged[:, :, m_top:m_bottom, :] - return canvas - - -def tensor2ndarray255(images): - images = torch.clamp(images * 0.5 + 0.5, 0, 1) - return (images.cpu().numpy().transpose(0, 2, 3, 1) * 255).astype(np.uint8) - - -@torch.no_grad() -def interpolate(args, g, sample_in, sample_style_prev, sample_style_next): - ''' returns T x C x H x W ''' - frames_ten = [] - alphas = get_alphas() - - for alpha in alphas: - sample_style = torch.lerp(sample_style_prev, sample_style_next, alpha) - frame_ten, _ = g([sample_in], z_embed=sample_style, add_weight_index=args.add_weight_index, - input_is_latent=True, return_latents=False, randomize_noise=False) - frames_ten.append(frame_ten) - frames_ten = torch.cat(frames_ten) - return frames_ten - - -@torch.no_grad() -def video_ref(args, g, psp_encoder, img_in_ten, img_style_tens): - video = [] - sample_in = psp_encoder(img_in_ten) - - img_style_ten_prev, sample_style_prev = None, None - - for idx in tqdm(range(len(img_style_tens))): - img_style_ten_next = img_style_tens[idx] - sample_style_next = g_ema.get_z_embed(img_style_ten_next) - if img_style_ten_prev is None: - img_style_ten_prev, sample_style_prev = img_style_ten_next, sample_style_next - continue - - interpolated = interpolate(args, g, sample_in, sample_style_prev, sample_style_next) - entries = [img_style_ten_prev, img_style_ten_next] - slided = slide_one_window(entries, margin=0) # [T, C, H, W) - frames = torch.cat([img_in_ten.expand_as(interpolated), slided, interpolated], dim=3).cpu() # [T, C, H, W*3) - video.append(frames) - img_style_ten_prev, sample_style_prev = img_style_ten_next, sample_style_next - - # append last frame 10 time - for _ in range(10): - video.append(frames[-1:]) - video = tensor2ndarray255(torch.cat(video)) # [T, H, W*3, C) - - return video - - -def save_video(fname, images, output_fps=30): - print('save video to: %s' % fname) - - assert isinstance(images, np.ndarray), "images should be np.array: NHWC" - num_frames, height, width, channels = images.shape - - fourcc = cv2.VideoWriter_fourcc(*'XVID') - videoWriter = cv2.VideoWriter(fname, fourcc, output_fps, (width, height)) - - for idx in tqdm(range(num_frames)): - frame = images[idx][:, :, ::-1] # [H, W*3, C) - videoWriter.write(frame) - - videoWriter.release() - - -if __name__ == '__main__': - device = 'cuda' - - parser = argparse.ArgumentParser() - - parser.add_argument('--size', type=int, default=1024) - - parser.add_argument('--ckpt', type=str, default='', help='path to BlendGAN checkpoint') - parser.add_argument('--psp_encoder_ckpt', type=str, default='', help='path to psp_encoder checkpoint') - - parser.add_argument('--style_img_path', type=str, default=None, help='path to style image') - parser.add_argument('--input_img_path', type=str, default=None, help='path to input image') - parser.add_argument('--add_weight_index', type=int, default=7) - - parser.add_argument('--channel_multiplier', type=int, default=2) - parser.add_argument('--outdir', type=str, default="") - 
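The frame schedule used by `video_ref` above comes from `get_alphas()`: a linear ramp pushed through a sigmoid, plus a tail of 1.0 values that hold the final style for a few frames. A minimal standalone sketch of that schedule (illustrative only; it reuses the same defaults as the functions above):

import numpy as np

def blend_schedule(start=-5, end=5, step=0.5, len_tail=10):
    # the sigmoid turns the linear ramp into a smooth ease-in/ease-out 0 -> 1 curve
    ramp = 1.0 / (1.0 + np.exp(-np.arange(start, end, step)))
    return [0.0] + ramp.tolist() + [1.0] * len_tail

alphas = blend_schedule()
# 1 leading zero + 20 sigmoid steps + 10 trailing ones = 31 frames per style transition
assert len(alphas) == 31 and alphas[0] == 0.0 and alphas[-1] == 1.0
# each interpolated frame is then generated from torch.lerp(style_prev, style_next, alpha)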
- args = parser.parse_args() - - outdir = args.outdir - if not os.path.exists(outdir): - os.makedirs(outdir, exist_ok=True) - - args.latent = 512 - args.n_mlp = 8 - - checkpoint = torch.load(args.ckpt) - model_dict = checkpoint['g_ema'] - print('ckpt: ', args.ckpt) - - g_ema = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema.load_state_dict(model_dict) - g_ema.eval() - - psp_encoder = PSPEncoder(args.psp_encoder_ckpt, output_size=args.size).to(device) - psp_encoder.eval() - - input_img_paths = sorted(glob.glob(os.path.join(args.input_img_path, '*.*'))) - style_img_paths = sorted(glob.glob(os.path.join(args.style_img_path, '*.*')))[:] - - for input_img_path in input_img_paths: - print('process: %s' % input_img_path) - - name_in = os.path.splitext(os.path.basename(input_img_path))[0] - img_in = cv2.imread(input_img_path, 1) - img_in = cv2.resize(img_in, (args.size, args.size)) - img_in_ten = cv2ten(img_in, device) - - img_style_tens = [] - - style_img_path_rand = random.choices(style_img_paths, k=8) - for style_img_path in style_img_path_rand: - name_style = os.path.splitext(os.path.basename(style_img_path))[0] - img_style = cv2.imread(style_img_path, 1) - img_style = cv2.resize(img_style, (args.size, args.size)) - img_style_ten = cv2ten(img_style, device) - - img_style_tens.append(img_style_ten) - - fname = f'{args.outdir}/{name_in}.mp4' - video = video_ref(args, g_ema, psp_encoder, img_in_ten, img_style_tens) - - save_video(fname, video, output_fps=30) - - print('Done!') - diff --git a/spaces/akhaliq/Detic/detic/data/datasets/register_oid.py b/spaces/akhaliq/Detic/detic/data/datasets/register_oid.py deleted file mode 100644 index bd281f53f07074740b453838ba32f42f81a28383..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/data/datasets/register_oid.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Xingyi Zhou from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/coco.py -import copy -import io -import logging -import contextlib -import os -import datetime -import json -import numpy as np - -from PIL import Image - -from fvcore.common.timer import Timer -from fvcore.common.file_io import PathManager, file_lock -from detectron2.structures import BoxMode, PolygonMasks, Boxes -from detectron2.data import DatasetCatalog, MetadataCatalog - -logger = logging.getLogger(__name__) - -""" -This file contains functions to register a COCO-format dataset to the DatasetCatalog. -""" - -__all__ = ["register_coco_instances", "register_coco_panoptic_separated"] - - - -def register_oid_instances(name, metadata, json_file, image_root): - """ - """ - # 1. register a function which returns dicts - DatasetCatalog.register(name, lambda: load_coco_json_mem_efficient( - json_file, image_root, name)) - - # 2. 
Optionally, add metadata about this dataset, - # since they might be useful in evaluation, visualization or logging - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="oid", **metadata - ) - - -def load_coco_json_mem_efficient(json_file, image_root, dataset_name=None, extra_annotation_keys=None): - """ - Actually not mem efficient - """ - from pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - # The categories in a custom json file may not be sorted. - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ - Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. - """ - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - # sort indices for reproducible results - img_ids = sorted(coco_api.imgs.keys()) - imgs = coco_api.loadImgs(img_ids) - logger.info("Loaded {} images in COCO format from {}".format(len(imgs), json_file)) - - dataset_dicts = [] - - ann_keys = ["iscrowd", "bbox", "category_id"] + (extra_annotation_keys or []) - - for img_dict in imgs: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - anno_dict_list = coco_api.imgToAnns[image_id] - if 'neg_category_ids' in img_dict: - record['neg_category_ids'] = \ - [id_map[x] for x in img_dict['neg_category_ids']] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0 - - obj = {key: anno[key] for key in ann_keys if key in anno} - - segm = anno.get("segmentation", None) - if segm: # either list[list[float]] or dict(RLE) - if not isinstance(segm, dict): - # filter out invalid polygons (< 3 points) - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue # ignore this instance - obj["segmentation"] = segm - - obj["bbox_mode"] = BoxMode.XYWH_ABS - - if id_map: - obj["category_id"] = id_map[obj["category_id"]] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - del coco_api - return dataset_dicts \ No newline at end of file diff --git a/spaces/akhaliq/GPEN/lpips/trainer.py b/spaces/akhaliq/GPEN/lpips/trainer.py deleted file mode 100644 index 52b6112cdc79db7a429ec52e60fcefdb756f776b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/lpips/trainer.py +++ /dev/null @@ -1,280 +0,0 @@ - -from __future__ import absolute_import - -import numpy as np -import torch -from torch import nn -from collections import OrderedDict -from torch.autograd import Variable -from scipy.ndimage import zoom -from tqdm import tqdm -import lpips -import os - - -class Trainer(): - def name(self): - return self.model_name - - def initialize(self, model='lpips', net='alex', colorspace='Lab', 
pnet_rand=False, pnet_tune=False, model_path=None, - use_gpu=True, printNet=False, spatial=False, - is_train=False, lr=.0001, beta1=0.5, version='0.1', gpu_ids=[0]): - ''' - INPUTS - model - ['lpips'] for linearly calibrated network - ['baseline'] for off-the-shelf network - ['L2'] for L2 distance in Lab colorspace - ['SSIM'] for ssim in RGB colorspace - net - ['squeeze','alex','vgg'] - model_path - if None, will look in weights/[NET_NAME].pth - colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM - use_gpu - bool - whether or not to use a GPU - printNet - bool - whether or not to print network architecture out - spatial - bool - whether to output an array containing varying distances across spatial dimensions - is_train - bool - [True] for training mode - lr - float - initial learning rate - beta1 - float - initial momentum term for adam - version - 0.1 for latest, 0.0 was original (with a bug) - gpu_ids - int array - [0] by default, gpus to use - ''' - self.use_gpu = use_gpu - self.gpu_ids = gpu_ids - self.model = model - self.net = net - self.is_train = is_train - self.spatial = spatial - self.model_name = '%s [%s]'%(model,net) - - if(self.model == 'lpips'): # pretrained net + linear layer - self.net = lpips.LPIPS(pretrained=not is_train, net=net, version=version, lpips=True, spatial=spatial, - pnet_rand=pnet_rand, pnet_tune=pnet_tune, - use_dropout=True, model_path=model_path, eval_mode=False) - elif(self.model=='baseline'): # pretrained network - self.net = lpips.LPIPS(pnet_rand=pnet_rand, net=net, lpips=False) - elif(self.model in ['L2','l2']): - self.net = lpips.L2(use_gpu=use_gpu,colorspace=colorspace) # not really a network, only for testing - self.model_name = 'L2' - elif(self.model in ['DSSIM','dssim','SSIM','ssim']): - self.net = lpips.DSSIM(use_gpu=use_gpu,colorspace=colorspace) - self.model_name = 'SSIM' - else: - raise ValueError("Model [%s] not recognized." 
% self.model) - - self.parameters = list(self.net.parameters()) - - if self.is_train: # training mode - # extra network on top to go from distances (d0,d1) => predicted human judgment (h*) - self.rankLoss = lpips.BCERankingLoss() - self.parameters += list(self.rankLoss.net.parameters()) - self.lr = lr - self.old_lr = lr - self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999)) - else: # test mode - self.net.eval() - - if(use_gpu): - self.net.to(gpu_ids[0]) - self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids) - if(self.is_train): - self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0 - - if(printNet): - print('---------- Networks initialized -------------') - networks.print_network(self.net) - print('-----------------------------------------------') - - def forward(self, in0, in1, retPerLayer=False): - ''' Function computes the distance between image patches in0 and in1 - INPUTS - in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1] - OUTPUT - computed distances between in0 and in1 - ''' - - return self.net.forward(in0, in1, retPerLayer=retPerLayer) - - # ***** TRAINING FUNCTIONS ***** - def optimize_parameters(self): - self.forward_train() - self.optimizer_net.zero_grad() - self.backward_train() - self.optimizer_net.step() - self.clamp_weights() - - def clamp_weights(self): - for module in self.net.modules(): - if(hasattr(module, 'weight') and module.kernel_size==(1,1)): - module.weight.data = torch.clamp(module.weight.data,min=0) - - def set_input(self, data): - self.input_ref = data['ref'] - self.input_p0 = data['p0'] - self.input_p1 = data['p1'] - self.input_judge = data['judge'] - - if(self.use_gpu): - self.input_ref = self.input_ref.to(device=self.gpu_ids[0]) - self.input_p0 = self.input_p0.to(device=self.gpu_ids[0]) - self.input_p1 = self.input_p1.to(device=self.gpu_ids[0]) - self.input_judge = self.input_judge.to(device=self.gpu_ids[0]) - - self.var_ref = Variable(self.input_ref,requires_grad=True) - self.var_p0 = Variable(self.input_p0,requires_grad=True) - self.var_p1 = Variable(self.input_p1,requires_grad=True) - - def forward_train(self): # run forward pass - self.d0 = self.forward(self.var_ref, self.var_p0) - self.d1 = self.forward(self.var_ref, self.var_p1) - self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge) - - self.var_judge = Variable(1.*self.input_judge).view(self.d0.size()) - - self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.) 
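For reference on the target just constructed: `judge` holds the fraction of human raters who preferred patch p1, so `var_judge * 2. - 1.` rescales it from [0, 1] to the [-1, 1] range consumed by the BCE ranking loss, and the 2AFC accuracy tracked in `self.acc_r` is the rate at which `d1 < d0` agrees with that preference. A small numpy sketch of that agreement score, following the description in the `score_2afc_dataset` docstring further below (all numbers are made up):

import numpy as np

d0 = np.array([0.31, 0.12, 0.40])     # distance(ref, p0) for three triplets
d1 = np.array([0.10, 0.25, 0.38])     # distance(ref, p1) for the same triplets
judge = np.array([0.9, 0.2, 0.5])     # fraction of humans preferring p1

d1_lt_d0 = (d1 < d0).astype(float)    # the model "prefers" p1 where it is closer to ref
agreement = d1_lt_d0 * judge + (1 - d1_lt_d0) * (1 - judge)
print(agreement.mean())               # ~0.733 for these made-up numbers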
- - return self.loss_total - - def backward_train(self): - torch.mean(self.loss_total).backward() - - def compute_accuracy(self,d0,d1,judge): - ''' d0, d1 are Variables, judge is a Tensor ''' - d1_lt_d0 = (d1 %f' % (type,self.old_lr, lr)) - self.old_lr = lr - - - def get_image_paths(self): - return self.image_paths - - def save_done(self, flag=False): - np.save(os.path.join(self.save_dir, 'done_flag'),flag) - np.savetxt(os.path.join(self.save_dir, 'done_flag'),[flag,],fmt='%i') - - -def score_2afc_dataset(data_loader, func, name=''): - ''' Function computes Two Alternative Forced Choice (2AFC) score using - distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return numpy array of length N - OUTPUTS - [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators - [1] - dictionary with following elements - d0s,d1s - N arrays containing distances between reference patch to perturbed patches - gts - N array in [0,1], preferred patch selected by human evaluators - (closer to "0" for left patch p0, "1" for right patch p1, - "0.6" means 60pct people preferred right patch, 40pct preferred left) - scores - N array in [0,1], corresponding to what percentage function agreed with humans - CONSTS - N - number of test triplets in data_loader - ''' - - d0s = [] - d1s = [] - gts = [] - - for data in tqdm(data_loader.load_data(), desc=name): - d0s+=func(data['ref'],data['p0']).data.cpu().numpy().flatten().tolist() - d1s+=func(data['ref'],data['p1']).data.cpu().numpy().flatten().tolist() - gts+=data['judge'].cpu().numpy().flatten().tolist() - - d0s = np.array(d0s) - d1s = np.array(d1s) - gts = np.array(gts) - scores = (d0s`_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = ann["image_id"] - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" 
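Each element of `ret` follows detectron2's standard dataset-dict format; a schematic example of a single record (file names, ids, and the area value below are invented purely for illustration):

example_record = {
    "file_name": "datasets/ADEChallengeData2016/images/training/ADE_train_00000001.jpg",
    "image_id": "ADE_train_00000001",
    "pan_seg_file_name": "datasets/ADEChallengeData2016/ade20k_panoptic_train/ADE_train_00000001.png",
    "sem_seg_file_name": "datasets/ADEChallengeData2016/annotations_detectron2/training/ADE_train_00000001.png",
    "segments_info": [
        # category_id has already been remapped by _convert_category_id to a
        # contiguous id; isthing separates "thing" instances from "stuff" regions
        {"id": 3, "category_id": 7, "isthing": True, "area": 12345},
    ],
}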
- assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_ade20k_panoptic( - name, metadata, image_root, panoptic_root, semantic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of ADE20k panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - Args: - name (str): the name that identifies a dataset, - e.g. "ade20k_panoptic_train" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_ade20k_panoptic_json( - panoptic_json, image_root, panoptic_root, semantic_root, metadata - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="ade20k_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -_PREDEFINED_SPLITS_ADE20K_PANOPTIC = { - "ade20k_panoptic_train": ( - "ADEChallengeData2016/images/training", - "ADEChallengeData2016/ade20k_panoptic_train", - "ADEChallengeData2016/ade20k_panoptic_train.json", - "ADEChallengeData2016/annotations_detectron2/training", - "ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_panoptic_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_panoptic_val", - "ADEChallengeData2016/ade20k_panoptic_val.json", - "ADEChallengeData2016/annotations_detectron2/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in ADE20K_150_CATEGORIES] - stuff_colors = [k["color"] for k in ADE20K_150_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. 
- # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(ADE20K_150_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def register_all_ade20k_panoptic(root): - metadata = get_metadata() - for ( - prefix, - (image_root, panoptic_root, panoptic_json, semantic_root, instance_json), - ) in _PREDEFINED_SPLITS_ADE20K_PANOPTIC.items(): - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic-DeepLab - register_ade20k_panoptic( - prefix, - metadata, - os.path.join(root, image_root), - os.path.join(root, panoptic_root), - os.path.join(root, semantic_root), - os.path.join(root, panoptic_json), - os.path.join(root, instance_json), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_panoptic(_root) diff --git a/spaces/akhaliq/Pyxelate/README.md b/spaces/akhaliq/Pyxelate/README.md deleted file mode 100644 index d9f2ddc2883f955fc048e67f1244b66f9a623294..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Pyxelate/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Pyxelate, Images into Pixel Art -emoji: 🌍 -colorFrom: green -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/SummerTime/app.py b/spaces/akhaliq/SummerTime/app.py deleted file mode 100644 index 57d95fec4cd8f8d998b0c83fa5010196b23e39b1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/app.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -os.system('pip install gradio==2.3.0a0') -import model as st_model -import gradio as gr - - -model = st_model.summarizer() - -def inference(text): - documents = [text] - return model.summarize(documents)[0] - -title = "SummerTime" -description = "Gradio demo for SummerTime: An open-source text summarization toolkit for non-experts. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." -article = "

SummerTime: Text Summarization Toolkit for Non-experts | Github Repo
" - -gr.Interface( - inference, - [gr.inputs.Textbox(label="Input", lines=20)], - gr.outputs.Textbox(label="Output"), - title=title, - description=description, - article=article, - examples=[["""PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""]] - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/deeplab2/data/sample_generator_test.py b/spaces/akhaliq/deeplab2/data/sample_generator_test.py deleted file mode 100644 index 8fa3cb3cbd1a3104aca5ad6fa0e909956a914f8b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/data/sample_generator_test.py +++ /dev/null @@ -1,274 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for sample_generator.""" - -import os - -from absl import flags -import numpy as np -from PIL import Image -import tensorflow as tf - -from deeplab2 import common -from deeplab2.data import data_utils -from deeplab2.data import dataset -from deeplab2.data import sample_generator - -image_utils = tf.keras.preprocessing.image - -flags.DEFINE_string( - 'panoptic_annotation_data', - 'deeplab2/data/testdata/', - 'Path to annotated test image.') -flags.DEFINE_bool('update_golden_data', False, - 'Whether or not to update the golden data for testing.') - -FLAGS = flags.FLAGS - -_FILENAME_PREFIX = 'dummy_000000_000000' -_IMAGE_FOLDER = 'leftImg8bit/' -_TARGET_FOLDER = 'targets/' - - -def _get_groundtruth_image(computed_image_array, groundtruth_image_filename): - if FLAGS.update_golden_data: - image = Image.fromarray(tf.squeeze(computed_image_array).numpy()) - with tf.io.gfile.GFile(groundtruth_image_filename, mode='wb') as fp: - image.save(fp) - return computed_image_array - - with tf.io.gfile.GFile(groundtruth_image_filename, mode='rb') as fp: - image = data_utils.read_image(fp.read()) - # If loaded image has 3 channels, the returned shape is [height, width, 3]. - # If loaded image has 1 channel, the returned shape is [height, width]. - image = np.squeeze(image_utils.img_to_array(image)) - return image - - -def _get_groundtruth_array(computed_image_array, groundtruth_image_filename): - if FLAGS.update_golden_data: - with tf.io.gfile.GFile(groundtruth_image_filename, mode='wb') as fp: - np.save(fp, computed_image_array) - return computed_image_array - with tf.io.gfile.GFile(groundtruth_image_filename, mode='rb') as fp: - # If loaded data has C>1 channels, the returned shape is [height, width, C]. - # If loaded data has 1 channel, the returned shape is [height, width]. 
- array = np.squeeze(np.load(fp)) - return array - - -class PanopticSampleGeneratorTest(tf.test.TestCase): - - def setUp(self): - super().setUp() - self._test_img_data_dir = os.path.join( - FLAGS.test_srcdir, - FLAGS.panoptic_annotation_data, - _IMAGE_FOLDER) - self._test_gt_data_dir = os.path.join( - FLAGS.test_srcdir, - FLAGS.panoptic_annotation_data) - self._test_target_data_dir = os.path.join( - FLAGS.test_srcdir, - FLAGS.panoptic_annotation_data, - _TARGET_FOLDER) - image_path = self._test_img_data_dir + _FILENAME_PREFIX + '_leftImg8bit.png' - with tf.io.gfile.GFile(image_path, 'rb') as image_file: - rgb_image = data_utils.read_image(image_file.read()) - self._rgb_image = tf.convert_to_tensor(np.array(rgb_image)) - label_path = self._test_gt_data_dir + 'dummy_gt_for_vps.png' - with tf.io.gfile.GFile(label_path, 'rb') as label_file: - label = data_utils.read_image(label_file.read()) - self._label = tf.expand_dims(tf.convert_to_tensor( - np.dot(np.array(label), [1, 256, 256 * 256])), -1) - - def test_input_generator(self): - tf.random.set_seed(0) - np.random.seed(0) - small_instances = {'threshold': 4096, 'weight': 3.0} - generator = sample_generator.PanopticSampleGenerator( - dataset.CITYSCAPES_PANOPTIC_INFORMATION._asdict(), - focus_small_instances=small_instances, - is_training=True, - crop_size=[769, 769], - thing_id_mask_annotations=True) - input_sample = { - 'image': self._rgb_image, - 'image_name': 'test_image', - 'label': self._label, - 'height': 800, - 'width': 800 - } - sample = generator(input_sample) - - self.assertIn(common.IMAGE, sample) - self.assertIn(common.GT_SEMANTIC_KEY, sample) - self.assertIn(common.GT_PANOPTIC_KEY, sample) - self.assertIn(common.GT_INSTANCE_CENTER_KEY, sample) - self.assertIn(common.GT_INSTANCE_REGRESSION_KEY, sample) - self.assertIn(common.GT_IS_CROWD, sample) - self.assertIn(common.GT_THING_ID_MASK_KEY, sample) - self.assertIn(common.GT_THING_ID_CLASS_KEY, sample) - self.assertIn(common.SEMANTIC_LOSS_WEIGHT_KEY, sample) - self.assertIn(common.CENTER_LOSS_WEIGHT_KEY, sample) - self.assertIn(common.REGRESSION_LOSS_WEIGHT_KEY, sample) - - self.assertListEqual(sample[common.IMAGE].shape.as_list(), [769, 769, 3]) - self.assertListEqual(sample[common.GT_SEMANTIC_KEY].shape.as_list(), - [769, 769]) - self.assertListEqual(sample[common.GT_PANOPTIC_KEY].shape.as_list(), - [769, 769]) - self.assertListEqual(sample[common.GT_INSTANCE_CENTER_KEY].shape.as_list(), - [769, 769]) - self.assertListEqual( - sample[common.GT_INSTANCE_REGRESSION_KEY].shape.as_list(), - [769, 769, 2]) - self.assertListEqual(sample[common.GT_IS_CROWD].shape.as_list(), [769, 769]) - self.assertListEqual(sample[common.GT_THING_ID_MASK_KEY].shape.as_list(), - [769, 769]) - self.assertListEqual(sample[common.GT_THING_ID_CLASS_KEY].shape.as_list(), - [128]) - self.assertListEqual( - sample[common.SEMANTIC_LOSS_WEIGHT_KEY].shape.as_list(), [769, 769]) - self.assertListEqual(sample[common.CENTER_LOSS_WEIGHT_KEY].shape.as_list(), - [769, 769]) - self.assertListEqual( - sample[common.REGRESSION_LOSS_WEIGHT_KEY].shape.as_list(), - [769, 769]) - - gt_sem = sample[common.GT_SEMANTIC_KEY] - gt_pan = sample[common.GT_PANOPTIC_KEY] - gt_center = tf.cast(sample[common.GT_INSTANCE_CENTER_KEY] * 255, tf.uint8) - gt_is_crowd = sample[common.GT_IS_CROWD] - gt_thing_id_mask = sample[common.GT_THING_ID_MASK_KEY] - gt_thing_id_class = sample[common.GT_THING_ID_CLASS_KEY] - image = tf.cast(sample[common.IMAGE], tf.uint8) - - # semantic weights can be in range of [0, 3] in this example. 
- semantic_weights = tf.cast(sample[common.SEMANTIC_LOSS_WEIGHT_KEY] * 85, - tf.uint8) - center_weights = tf.cast(sample[common.CENTER_LOSS_WEIGHT_KEY] * 255, - tf.uint8) - offset_weights = tf.cast(sample[common.REGRESSION_LOSS_WEIGHT_KEY] * 255, - tf.uint8) - - np.testing.assert_almost_equal( - image.numpy(), - _get_groundtruth_image( - image, - self._test_target_data_dir + 'rgb_target.png')) - np.testing.assert_almost_equal( - gt_sem.numpy(), - _get_groundtruth_image( - gt_sem, - self._test_target_data_dir + 'semantic_target.png')) - # Save gt as png. Pillow is currently unable to correctly save the image as - # 32bit, but uses 16bit which overflows. - _ = _get_groundtruth_image( - gt_pan, self._test_target_data_dir + 'panoptic_target.png') - np.testing.assert_almost_equal( - gt_pan.numpy(), - _get_groundtruth_array( - gt_pan, - self._test_target_data_dir + 'panoptic_target.npy')) - np.testing.assert_almost_equal( - gt_thing_id_mask.numpy(), - _get_groundtruth_array( - gt_thing_id_mask, - self._test_target_data_dir + 'thing_id_mask_target.npy')) - np.testing.assert_almost_equal( - gt_thing_id_class.numpy(), - _get_groundtruth_array( - gt_thing_id_class, - self._test_target_data_dir + 'thing_id_class_target.npy')) - np.testing.assert_almost_equal( - gt_center.numpy(), - _get_groundtruth_image( - gt_center, - self._test_target_data_dir + 'center_target.png')) - np.testing.assert_almost_equal( - sample[common.GT_INSTANCE_REGRESSION_KEY].numpy(), - _get_groundtruth_array( - sample[common.GT_INSTANCE_REGRESSION_KEY].numpy(), - self._test_target_data_dir + 'offset_target.npy')) - np.testing.assert_array_equal( - gt_is_crowd.numpy(), - _get_groundtruth_array(gt_is_crowd.numpy(), - self._test_target_data_dir + 'is_crowd.npy')) - np.testing.assert_almost_equal( - semantic_weights.numpy(), - _get_groundtruth_image( - semantic_weights, - self._test_target_data_dir + 'semantic_weights.png')) - np.testing.assert_almost_equal( - center_weights.numpy(), - _get_groundtruth_image( - center_weights, - self._test_target_data_dir + 'center_weights.png')) - np.testing.assert_almost_equal( - offset_weights.numpy(), - _get_groundtruth_image( - offset_weights, - self._test_target_data_dir + 'offset_weights.png')) - - def test_input_generator_eval(self): - tf.random.set_seed(0) - np.random.seed(0) - small_instances = {'threshold': 4096, 'weight': 3.0} - generator = sample_generator.PanopticSampleGenerator( - dataset.CITYSCAPES_PANOPTIC_INFORMATION._asdict(), - focus_small_instances=small_instances, - is_training=False, - crop_size=[800, 800]) - input_sample = { - 'image': self._rgb_image, - 'image_name': 'test_image', - 'label': self._label, - 'height': 800, - 'width': 800 - } - sample = generator(input_sample) - - self.assertIn(common.GT_SEMANTIC_RAW, sample) - self.assertIn(common.GT_PANOPTIC_RAW, sample) - self.assertIn(common.GT_IS_CROWD_RAW, sample) - - gt_sem_raw = sample[common.GT_SEMANTIC_RAW] - gt_pan_raw = sample[common.GT_PANOPTIC_RAW] - gt_is_crowd_raw = sample[common.GT_IS_CROWD_RAW] - - self.assertListEqual(gt_sem_raw.shape.as_list(), [800, 800]) - self.assertListEqual(gt_pan_raw.shape.as_list(), [800, 800]) - self.assertListEqual(gt_is_crowd_raw.shape.as_list(), [800, 800]) - - np.testing.assert_almost_equal( - gt_sem_raw.numpy(), - _get_groundtruth_image( - gt_sem_raw, - self._test_target_data_dir + 'eval_semantic_target.png')) - np.testing.assert_almost_equal( - gt_pan_raw.numpy(), - _get_groundtruth_array( - gt_pan_raw, - self._test_target_data_dir + 'eval_panoptic_target.npy')) - 
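The `setUp` above packs the three colour channels of the ground-truth PNG into a single panoptic id via `np.dot(label, [1, 256, 256 * 256])`, i.e. id = c0 + 256*c1 + 256*256*c2 along the channel axis. A tiny sketch of that encoding and its inverse (pixel values are arbitrary):

import numpy as np

pixel = np.array([[[26, 8, 0]]], dtype=np.int64)      # one pixel with channel values (26, 8, 0)
panoptic_id = np.dot(pixel, [1, 256, 256 * 256])      # 26 + 8 * 256 + 0 = 2074
assert panoptic_id[0, 0] == 2074

c0 = panoptic_id % 256                                # recover the original channels
c1 = (panoptic_id // 256) % 256
c2 = panoptic_id // (256 * 256)
assert (c0[0, 0], c1[0, 0], c2[0, 0]) == (26, 8, 0)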
np.testing.assert_almost_equal( - gt_is_crowd_raw.numpy(), - _get_groundtruth_array(gt_is_crowd_raw, self._test_target_data_dir + - 'eval_is_crowd.npy')) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/akhaliq/deeplab2/g3doc/change_logs.md b/spaces/akhaliq/deeplab2/g3doc/change_logs.md deleted file mode 100644 index 339995cd2c82674d225d595e090c19206f73092a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/g3doc/change_logs.md +++ /dev/null @@ -1,6 +0,0 @@ -# Change logs - -* June 7th, 2021: Add hungarian matching support on TPU for MaX-DeepLab. Our - TF2 version is based on Jiquan Ngiam's original Lingvo tensorflow - implementation and Amil Merchant's TF1 version modifications. -* June 1st, 2021: "Hello, World!", DeepLab2 made publicly available. diff --git a/spaces/akhaliq/deeplab2/model/layers/positional_encodings.py b/spaces/akhaliq/deeplab2/model/layers/positional_encodings.py deleted file mode 100644 index b1db2a784dfaa6c4b9b64a7dfde6c8273f927a31..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/layers/positional_encodings.py +++ /dev/null @@ -1,243 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Implements relative [1, 2, 3] and global [3, 4] positional encodings. - -Our Axial-Deeplab [1] proposes position-sensitive self-attention which uses -relative positional encodings for query, key, and value. - -[1] Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. -[2] Self-Attention with Relative Position Representations, NAACL 2018. - Peter Shaw, Jakob Uszkoreit, Ashish Vaswani. -[3] Tensor2Tensor for Neural Machine Translation, arXiv 2018, - http://arxiv.org/abs/1803.07416. - Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, - Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, - Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, - Jakob Uszkoreit. -[4] Attention Is All You Need, NeurIPS 2017. - Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, - Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. -[5] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, - ICLR 2021. - Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, - Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, - Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. -""" - -import tensorflow as tf - -# MAX_SPAN defines the maximum shape of positional encoding. It is set as a -# large constant so that we can easily load and use models with global or -# different local spans, but it should not be too large so that it takes a -# reasonable amount of memory. The value 255 is larger than almost all span -# choices (e.g. 65 for local attention, 129, 193, etc.) so 255 is large enough. 
-# 257 will be a good choice for gpu, but 255 is more efficient on TPU which pads -# tensors to 128x. -MAX_SPAN = 255 - - -def _compute_relative_distance_matrix(query_length, key_length): - """Computes a relative distance matrix between queries and keys. - - We assume that the queries and the keys are centered, i.e., - key_length = memory_flange + query_length + memory_flange. - - The function is based on the _generate_relative_positions_matrix function in - common_attention.py of tensor2tensor codebase: - https://github.com/tensorflow/tensor2tensor/blob/5623deb79cfcd28f8f8c5463b58b5bd76a81fd0d/tensor2tensor/layers/common_attention.py#L1670 - - Args: - query_length: An integer, the length of queries. - key_length: An integer, the length of keys. - - Returns: - distance_matrix: A [query_length, key_length] tensor. - - Raises: - ValueError: If (key_length - query_length) is odd, i.e., the assumption does - not hold. - """ - if (key_length - query_length) % 2: - raise ValueError('Key_length should be query_length + 2 * memory_flange.') - key_index = tf.range(key_length) - query_index = tf.range(query_length) + (key_length - query_length) // 2 - distance_matrix = key_index[None, :] - query_index[:, None] - # Shift the distance_matrix so that it is >= 0. Each entry of the - # distance_matrix distance will index a relative positional embedding. - distance_matrix = distance_matrix + MAX_SPAN - 1 - if query_length + (key_length - query_length) // 2 > MAX_SPAN: - tf.logging.warn('Axial attention span is larger than MAX_SPAN. In this ' - 'case, we use a single shared embedding for all positions ' - 'beyond this relative distance. Please make sure, this ' - 'behavior is intended.') - distance_matrix = tf.clip_by_value(distance_matrix, 0, MAX_SPAN * 2 - 2) - return distance_matrix - - -class RelativePositionalEncoding(tf.keras.layers.Layer): - """Generates relative positional encoding. - - The function is based on the _generate_relative_positions_embeddings function - in common_attention.py of tensor2tensor codebase: - https://github.com/tensorflow/tensor2tensor/blob/5623deb79cfcd28f8f8c5463b58b5bd76a81fd0d/tensor2tensor/layers/common_attention.py#L1691 - """ - - def __init__(self, query_length, key_length, depth, num_heads, name, - initialization_std=1.0, conv_kernel_weight_decay=0.0): - """Initializes a relative position encoding layer. - - Args: - query_length: An integer, the length of queries. - key_length: An integer, the length of keys. - depth: An integer, the number of embedding channels per head. - num_heads: An integer, the number of heads in multi-head attention. - name: A string, the name of the embedding. - initialization_std: A float, the initialization std for the embedding. - conv_kernel_weight_decay: A float, the weight decay for convolution - kernels. - - Returns: - output: A [num_heads, query, key, depth] tensor, the relative positional - encodings for each head and each query-key-pair. 
- """ - super(RelativePositionalEncoding, self).__init__(name=name) - self._initializer = tf.keras.initializers.TruncatedNormal( - stddev=initialization_std) - self._regularizer = tf.keras.regularizers.l2(conv_kernel_weight_decay) - - self._relative_distance_matrix = _compute_relative_distance_matrix( - query_length, key_length) - self._num_heads = num_heads - self._embedding_shape = (MAX_SPAN * 2 - 1, depth) - - def build(self, input_shape): - """Builds the embedding weight.""" - del input_shape - self._embeddings = self.add_weight( - shape=self._embedding_shape, - initializer=self._initializer, trainable=True, - name='embeddings', - regularizer=self._regularizer) - - def call(self, inputs): - """A forward pass that gathers the relative positional encoding.""" - del inputs - # Gather the embeddings according to the relative distances. - embeddings = tf.gather(self._embeddings, self._relative_distance_matrix) - return tf.tile(tf.expand_dims(embeddings, axis=0), - [self._num_heads, 1, 1, 1]) - - -class AddAbsolutePositionalEncoding(tf.keras.layers.Layer): - """Adds a learnable absolute positional encoding to the input feature. - - Supports both 1D and 2D versions of the positional encoding: (1) 1D positional - encoding represents each row index with an embedding, and represents each - column index with another embedding. This results in a total of (height + - width) learnable embedding vectors. (2) 2D positional encoding adds - independent embeddings to each input grid position. This choice uses a total - of (height * width) learnable embedding vectors. - """ - - def __init__(self, name, positional_encoding_type=None, - bn_layer=tf.keras.layers.BatchNormalization, - conv_kernel_weight_decay=0.0): - """Initializes an AddAbsolutePositionEmbedding layer. - - Args: - name: A string specifying the name of the layer. - positional_encoding_type: A string, type of the positional encoding. - Support '2D', '1D', 'none', and None. The feature is returned as is if - positional_encoding_type is 'none' or None. - bn_layer: An optional tf.keras.layers.Layer that computes the - normalization (default: tf.keras.layers.BatchNormalization). - conv_kernel_weight_decay: A float, the weight decay for convolution - kernels. - - Raises: - ValueError: If positional_encoding_type is not one of '1D', '2D', 'none', - and None. - """ - super(AddAbsolutePositionalEncoding, self).__init__(name=name) - if not any([positional_encoding_type is None, - positional_encoding_type.lower() == 'none', - positional_encoding_type.lower() == '2d', - positional_encoding_type.lower() == '1d']): - raise ValueError(positional_encoding_type + ' is not supported.') - self._positional_encoding_type = positional_encoding_type - # This initialization std is tuned for global attention, but it does not - # seem to be a sensitive hyper-parameter, since we use batch norm on the - # positional encodings. 
- self._initializer = tf.keras.initializers.TruncatedNormal(stddev=0.2) - self._kernel_regularizer = tf.keras.regularizers.l2( - conv_kernel_weight_decay) - self._bn_layer = bn_layer - - def build(self, input_shape): - """Builds the layer weights whose shape depends on the 4D input shape.""" - _, height, width, channel = input_shape - if self._positional_encoding_type.lower() == '2d': - self._embeddings = self.add_weight( - shape=(1, height, width, channel), - initializer=self._initializer, trainable=True, - name='embeddings', - regularizer=self._kernel_regularizer) - self._batch_norm = self._bn_layer(axis=-1, name='batch_norm') - elif self._positional_encoding_type.lower() == '1d': - # Generate separable positional encodings for the height axis and the - # width axis. - self._height_axis_embeddings = self.add_weight( - shape=(1, height, 1, channel), - initializer=self._initializer, trainable=True, - name='height_axis_embeddings', - regularizer=self._kernel_regularizer) - self._height_axis_batch_norm = self._bn_layer( - axis=-1, name='height_axis_batch_norm') - self._width_axis_embeddings = self.add_weight( - shape=(1, height, 1, channel), - initializer=self._initializer, trainable=True, - name='width_axis_embeddings', - regularizer=self._kernel_regularizer) - self._width_axis_batch_norm = self._bn_layer( - axis=-1, name='width_axis_batch_norm') - - def call(self, features, training=False): - """Performs a forward pass. - - Args: - features: An input [batch, height, width, channels] tensor. - training: A boolean, whether the model is in training mode. - - Returns: - output: The sum of the input feature and learnable positional encodings. - """ - if (self._positional_encoding_type is None or - self._positional_encoding_type.lower() == 'none'): - return features - elif self._positional_encoding_type.lower() == '2d': - positional_encoding = self._batch_norm(self._embeddings, - training=training) - elif self._positional_encoding_type.lower() == '1d': - height_axis_positional_encoding = self._height_axis_batch_norm( - self._height_axis_embeddings, training=training) - width_axis_positional_encoding = self._width_axis_batch_norm( - self._width_axis_embeddings, training=training) - positional_encoding = (height_axis_positional_encoding + - width_axis_positional_encoding) - return features + positional_encoding diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py deleted file mode 100644 index 6878387a03699b95901624714d057d9d4ecfe1fe..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2019 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. 
-# -import logging - -__version__ = '0.3.4' - -class DistlibException(Exception): - pass - -try: - from logging import NullHandler -except ImportError: # pragma: no cover - class NullHandler(logging.Handler): - def handle(self, record): pass - def emit(self, record): pass - def createLock(self): self.lock = None - -logger = logging.getLogger(__name__) -logger.addHandler(NullHandler()) diff --git a/spaces/algomuffin/jojo_fork/e4e/models/psp.py b/spaces/algomuffin/jojo_fork/e4e/models/psp.py deleted file mode 100644 index 36c0b2b7b3fdd28bc32272d0d8fcff24e4848355..0000000000000000000000000000000000000000 --- a/spaces/algomuffin/jojo_fork/e4e/models/psp.py +++ /dev/null @@ -1,99 +0,0 @@ -import matplotlib - -matplotlib.use('Agg') -import torch -from torch import nn -from e4e.models.encoders import psp_encoders -from e4e.models.stylegan2.model import Generator -from e4e.configs.paths_config import model_paths - - -def get_keys(d, name): - if 'state_dict' in d: - d = d['state_dict'] - d_filt = {k[len(name) + 1:]: v for k, v in d.items() if k[:len(name)] == name} - return d_filt - - -class pSp(nn.Module): - - def __init__(self, opts, device): - super(pSp, self).__init__() - self.opts = opts - self.device = device - # Define architecture - self.encoder = self.set_encoder() - self.decoder = Generator(opts.stylegan_size, 512, 8, channel_multiplier=2) - self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256)) - # Load weights if needed - self.load_weights() - - def set_encoder(self): - if self.opts.encoder_type == 'GradualStyleEncoder': - encoder = psp_encoders.GradualStyleEncoder(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'Encoder4Editing': - encoder = psp_encoders.Encoder4Editing(50, 'ir_se', self.opts) - else: - raise Exception('{} is not a valid encoders'.format(self.opts.encoder_type)) - return encoder - - def load_weights(self): - if self.opts.checkpoint_path is not None: - print('Loading e4e over the pSp framework from checkpoint: {}'.format(self.opts.checkpoint_path)) - ckpt = torch.load(self.opts.checkpoint_path, map_location='cpu') - self.encoder.load_state_dict(get_keys(ckpt, 'encoder'), strict=True) - self.decoder.load_state_dict(get_keys(ckpt, 'decoder'), strict=True) - self.__load_latent_avg(ckpt) - else: - print('Loading encoders weights from irse50!') - encoder_ckpt = torch.load(model_paths['ir_se50']) - self.encoder.load_state_dict(encoder_ckpt, strict=False) - print('Loading decoder weights from pretrained!') - ckpt = torch.load(self.opts.stylegan_weights) - self.decoder.load_state_dict(ckpt['g_ema'], strict=False) - self.__load_latent_avg(ckpt, repeat=self.encoder.style_count) - - def forward(self, x, resize=True, latent_mask=None, input_code=False, randomize_noise=True, - inject_latent=None, return_latents=False, alpha=None): - if input_code: - codes = x - else: - codes = self.encoder(x) - # normalize with respect to the center of an average face - if self.opts.start_from_latent_avg: - if codes.ndim == 2: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :] - else: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1) - - if latent_mask is not None: - for i in latent_mask: - if inject_latent is not None: - if alpha is not None: - codes[:, i] = alpha * inject_latent[:, i] + (1 - alpha) * codes[:, i] - else: - codes[:, i] = inject_latent[:, i] - else: - codes[:, i] = 0 - - input_is_latent = not input_code - images, result_latent = self.decoder([codes], - input_is_latent=input_is_latent, - randomize_noise=randomize_noise, - 
return_latents=return_latents) - - if resize: - images = self.face_pool(images) - - if return_latents: - return images, result_latent - else: - return images - - def __load_latent_avg(self, ckpt, repeat=None): - if 'latent_avg' in ckpt: - self.latent_avg = ckpt['latent_avg'].to(self.device) - if repeat is not None: - self.latent_avg = self.latent_avg.repeat(repeat, 1) - else: - self.latent_avg = None diff --git a/spaces/ali-ghamdan/deoldify/deoldify/augs.py b/spaces/ali-ghamdan/deoldify/deoldify/augs.py deleted file mode 100644 index 046618e9dcf3b0274b711611b24722984e7d8d29..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/deoldify/augs.py +++ /dev/null @@ -1,29 +0,0 @@ -import random - -from fastai.vision.image import TfmPixel - -# Contributed by Rani Horev. Thank you! -def _noisify( - x, pct_pixels_min: float = 0.001, pct_pixels_max: float = 0.4, noise_range: int = 30 -): - if noise_range > 255 or noise_range < 0: - raise Exception("noise_range must be between 0 and 255, inclusively.") - - h, w = x.shape[1:] - img_size = h * w - mult = 10000.0 - pct_pixels = ( - random.randrange(int(pct_pixels_min * mult), int(pct_pixels_max * mult)) / mult - ) - noise_count = int(img_size * pct_pixels) - - for ii in range(noise_count): - yy = random.randrange(h) - xx = random.randrange(w) - noise = random.randrange(-noise_range, noise_range) / 255.0 - x[:, yy, xx].add_(noise) - - return x - - -noisify = TfmPixel(_noisify) diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DOMImplementation.pod b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DOMImplementation.pod deleted file mode 100644 index cb5e34df9ccb114665e7578fbbbefc8c4cb4b054..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/DOMImplementation.pod +++ /dev/null @@ -1,24 +0,0 @@ -=head1 NAME - -XML::DOM::DOMImplementation - Information about XML::DOM implementation - -=head1 DESCRIPTION - -The DOMImplementation interface provides a number of methods for -performing operations that are independent of any particular instance -of the document object model. - -The DOM Level 1 does not specify a way of creating a document instance, -and hence document creation is an operation specific to an -implementation. Future Levels of the DOM specification are expected to -provide methods for creating documents directly. - -=head2 METHODS - -=over 4 - -=item hasFeature (feature, version) - -Returns 1 if and only if feature equals "XML" and version equals "1.0". - -=back diff --git a/spaces/aliabid94/AutoGPT/run_continuous.sh b/spaces/aliabid94/AutoGPT/run_continuous.sh deleted file mode 100644 index 1f4436c88503172c0578b15a8447ed8268502578..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/run_continuous.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash - -./run.sh --continuous $@ diff --git a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers.py b/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers.py deleted file mode 100644 index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/lib/uvr5_pack/lib_v5/layers.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/api/util.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/api/util.py deleted file mode 100644 index 63117aafc1c4ef8f7dabba4734d13353a1b34afc..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/api/util.py +++ /dev/null @@ -1,71 +0,0 @@ -import time -import traceback -from threading import Thread -from typing import Callable, Optional - -from modules.text_generation import encode - 
- -def build_parameters(body): - prompt = body['prompt'] - - prompt_lines = [k.strip() for k in prompt.split('\n')] - max_context = body.get('max_context_length', 2048) - while len(prompt_lines) >= 0 and len(encode('\n'.join(prompt_lines))) > max_context: - prompt_lines.pop(0) - - prompt = '\n'.join(prompt_lines) - - generate_params = { - 'max_new_tokens': int(body.get('max_new_tokens', body.get('max_length', 200))), - 'do_sample': bool(body.get('do_sample', True)), - 'temperature': float(body.get('temperature', 0.5)), - 'top_p': float(body.get('top_p', 1)), - 'typical_p': float(body.get('typical_p', body.get('typical', 1))), - 'repetition_penalty': float(body.get('repetition_penalty', body.get('rep_pen', 1.1))), - 'encoder_repetition_penalty': float(body.get('encoder_repetition_penalty', 1.0)), - 'top_k': int(body.get('top_k', 0)), - 'min_length': int(body.get('min_length', 0)), - 'no_repeat_ngram_size': int(body.get('no_repeat_ngram_size', 0)), - 'num_beams': int(body.get('num_beams', 1)), - 'penalty_alpha': float(body.get('penalty_alpha', 0)), - 'length_penalty': float(body.get('length_penalty', 1)), - 'early_stopping': bool(body.get('early_stopping', False)), - 'seed': int(body.get('seed', -1)), - 'add_bos_token': bool(body.get('add_bos_token', True)), - 'truncation_length': int(body.get('truncation_length', 2048)), - 'ban_eos_token': bool(body.get('ban_eos_token', False)), - 'skip_special_tokens': bool(body.get('skip_special_tokens', True)), - 'custom_stopping_strings': '', # leave this blank - 'stopping_strings': body.get('stopping_strings', []), - } - - return generate_params - - -def try_start_cloudflared(port: int, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None): - Thread(target=_start_cloudflared, args=[ - port, max_attempts, on_start], daemon=True).start() - - -def _start_cloudflared(port: int, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None): - try: - from flask_cloudflared import _run_cloudflared - except ImportError: - print('You should install flask_cloudflared manually') - raise Exception( - 'flask_cloudflared not installed. 
Make sure you installed the requirements.txt for this extension.') - - for _ in range(max_attempts): - try: - public_url = _run_cloudflared(port, port + 1) - - if on_start: - on_start(public_url) - - return - except Exception: - traceback.print_exc() - time.sleep(3) - - raise Exception('Could not start cloudflared.') diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py deleted file mode 100644 index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_B_384_22k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/ui_postprocessing.py b/spaces/aodianyun/stable-diffusion-webui/modules/ui_postprocessing.py deleted file mode 100644 index 7789347028ecb309607038d0bc79eff934f45711..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/ui_postprocessing.py +++ /dev/null @@ -1,57 +0,0 @@ -import gradio as gr -from modules import scripts_postprocessing, scripts, shared, gfpgan_model, codeformer_model, ui_common, postprocessing, call_queue -import modules.generation_parameters_copypaste as parameters_copypaste - - -def create_ui(): - tab_index = gr.State(value=0) - - with gr.Row().style(equal_height=False, variant='compact'): - with gr.Column(variant='compact'): - with gr.Tabs(elem_id="mode_extras"): - with gr.TabItem('Single Image', elem_id="extras_single_tab") as tab_single: - extras_image = gr.Image(label="Source", source="upload", interactive=True, type="pil", elem_id="extras_image") - - with gr.TabItem('Batch Process', elem_id="extras_batch_process_tab") as tab_batch: - image_batch = gr.File(label="Batch Process", file_count="multiple", interactive=True, type="file", elem_id="extras_image_batch") - - with gr.TabItem('Batch from Directory', elem_id="extras_batch_directory_tab") as tab_batch_dir: - extras_batch_input_dir = gr.Textbox(label="Input directory", **shared.hide_dirs, placeholder="A directory on the same machine where the server is running.", elem_id="extras_batch_input_dir") - extras_batch_output_dir = gr.Textbox(label="Output directory", **shared.hide_dirs, placeholder="Leave blank to save images to the default path.", elem_id="extras_batch_output_dir") - show_extras_results = 
gr.Checkbox(label='Show result images', value=True, elem_id="extras_show_extras_results") - - submit = gr.Button('Generate', elem_id="extras_generate", variant='primary') - - script_inputs = scripts.scripts_postproc.setup_ui() - - with gr.Column(): - result_images, html_info_x, html_info, html_log = ui_common.create_output_panel("extras", shared.opts.outdir_extras_samples) - - tab_single.select(fn=lambda: 0, inputs=[], outputs=[tab_index]) - tab_batch.select(fn=lambda: 1, inputs=[], outputs=[tab_index]) - tab_batch_dir.select(fn=lambda: 2, inputs=[], outputs=[tab_index]) - - submit.click( - fn=call_queue.wrap_gradio_gpu_call(postprocessing.run_postprocessing, extra_outputs=[None, '']), - inputs=[ - tab_index, - extras_image, - image_batch, - extras_batch_input_dir, - extras_batch_output_dir, - show_extras_results, - *script_inputs - ], - outputs=[ - result_images, - html_info_x, - html_info, - ] - ) - - parameters_copypaste.add_paste_fields("extras", extras_image, None) - - extras_image.change( - fn=scripts.scripts_postproc.image_changed, - inputs=[], outputs=[] - ) diff --git a/spaces/apat27/pox-classifier/app.py b/spaces/apat27/pox-classifier/app.py deleted file mode 100644 index db0ec51d62ab40a1a0043055f379032ff65f112f..0000000000000000000000000000000000000000 --- a/spaces/apat27/pox-classifier/app.py +++ /dev/null @@ -1,28 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -from fastcore.all import * - - -def label(x): - return x.parent.name - - -learn = load_learner('./model.pkl') - -categories = ('Chicken Pox', 'Cow Pox', 'Healthy', - 'Measles', 'Monkey Pox', 'Small Pox') - - -def classify(img): - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - - -image = gr.inputs.Image(shape=(224, 224)) -label = gr.outputs.Label() - -examples = ['examples/healthy.jpg', - 'examples/monkeypox.jpg', 'examples/chickenpox.jpg'] -intf = gr.Interface(fn=classify, title='Pox Classifier', inputs=image, - outputs=label, examples=examples) -intf.launch(inline=False) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/dataclass/constants.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/dataclass/constants.py deleted file mode 100644 index 5af92f2b3aa51e460f0b045a348d3766f93eb90b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/dataclass/constants.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from enum import Enum, EnumMeta -from typing import List - - -class StrEnumMeta(EnumMeta): - # this is workaround for submitit pickling leading to instance checks failing in hydra for StrEnum, see - # https://github.com/facebookresearch/hydra/issues/1156 - @classmethod - def __instancecheck__(cls, other): - return "enum" in str(type(other)) - - -class StrEnum(Enum, metaclass=StrEnumMeta): - def __str__(self): - return self.value - - def __eq__(self, other: str): - return self.value == other - - def __repr__(self): - return self.value - - def __hash__(self): - return hash(str(self)) - - -def ChoiceEnum(choices: List[str]): - """return the Enum class used to enforce list of choices""" - return StrEnum("Choices", {k: k for k in choices}) - - -LOG_FORMAT_CHOICES = ChoiceEnum(["json", "none", "simple", "tqdm"]) -DDP_BACKEND_CHOICES = ChoiceEnum( - [ - "c10d", # alias for pytorch_ddp - "fully_sharded", # FullyShardedDataParallel from fairscale - "legacy_ddp", - "no_c10d", # alias for legacy_ddp - "pytorch_ddp", - "slowmo", - ] -) -DDP_COMM_HOOK_CHOICES = ChoiceEnum(["none", "fp16"]) -DATASET_IMPL_CHOICES = ChoiceEnum(["raw", "lazy", "cached", "mmap", "fasta", "huffman"]) -GENERATION_CONSTRAINTS_CHOICES = ChoiceEnum(["ordered", "unordered"]) -GENERATION_DECODING_FORMAT_CHOICES = ChoiceEnum( - ["unigram", "ensemble", "vote", "dp", "bs"] -) -ZERO_SHARDING_CHOICES = ChoiceEnum(["none", "os"]) -PIPELINE_CHECKPOINT_CHOICES = ChoiceEnum(["always", "never", "except_last"]) -PRINT_ALIGNMENT_CHOICES = ChoiceEnum(["hard", "soft"]) diff --git a/spaces/asd998877/TsGpt/modules/overwrites.py b/spaces/asd998877/TsGpt/modules/overwrites.py deleted file mode 100644 index 035a4a52722d66ee28af1c05231ad1cea3339ef5..0000000000000000000000000000000000000000 --- a/spaces/asd998877/TsGpt/modules/overwrites.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html -from gradio_client import utils as client_utils - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, - y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple], - ) -> List[List[str | Dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. 
Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}" - - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0], "user"), - self._postprocess_chat_messages(message_pair[1], "bot"), - ] - ) - return processed_messages - -def postprocess_chat_messages( - self, chat_message: str | Tuple | List | None, message_type: str - ) -> str | Dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - filepath = chat_message[0] - mime_type = client_utils.get_mimetype(filepath) - filepath = self.make_temp_copy_if_needed(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - if message_type == "bot": - if not detect_converted_mark(chat_message): - chat_message = convert_mdtext(chat_message) - elif message_type == "user": - if not detect_converted_mark(chat_message): - chat_message = convert_asis(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/aubmindlab/Arabic-NLP/backend/sa.py b/spaces/aubmindlab/Arabic-NLP/backend/sa.py deleted file mode 100644 index 6410db09985ebd93cdb8a9281e647c5e9bc2b7a0..0000000000000000000000000000000000000000 --- a/spaces/aubmindlab/Arabic-NLP/backend/sa.py +++ /dev/null @@ -1,76 +0,0 @@ -import streamlit as st -from .services import SentimentAnalyzer -from functools import lru_cache - -# @st.cache(allow_output_mutation=False, hash_funcs={Tokenizer: str}) -@lru_cache(maxsize=1) -def load_text_generator(): - predictor = SentimentAnalyzer() - return predictor - - -predictor = load_text_generator() - - -def write(): - st.markdown( - """ - # Arabic Sentiment Analysis - - This is a simple sentiment analysis app that uses the prediction kernel from Wissam's (me) submission that won the [Arabic Senitment Analysis competition @ KAUST](https://www.kaggle.com/c/arabic-sentiment-analysis-2021-kaust) - """ - ) - if st.checkbox("More info: "): - st.markdown( - """ - ### Submission Description: - - My submission is based on an ensemble of 5 models with varying preprocessing, and classifier design. All model variants are built over MARBERT [1], which is a BERT-based model pre-trained on 1B dialectal Arabic tweets. - - For preprocessing, all models shared the following steps: - - Replacing user mentions with “USER” and links with “URL” - - Replacing the “#” with “HASH” - - Removed the underscore character since it is missing the MARBERT vocabulary. - - Removed diacritics and elongations (tatweel) - - Spacing out emojis - - For classifier design, all models use a dense layer on top of MARBERT unless otherwise specified. 
Model training is done by hyperparameter grid-search with 5-fold cross-validation with the following search space: - - Learning rate: [2e-5,3e-5,4e-5] - - Batch size: 128 - - Maximum sequence length: 64 - - Epochs: 3 (we select the best epoch for the final prediction) - - Warmup ratio: [0,0.1] - - Seed: [1,25,42,123,666] - - Model I is a vanilla variant with only the preprocessing steps mention above applied. Model II enhances the emoji representation by replacing OOV emojis with ones that have similar meaning, for example 💊  😷. - We noticed the repetitive use of “السلام عليكم” and “ورحمة الله وبركاته” in neutral tweets, especially when users were directing questions to business accounts. This could confuse the classifier, if it encountered these words in a for example a negative tweet, hence in Model III we removed variation of the phrase mentioned before using fuzzy matching algorithms. - - In Model IV, we tried to help the model by appending a sarcasm label to the input. We first trained a separate MARBERT on the ArSarcasm [2] dataset, and then used it to label the training and test sets. - - Model V uses the vanilla preprocessing approach, but instead of a dense layer built on top of MARBERT, we follow the approach detailed by Safaya et.al. [3] which uses a CNN-based classifier instead. - - For the final prediction, we first average the predictions of the 5 models from cross-validation (this is done for each model separately), we then average the results from the 5 model variants. We observed that the distribution of the predicted sentiment classes, doesn’t quite match the true distribution, this is due to the model preferring the neutral class over the positive class. To counter that, we apply what we call Label-Weighted average where during after the final averaging we rescale the score with the following weights 1.57,0.98 and 0.93 for positive, neutral, and negative (note that the weights were determined empirically). - - 1- https://aclanthology.org/2021.acl-long.551/ - - 2- https://github.com/iabufarha/ArSarcasm - - 3- https://github.com/alisafaya/OffensEval2020 - - - """ - ) - input_text = st.text_input( - "Enter your text here:", - ) - if st.button("Predict"): - with st.spinner("Predicting..."): - prediction, score, all_score = predictor.predict([input_text]) - st.write(f"Result: {prediction[0]}") - detailed_score = { - "Positive": all_score[0][0], - "Neutral": all_score[0][1], - "Negative": all_score[0][2], - } - st.write("All scores:") - st.write(detailed_score) diff --git a/spaces/banana-projects/web3d/node_modules/three/src/lights/Light.js b/spaces/banana-projects/web3d/node_modules/three/src/lights/Light.js deleted file mode 100644 index e4db9c009b2942622a744d24582bd6d2d791d74d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/lights/Light.js +++ /dev/null @@ -1,62 +0,0 @@ -import { Object3D } from '../core/Object3D.js'; -import { Color } from '../math/Color.js'; - -/** - * @author mrdoob / http://mrdoob.com/ - * @author alteredq / http://alteredqualia.com/ - */ - -function Light( color, intensity ) { - - Object3D.call( this ); - - this.type = 'Light'; - - this.color = new Color( color ); - this.intensity = intensity !== undefined ? 
intensity : 1; - - this.receiveShadow = undefined; - -} - -Light.prototype = Object.assign( Object.create( Object3D.prototype ), { - - constructor: Light, - - isLight: true, - - copy: function ( source ) { - - Object3D.prototype.copy.call( this, source ); - - this.color.copy( source.color ); - this.intensity = source.intensity; - - return this; - - }, - - toJSON: function ( meta ) { - - var data = Object3D.prototype.toJSON.call( this, meta ); - - data.object.color = this.color.getHex(); - data.object.intensity = this.intensity; - - if ( this.groundColor !== undefined ) data.object.groundColor = this.groundColor.getHex(); - - if ( this.distance !== undefined ) data.object.distance = this.distance; - if ( this.angle !== undefined ) data.object.angle = this.angle; - if ( this.decay !== undefined ) data.object.decay = this.decay; - if ( this.penumbra !== undefined ) data.object.penumbra = this.penumbra; - - if ( this.shadow !== undefined ) data.object.shadow = this.shadow.toJSON(); - - return data; - - } - -} ); - - -export { Light }; diff --git a/spaces/beihai/PDF-Table-Extractor/git.sh b/spaces/beihai/PDF-Table-Extractor/git.sh deleted file mode 100644 index 0aabfded99e3ead2b6b903be6f4514d94213e7c5..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/git.sh +++ /dev/null @@ -1,3 +0,0 @@ -git add . -git commit -m "1.0" -git push \ No newline at end of file diff --git a/spaces/bingbing520/ChatGPT/readme/README_ja.md b/spaces/bingbing520/ChatGPT/readme/README_ja.md deleted file mode 100644 index fc56eec0b81c22ff0a49e3960aa52ffd7d6dc5cb..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
- - 简体中文 | English | 日本語 -
- -

川虎 Chat 🐯 Chuanhu Chat

-
- - Logo - - -

-

ChatGPT/ChatGLM/LLaMAなどのLLMのための軽量でユーザーフレンドリーなWeb-UI

-

- - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

- ストリーム出力/会話回数無制限/履歴保存/プリセットプロンプト/ファイルへの質問チャット
- ウェブ検索/LaTeXレンダリング/表レンダリング/コードハイライト
- オートダークモード/アダプティブ・ウェブ・インターフェイス/WeChatライク・テーマ
- マルチパラメーターチューニング/マルチAPI-Key対応/マルチユーザー対応
- GPT-4対応/LLMのローカルデプロイ可能。 -

- 動画チュートリアル - · - 2.0 イントロダクション - · - 3.0 イントロダクション & チュートリアル - || - オンライントライアル - · - ワンクリックデプロイ -

-

- Animation Demo -

-

-
- -## 使う上でのTips - -- ChatGPTをより適切に制御するために、システムプロンプトを使用できます。 -- プロンプトテンプレートを使用するには、プロンプトテンプレートコレクションを選択し、ドロップダウンメニューから特定のプロンプトを選択。回答が不十分な場合は、`🔄再生成`ボタンを使って再試行します。 -- 入力ボックスで改行するには、Shift + Enterキーを押してください。 -- 入力履歴を素早く切り替えるには、入力ボックスで キーを押す。 -- プログラムをサーバにデプロイするには、プログラムの最終行を `demo.launch(server_name="0.0.0.0", server_port=)`に変更します。 -- 共有リンクを取得するには、プログラムの最後の行を `demo.launch(share=True)` に変更してください。なお、公開リンクでアクセスするためには、プログラムが実行されている必要があることに注意してください。 -- Hugging Face Spacesで使用する場合: より速く、より安全に利用するために、**Duplicate Space**を使用し、自分のスペースでプログラムを実行することをお勧めします。 - -## インストール - -```shell -git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git -cd ChuanhuChatGPT -pip install -r requirements.txt -``` - -次に `config_example.json`をコピーして `config.json`にリネームし、そのファイルにAPI-Keyなどの設定を記入する。 - -```shell -python ChuanhuChatbot.py -``` - -ブラウザのウィンドウが開き、ChatGPTとチャットできるようになります。 - -> **Note** -> -> 詳しい手順は[wikiページ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程)をご確認ください。 - -## トラブルシューティング - -問題が発生した場合は、まずこのプロジェクトの最新の変更点を手動で引っ張ってみるのがよいでしょう。その手順は以下の通りです: - -1. ウェブページの `Download ZIP` をクリックして最新のコードアーカイブをダウンロードするか、または - ```shell - git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f - ``` -2. 新しい依存関係が導入されている可能性があるため、依存関係を再度インストールしてみてください。 - ``` - pip install -r requirements.txt - ``` -3. Gradioを更新 - ``` - pip install gradio --upgrade --force-reinstall - ``` - -一般的に、以下の手順でほとんどの問題を解決することができます。 - -それでも問題が解決しない場合は、こちらのページをご参照ください: [よくある質問(FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) - -このページでは、考えられるほぼすべての問題点と解決策を掲載しています。よくお読みください。 - -## More Information - -より詳細な情報は、[wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki) をご覧ください。: - -- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization) -- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) -- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目) -- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志) -- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可) - -## Starchart - -[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date) - -## Contributors - - - - - -## Sponsor - -🐯 この企画が役に立ったら、遠慮なくコーラかコーヒーでもおごってください〜。 - -Buy Me A Coffee - -image diff --git a/spaces/bioriAsaeru/text-to-voice/Casio Classpad 300 Emulator Gba A Review of the Program and Its Performance.md b/spaces/bioriAsaeru/text-to-voice/Casio Classpad 300 Emulator Gba A Review of the Program and Its Performance.md deleted file mode 100644 index d355267465f049b391e0ff1e0003dcb3e5c6edad..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Casio Classpad 300 Emulator Gba A Review of the Program and Its Performance.md +++ /dev/null @@ -1,6 +0,0 @@ -

Akruti 6.0 Keygen Free Download diddy arabes orquest


Download Ziphttps://urloso.com/2uyRXW



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Ewqlso Gold Edition Setup Keygen Crack Tips and Tricks for Creating Realistic and Expressive Orchestral Music with Ewqlso Gold Edition.md b/spaces/bioriAsaeru/text-to-voice/Ewqlso Gold Edition Setup Keygen Crack Tips and Tricks for Creating Realistic and Expressive Orchestral Music with Ewqlso Gold Edition.md deleted file mode 100644 index bdd7bc04f3b041fe2ede22d286da245424b3824f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Ewqlso Gold Edition Setup Keygen Crack Tips and Tricks for Creating Realistic and Expressive Orchestral Music with Ewqlso Gold Edition.md +++ /dev/null @@ -1,6 +0,0 @@ -

ewqlso gold edition setup keygen crack


Download ►►► https://urloso.com/2uyObC



- - aaccfb2cb3
-
-
-

diff --git a/spaces/blanchon/gaussian-splatting-kit/services/http.py b/spaces/blanchon/gaussian-splatting-kit/services/http.py deleted file mode 100644 index 71a90757ee1b3576cafea116689875defe73ef02..0000000000000000000000000000000000000000 --- a/spaces/blanchon/gaussian-splatting-kit/services/http.py +++ /dev/null @@ -1,26 +0,0 @@ -from pathlib import Path -import requests -from rich.console import Console - -console = Console() - -def download_file(url: str, file_path: Path) -> Path: - console.log(f"📥 Downloading File from URL: {url}") - response = requests.get(url, stream=True) - if response.status_code == 200: - with file_path.open('wb') as file: - for chunk in response.iter_content(chunk_size=1024): - if chunk: - file.write(chunk) - console.log(f"✅ File Successfully Downloaded! Path: {file_path}") - else: - console.log(f"🚨 Error downloading file from {url}.") - return file_path - -def download_api(url: str, file_path: Path) -> Path: - # Download the video from internet - video_path = file_path + '/video.mp4' - console.log("🌟 Starting the Video Download...") - video_path = download_file(url, video_path) - console.log(f"🎉 Video Download Complete! Path: {video_path}") - return video_path \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py deleted file mode 100644 index 64ad3f8c77afe1ab5908e407ad14d4879e1b1ad1..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - launcher.bind_(conditioner='clapemb2music') - - fsdp = {'autocast': False, 'fsdp.use': True} - cache_path = {'conditioners.description.clap.cache_path': - '/fsx-audio-craft-llm/jadecopet/experiments/audiocraft/caches/clap_embed_music'} - text_wav_training_opt = {'conditioners.description.clap.text_p': 0.5} - - launcher.bind_(fsdp) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - launcher() - launcher(text_wav_training_opt) - launcher(cache_path) - launcher(cache_path, text_wav_training_opt) diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/model_cards/MUSICGEN_MODEL_CARD.md b/spaces/brainblow/AudioCreator_Music-Audio_Generation/model_cards/MUSICGEN_MODEL_CARD.md deleted file mode 100644 index 10ba9f9790841be06cd3e459cf667c1af6291343..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/model_cards/MUSICGEN_MODEL_CARD.md +++ /dev/null @@ -1,90 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is the version 1 of the model. 
- -**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation. - -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details:** See [our paper][arxiv] - -**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [GitHub repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. - -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. - -**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model - -Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. - -## Training datasets - -The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. - -## Evaluation results - -Below are the objective metrics obtained on MusicCaps with the released model. 
Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper. - -| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | -|---|---|---|---|---| -| facebook/musicgen-small | 4.88 | 1.28 | 0.27 | - | -| facebook/musicgen-medium | 5.14 | 1.24 | 0.28 | - | -| facebook/musicgen-large | 5.48 | 1.22 | 0.28 | - | -| facebook/musicgen-melody | 4.93 | 1.26 | 0.27 | 0.44 | - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Results section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. -- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImImagePlugin.py deleted file mode 100644 index 746743f658cf3fa2e0022ae049808eb68d3d1221..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImImagePlugin.py +++ /dev/null @@ -1,371 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# IFUNC IM file handling for PIL -# -# history: -# 1995-09-01 fl Created. -# 1997-01-03 fl Save palette images -# 1997-01-08 fl Added sequence support -# 1997-01-23 fl Added P and RGB save support -# 1997-05-31 fl Read floating point images -# 1997-06-22 fl Save floating point images -# 1997-08-27 fl Read and save 1-bit images -# 1998-06-25 fl Added support for RGB+LUT images -# 1998-07-02 fl Added support for YCC images -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 1998-12-29 fl Added I;16 support -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7) -# 2003-09-26 fl Added LA/PA support -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -import os -import re - -from . import Image, ImageFile, ImagePalette - -# -------------------------------------------------------------------- -# Standard tags - -COMMENT = "Comment" -DATE = "Date" -EQUIPMENT = "Digitalization equipment" -FRAMES = "File size (no of images)" -LUT = "Lut" -NAME = "Name" -SCALE = "Scale (x,y)" -SIZE = "Image size (x*y)" -MODE = "Image type" - -TAGS = { - COMMENT: 0, - DATE: 0, - EQUIPMENT: 0, - FRAMES: 0, - LUT: 0, - NAME: 0, - SCALE: 0, - SIZE: 0, - MODE: 0, -} - -OPEN = { - # ifunc93/p3cfunc formats - "0 1 image": ("1", "1"), - "L 1 image": ("1", "1"), - "Greyscale image": ("L", "L"), - "Grayscale image": ("L", "L"), - "RGB image": ("RGB", "RGB;L"), - "RLB image": ("RGB", "RLB"), - "RYB image": ("RGB", "RLB"), - "B1 image": ("1", "1"), - "B2 image": ("P", "P;2"), - "B4 image": ("P", "P;4"), - "X 24 image": ("RGB", "RGB"), - "L 32 S image": ("I", "I;32"), - "L 32 F image": ("F", "F;32"), - # old p3cfunc formats - "RGB3 image": ("RGB", "RGB;T"), - "RYB3 image": ("RGB", "RYB;T"), - # extensions - "LA image": ("LA", "LA;L"), - "PA image": ("LA", "PA;L"), - "RGBA image": ("RGBA", "RGBA;L"), - "RGBX image": ("RGBX", "RGBX;L"), - "CMYK image": ("CMYK", "CMYK;L"), - "YCC image": ("YCbCr", "YCbCr;L"), -} - -# ifunc95 extensions -for i in ["8", "8S", "16", "16S", "32", "32F"]: - OPEN[f"L {i} image"] = ("F", f"F;{i}") - OPEN[f"L*{i} image"] = ("F", f"F;{i}") -for i in ["16", "16L", "16B"]: - OPEN[f"L {i} image"] = (f"I;{i}", f"I;{i}") - OPEN[f"L*{i} image"] = (f"I;{i}", f"I;{i}") -for i in ["32S"]: - OPEN[f"L {i} image"] = ("I", f"I;{i}") - OPEN[f"L*{i} image"] = ("I", f"I;{i}") -for i in range(2, 33): - OPEN[f"L*{i} image"] = ("F", f"F;{i}") - - -# -------------------------------------------------------------------- -# Read IM directory - -split = re.compile(rb"^([A-Za-z][^:]*):[ \t]*(.*)[ \t]*$") - - -def number(s): - try: - return int(s) - except ValueError: - return float(s) - - -## -# Image plugin for the IFUNC IM file format. - - -class ImImageFile(ImageFile.ImageFile): - format = "IM" - format_description = "IFUNC Image Memory" - _close_exclusive_fp_after_loading = False - - def _open(self): - # Quick rejection: if there's not an LF among the first - # 100 bytes, this is (probably) not a text header. - - if b"\n" not in self.fp.read(100): - msg = "not an IM file" - raise SyntaxError(msg) - self.fp.seek(0) - - n = 0 - - # Default values - self.info[MODE] = "L" - self.info[SIZE] = (512, 512) - self.info[FRAMES] = 1 - - self.rawmode = "L" - - while True: - s = self.fp.read(1) - - # Some versions of IFUNC uses \n\r instead of \r\n... 
- if s == b"\r": - continue - - if not s or s == b"\0" or s == b"\x1A": - break - - # FIXME: this may read whole file if not a text file - s = s + self.fp.readline() - - if len(s) > 100: - msg = "not an IM file" - raise SyntaxError(msg) - - if s[-2:] == b"\r\n": - s = s[:-2] - elif s[-1:] == b"\n": - s = s[:-1] - - try: - m = split.match(s) - except re.error as e: - msg = "not an IM file" - raise SyntaxError(msg) from e - - if m: - k, v = m.group(1, 2) - - # Don't know if this is the correct encoding, - # but a decent guess (I guess) - k = k.decode("latin-1", "replace") - v = v.decode("latin-1", "replace") - - # Convert value as appropriate - if k in [FRAMES, SCALE, SIZE]: - v = v.replace("*", ",") - v = tuple(map(number, v.split(","))) - if len(v) == 1: - v = v[0] - elif k == MODE and v in OPEN: - v, self.rawmode = OPEN[v] - - # Add to dictionary. Note that COMMENT tags are - # combined into a list of strings. - if k == COMMENT: - if k in self.info: - self.info[k].append(v) - else: - self.info[k] = [v] - else: - self.info[k] = v - - if k in TAGS: - n += 1 - - else: - msg = "Syntax error in IM header: " + s.decode("ascii", "replace") - raise SyntaxError(msg) - - if not n: - msg = "Not an IM file" - raise SyntaxError(msg) - - # Basic attributes - self._size = self.info[SIZE] - self.mode = self.info[MODE] - - # Skip forward to start of image data - while s and s[:1] != b"\x1A": - s = self.fp.read(1) - if not s: - msg = "File truncated" - raise SyntaxError(msg) - - if LUT in self.info: - # convert lookup table to palette or lut attribute - palette = self.fp.read(768) - greyscale = 1 # greyscale palette - linear = 1 # linear greyscale palette - for i in range(256): - if palette[i] == palette[i + 256] == palette[i + 512]: - if palette[i] != i: - linear = 0 - else: - greyscale = 0 - if self.mode in ["L", "LA", "P", "PA"]: - if greyscale: - if not linear: - self.lut = list(palette[:256]) - else: - if self.mode in ["L", "P"]: - self.mode = self.rawmode = "P" - elif self.mode in ["LA", "PA"]: - self.mode = "PA" - self.rawmode = "PA;L" - self.palette = ImagePalette.raw("RGB;L", palette) - elif self.mode == "RGB": - if not greyscale or not linear: - self.lut = list(palette) - - self.frame = 0 - - self.__offset = offs = self.fp.tell() - - self._fp = self.fp # FIXME: hack - - if self.rawmode[:2] == "F;": - # ifunc95 formats - try: - # use bit decoder (if necessary) - bits = int(self.rawmode[2:]) - if bits not in [8, 16, 32]: - self.tile = [("bit", (0, 0) + self.size, offs, (bits, 8, 3, 0, -1))] - return - except ValueError: - pass - - if self.rawmode in ["RGB;T", "RYB;T"]: - # Old LabEye/3PC files. 
Would be very surprised if anyone - # ever stumbled upon such a file ;-) - size = self.size[0] * self.size[1] - self.tile = [ - ("raw", (0, 0) + self.size, offs, ("G", 0, -1)), - ("raw", (0, 0) + self.size, offs + size, ("R", 0, -1)), - ("raw", (0, 0) + self.size, offs + 2 * size, ("B", 0, -1)), - ] - else: - # LabEye/IFUNC files - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - @property - def n_frames(self): - return self.info[FRAMES] - - @property - def is_animated(self): - return self.info[FRAMES] > 1 - - def seek(self, frame): - if not self._seek_check(frame): - return - - self.frame = frame - - if self.mode == "1": - bits = 1 - else: - bits = 8 * len(self.mode) - - size = ((self.size[0] * bits + 7) // 8) * self.size[1] - offs = self.__offset + frame * size - - self.fp = self._fp - - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - def tell(self): - return self.frame - - -# -# -------------------------------------------------------------------- -# Save IM files - - -SAVE = { - # mode: (im type, raw mode) - "1": ("0 1", "1"), - "L": ("Greyscale", "L"), - "LA": ("LA", "LA;L"), - "P": ("Greyscale", "P"), - "PA": ("LA", "PA;L"), - "I": ("L 32S", "I;32S"), - "I;16": ("L 16", "I;16"), - "I;16L": ("L 16L", "I;16L"), - "I;16B": ("L 16B", "I;16B"), - "F": ("L 32F", "F;32F"), - "RGB": ("RGB", "RGB;L"), - "RGBA": ("RGBA", "RGBA;L"), - "RGBX": ("RGBX", "RGBX;L"), - "CMYK": ("CMYK", "CMYK;L"), - "YCbCr": ("YCC", "YCbCr;L"), -} - - -def _save(im, fp, filename): - try: - image_type, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as IM" - raise ValueError(msg) from e - - frames = im.encoderinfo.get("frames", 1) - - fp.write(f"Image type: {image_type} image\r\n".encode("ascii")) - if filename: - # Each line must be 100 characters or less, - # or: SyntaxError("not an IM file") - # 8 characters are used for "Name: " and "\r\n" - # Keep just the filename, ditch the potentially overlong path - name, ext = os.path.splitext(os.path.basename(filename)) - name = "".join([name[: 92 - len(ext)], ext]) - - fp.write(f"Name: {name}\r\n".encode("ascii")) - fp.write(("Image size (x*y): %d*%d\r\n" % im.size).encode("ascii")) - fp.write(f"File size (no of images): {frames}\r\n".encode("ascii")) - if im.mode in ["P", "PA"]: - fp.write(b"Lut: 1\r\n") - fp.write(b"\000" * (511 - fp.tell()) + b"\032") - if im.mode in ["P", "PA"]: - im_palette = im.im.getpalette("RGB", "RGB;L") - colors = len(im_palette) // 3 - palette = b"" - for i in range(3): - palette += im_palette[colors * i : colors * (i + 1)] - palette += b"\x00" * (256 - colors) - fp.write(palette) # 768 bytes - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, -1))]) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(ImImageFile.format, ImImageFile) -Image.register_save(ImImageFile.format, _save) - -Image.register_extension(ImImageFile.format, ".im") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/deeplab/semantic_seg.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/deeplab/semantic_seg.py deleted file mode 100644 index d4625c52d96b2a700d828112c2a2ea80f5028330..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/deeplab/semantic_seg.py +++ /dev/null @@ -1,348 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
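# Added note: this module defines the DeepLabV3 / DeepLabV3+ semantic
# segmentation heads and registers them in detectron2's SEM_SEG_HEADS_REGISTRY
# (see the @SEM_SEG_HEADS_REGISTRY.register() decorators below), so they can be
# selected from a config via MODEL.SEM_SEG_HEAD.NAME.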
-from typing import Callable, Dict, List, Optional, Tuple, Union -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import ASPP, Conv2d, DepthwiseSeparableConv2d, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from .loss import DeepLabCE - - -@SEM_SEG_HEADS_REGISTRY.register() -class DeepLabV3PlusHead(nn.Module): - """ - A semantic segmentation head described in :paper:`DeepLabV3+`. - """ - - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - project_channels: List[int], - aspp_dilations: List[int], - aspp_dropout: float, - decoder_channels: List[int], - common_stride: int, - norm: Union[str, Callable], - train_size: Optional[Tuple], - loss_weight: float = 1.0, - loss_type: str = "cross_entropy", - ignore_value: int = -1, - num_classes: Optional[int] = None, - use_depthwise_separable_conv: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape: shape of the input features. They will be ordered by stride - and the last one (with largest stride) is used as the input to the - decoder (i.e. the ASPP module); the rest are low-level feature for - the intermediate levels of decoder. - project_channels (list[int]): a list of low-level feature channels. - The length should be len(in_features) - 1. - aspp_dilations (list(int)): a list of 3 dilations in ASPP. - aspp_dropout (float): apply dropout on the output of ASPP. - decoder_channels (list[int]): a list of output channels of each - decoder stage. It should have the same length as "in_features" - (each element in "in_features" corresponds to one decoder stage). - common_stride (int): output stride of decoder. - norm (str or callable): normalization for all conv layers. - train_size (tuple): (height, width) of training images. - loss_weight (float): loss weight. - loss_type (str): type of loss function, 2 opptions: - (1) "cross_entropy" is the standard cross entropy loss. - (2) "hard_pixel_mining" is the loss in DeepLab that samples - top k% hardest pixels. - ignore_value (int): category to be ignored during training. - num_classes (int): number of classes, if set to None, the decoder - will not construct a predictor. - use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d - in ASPP and decoder. 
- """ - super().__init__() - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - - # fmt: off - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - in_channels = [x[1].channels for x in input_shape] - in_strides = [x[1].stride for x in input_shape] - aspp_channels = decoder_channels[-1] - self.ignore_value = ignore_value - self.common_stride = common_stride # output stride - self.loss_weight = loss_weight - self.loss_type = loss_type - self.decoder_only = num_classes is None - self.use_depthwise_separable_conv = use_depthwise_separable_conv - # fmt: on - - assert ( - len(project_channels) == len(self.in_features) - 1 - ), "Expected {} project_channels, got {}".format( - len(self.in_features) - 1, len(project_channels) - ) - assert len(decoder_channels) == len( - self.in_features - ), "Expected {} decoder_channels, got {}".format( - len(self.in_features), len(decoder_channels) - ) - self.decoder = nn.ModuleDict() - - use_bias = norm == "" - for idx, in_channel in enumerate(in_channels): - decoder_stage = nn.ModuleDict() - - if idx == len(self.in_features) - 1: - # ASPP module - if train_size is not None: - train_h, train_w = train_size - encoder_stride = in_strides[-1] - if train_h % encoder_stride or train_w % encoder_stride: - raise ValueError("Crop size need to be divisible by encoder stride.") - pool_h = train_h // encoder_stride - pool_w = train_w // encoder_stride - pool_kernel_size = (pool_h, pool_w) - else: - pool_kernel_size = None - project_conv = ASPP( - in_channel, - aspp_channels, - aspp_dilations, - norm=norm, - activation=F.relu, - pool_kernel_size=pool_kernel_size, - dropout=aspp_dropout, - use_depthwise_separable_conv=use_depthwise_separable_conv, - ) - fuse_conv = None - else: - project_conv = Conv2d( - in_channel, - project_channels[idx], - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, project_channels[idx]), - activation=F.relu, - ) - weight_init.c2_xavier_fill(project_conv) - if use_depthwise_separable_conv: - # We use a single 5x5 DepthwiseSeparableConv2d to replace - # 2 3x3 Conv2d since they have the same receptive field, - # proposed in :paper:`Panoptic-DeepLab`. 
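                    # Added note: two stacked 3x3 convs cover a 5x5 window, so
                    # kernel_size=5 with padding=2 preserves both the receptive
                    # field and the spatial size, while the depthwise + pointwise
                    # factorization uses fewer parameters and FLOPs.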
- fuse_conv = DepthwiseSeparableConv2d( - project_channels[idx] + decoder_channels[idx + 1], - decoder_channels[idx], - kernel_size=5, - padding=2, - norm1=norm, - activation1=F.relu, - norm2=norm, - activation2=F.relu, - ) - else: - fuse_conv = nn.Sequential( - Conv2d( - project_channels[idx] + decoder_channels[idx + 1], - decoder_channels[idx], - kernel_size=3, - padding=1, - bias=use_bias, - norm=get_norm(norm, decoder_channels[idx]), - activation=F.relu, - ), - Conv2d( - decoder_channels[idx], - decoder_channels[idx], - kernel_size=3, - padding=1, - bias=use_bias, - norm=get_norm(norm, decoder_channels[idx]), - activation=F.relu, - ), - ) - weight_init.c2_xavier_fill(fuse_conv[0]) - weight_init.c2_xavier_fill(fuse_conv[1]) - - decoder_stage["project_conv"] = project_conv - decoder_stage["fuse_conv"] = fuse_conv - - self.decoder[self.in_features[idx]] = decoder_stage - - if not self.decoder_only: - self.predictor = Conv2d( - decoder_channels[0], num_classes, kernel_size=1, stride=1, padding=0 - ) - nn.init.normal_(self.predictor.weight, 0, 0.001) - nn.init.constant_(self.predictor.bias, 0) - - if self.loss_type == "cross_entropy": - self.loss = nn.CrossEntropyLoss(reduction="mean", ignore_index=self.ignore_value) - elif self.loss_type == "hard_pixel_mining": - self.loss = DeepLabCE(ignore_label=self.ignore_value, top_k_percent_pixels=0.2) - else: - raise ValueError("Unexpected loss type: %s" % self.loss_type) - - @classmethod - def from_config(cls, cfg, input_shape): - if cfg.INPUT.CROP.ENABLED: - assert cfg.INPUT.CROP.TYPE == "absolute" - train_size = cfg.INPUT.CROP.SIZE - else: - train_size = None - decoder_channels = [cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM] * ( - len(cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES) - 1 - ) + [cfg.MODEL.SEM_SEG_HEAD.ASPP_CHANNELS] - ret = dict( - input_shape={ - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - }, - project_channels=cfg.MODEL.SEM_SEG_HEAD.PROJECT_CHANNELS, - aspp_dilations=cfg.MODEL.SEM_SEG_HEAD.ASPP_DILATIONS, - aspp_dropout=cfg.MODEL.SEM_SEG_HEAD.ASPP_DROPOUT, - decoder_channels=decoder_channels, - common_stride=cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE, - norm=cfg.MODEL.SEM_SEG_HEAD.NORM, - train_size=train_size, - loss_weight=cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT, - loss_type=cfg.MODEL.SEM_SEG_HEAD.LOSS_TYPE, - ignore_value=cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - num_classes=cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES, - use_depthwise_separable_conv=cfg.MODEL.SEM_SEG_HEAD.USE_DEPTHWISE_SEPARABLE_CONV, - ) - return ret - - def forward(self, features, targets=None): - """ - Returns: - In training, returns (None, dict of losses) - In inference, returns (CxHxW logits, {}) - """ - y = self.layers(features) - if self.decoder_only: - # Output from self.layers() only contains decoder feature. 
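            # Added note: num_classes was None, so no predictor was built; the
            # caller is expected to attach its own prediction layers on top of
            # these decoder features.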
- return y - if self.training: - return None, self.losses(y, targets) - else: - y = F.interpolate( - y, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - return y, {} - - def layers(self, features): - # Reverse feature maps into top-down order (from low to high resolution) - for f in self.in_features[::-1]: - x = features[f] - proj_x = self.decoder[f]["project_conv"](x) - if self.decoder[f]["fuse_conv"] is None: - # This is aspp module - y = proj_x - else: - # Upsample y - y = F.interpolate(y, size=proj_x.size()[2:], mode="bilinear", align_corners=False) - y = torch.cat([proj_x, y], dim=1) - y = self.decoder[f]["fuse_conv"](y) - if not self.decoder_only: - y = self.predictor(y) - return y - - def losses(self, predictions, targets): - predictions = F.interpolate( - predictions, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - loss = self.loss(predictions, targets) - losses = {"loss_sem_seg": loss * self.loss_weight} - return losses - - -@SEM_SEG_HEADS_REGISTRY.register() -class DeepLabV3Head(nn.Module): - """ - A semantic segmentation head described in :paper:`DeepLabV3`. - """ - - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__() - - # fmt: off - self.in_features = cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - in_channels = [input_shape[f].channels for f in self.in_features] - aspp_channels = cfg.MODEL.SEM_SEG_HEAD.ASPP_CHANNELS - aspp_dilations = cfg.MODEL.SEM_SEG_HEAD.ASPP_DILATIONS - self.ignore_value = cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE - num_classes = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - conv_dims = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - self.common_stride = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE # output stride - norm = cfg.MODEL.SEM_SEG_HEAD.NORM - self.loss_weight = cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT - self.loss_type = cfg.MODEL.SEM_SEG_HEAD.LOSS_TYPE - train_crop_size = cfg.INPUT.CROP.SIZE - aspp_dropout = cfg.MODEL.SEM_SEG_HEAD.ASPP_DROPOUT - use_depthwise_separable_conv = cfg.MODEL.SEM_SEG_HEAD.USE_DEPTHWISE_SEPARABLE_CONV - # fmt: on - - assert len(self.in_features) == 1 - assert len(in_channels) == 1 - - # ASPP module - if cfg.INPUT.CROP.ENABLED: - assert cfg.INPUT.CROP.TYPE == "absolute" - train_crop_h, train_crop_w = train_crop_size - if train_crop_h % self.common_stride or train_crop_w % self.common_stride: - raise ValueError("Crop size need to be divisible by output stride.") - pool_h = train_crop_h // self.common_stride - pool_w = train_crop_w // self.common_stride - pool_kernel_size = (pool_h, pool_w) - else: - pool_kernel_size = None - self.aspp = ASPP( - in_channels[0], - aspp_channels, - aspp_dilations, - norm=norm, - activation=F.relu, - pool_kernel_size=pool_kernel_size, - dropout=aspp_dropout, - use_depthwise_separable_conv=use_depthwise_separable_conv, - ) - - self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0) - nn.init.normal_(self.predictor.weight, 0, 0.001) - nn.init.constant_(self.predictor.bias, 0) - - if self.loss_type == "cross_entropy": - self.loss = nn.CrossEntropyLoss(reduction="mean", ignore_index=self.ignore_value) - elif self.loss_type == "hard_pixel_mining": - self.loss = DeepLabCE(ignore_label=self.ignore_value, top_k_percent_pixels=0.2) - else: - raise ValueError("Unexpected loss type: %s" % self.loss_type) - - def forward(self, features, targets=None): - """ - Returns: - In training, returns (None, dict of losses) - In inference, returns (CxHxW logits, {}) - """ - x = features[self.in_features[0]] - x = self.aspp(x) - x = self.predictor(x) - if 
self.training: - return None, self.losses(x, targets) - else: - x = F.interpolate( - x, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - return x, {} - - def losses(self, predictions, targets): - predictions = F.interpolate( - predictions, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - loss = self.loss(predictions, targets) - losses = {"loss_sem_seg": loss * self.loss_weight} - return losses diff --git a/spaces/chendelong/citation-tool/app.py b/spaces/chendelong/citation-tool/app.py deleted file mode 100644 index 9444f7e10b3a9a6d76a1d4200efb3c54dbe010d8..0000000000000000000000000000000000000000 --- a/spaces/chendelong/citation-tool/app.py +++ /dev/null @@ -1,194 +0,0 @@ - -import requests -import pprint -import json -import os -import gradio as gr -import requests -from bs4 import BeautifulSoup - -def get_dblp_bibref(title): - - print(f'DBLP query: {title}') - - try: - # Replace spaces in the title with '+' - title = title.replace(' ', '+') - - # Send a GET request to the DBLP search page with the paper title - response = requests.get(f'https://dblp.org/search/publ/api?q={title}&format=xml') - soup = BeautifulSoup(response.content, 'lxml') - - # Get the URL of the first paper in the search results - url = soup.select_one('url').text + '.bib' - response = requests.get(url) - - paper_link = soup.select_one('url').text + '.html' - - return response.text, paper_link - except Exception as e: - return f'Error during get bibref from DBLP: {e}', None - -# set pprint width -pp = pprint.PrettyPrinter(width=128) -API_KEY = 'eRLnjZeWSs4gHjSemy5af1X7IbugACFg1tSX6F3R' -FIELDS = "paperId,title,url,year,authors,venue,abstract,citationCount,openAccessPdf,fieldsOfStudy,publicationDate,citations,references" - - -# def get_name_mapping(venues_data='/nfs/delong/data/s2orc/s2ag_full/publication-venues'): -# name_mapping = {} # from full name to abbreviated name -# for file in os.listdir(venues_data): -# with open(os.path.join(venues_data, file), 'r') as f: -# venues = [json.loads(line) for line in f.readlines()] -# print(f"Total number of venues in {file}: {len(venues)}") -# for venue in venues: -# if len(venue['alternate_names'])>0: -# # name_mapping[venue['name']] = venue['alternate_names'][0] -# # instead of using the first alternate name, use the shortest one -# name_mapping[venue['name']] = min(venue['alternate_names'], key=len) - -# name_mapping['Neural Information Processing Systems'] = 'NeurIPS' - -# print(f'loaded {len(name_mapping)} venues from {venues_data}') -# return name_mapping - - -# name_mapping = get_name_mapping() -# json.dump(name_mapping, open('name_mapping.json', 'w'), indent=4) - -name_mapping = json.load(open('name_mapping.json', 'r')) -print(f'loaded {len(name_mapping)} venues from name_mapping.json') - -def search_paper_title_semanticscholar(title): - url = "https://api.semanticscholar.org/graph/v1/paper/search" - headers = {"Accept": "application/json", "x-api-key": API_KEY} - params = {"query": title, "limit": 1} - - response = requests.get(url, headers=headers, params=params) - - if response.status_code == 200: - data = response.json() - if data['total']!=0: - paper_id = data['data'][0]['paperId'] - url = f"https://api.semanticscholar.org/graph/v1/paper/{paper_id}" - - params = {"fields": FIELDS} - response = requests.get(url, headers=headers, params=params) - if response.status_code == 200: - data = response.json() - return data - else: - print(f"Error: {response.status_code}") - return None - else: - print("No paper found 
with the given title.") - return None - else: - print(f"Error: {response.status_code}") - return None - -def get_abbreviated_venue(name): - if name in name_mapping: - return name_mapping[name] - else: - return name - -def get_md_citation(paper_info): - - # citation_str = paper_info['authors'][0]['name'] + " *et al.* " - # citation_str = ', '.join([author['name'] for author in paper_info['authors']]) + '. ' - citation_str = '' - for author in paper_info['authors'][:5]: - citation_str += f"{author['name']}, " - - if len(paper_info['authors'])>5: - citation_str += '*et al.* ' - else: - citation_str = citation_str[:-2] + '. ' - - - citation_str += f"[{paper_info['title']}]({paper_info['url']}). " - citation_str += f"*{get_abbreviated_venue(paper_info['venue'])}*" - # citation_str += f" ({paper_info['year']})." - citation_str += f" ({paper_info['publicationDate'][:-3].replace('-', '.')})." - return citation_str - - -def summarize_paper_info(paper_info): - info_str = "" - # info_str += f"**Venue**: {paper_info['venue']}\n\n" - - author_str = '' - for author in paper_info['authors']: - author_str += f"[{author['name']}](https://www.semanticscholar.org/author/{author['authorId']}), " - author_str = author_str[:-2] - - info_str += f"**Authors**:\n\n{author_str}\n\n" - - info_str += f"\n\n> **Abstract**: {paper_info['abstract']}\n\n" - info_str += f"**Citation Count**: {paper_info['citationCount']}\n\n" - return info_str - -def get_output(title): - print(f"Title query: {title}") - - paper_info = search_paper_title_semanticscholar(title) - if paper_info is not None: - citation_str = get_md_citation(paper_info) - else: - citation_str = "No paper found with that title." - - bibtex, dblp_link = get_dblp_bibref(paper_info['title']) - - citation_str = f""" -```text -{paper_info['title']} -``` - -{citation_str} - ---- - -**Markdown source code** - -```markdown -{citation_str} -``` - - -**BibTex** - -```bibtex -{bibtex} -``` - -{summarize_paper_info(paper_info)} - ---- - -🔗 [[Open in Semantic Scholar]](https://www.semanticscholar.org/paper/{paper_info['paperId']}) | [[DBLP Page]]({dblp_link}) -""" - - print(citation_str) - - return citation_str - -def main(): - iface = gr.Interface( - fn=get_output, - inputs=gr.components.Textbox( - lines=1, - label="Please input the title of the paper to get its citation.", - placeholder="Your title here", - autofocus=True, - ), - outputs="markdown", - allow_flagging='never', - title="Citation Tool", - description="### Search paper title from [Semantic Scholar](https://www.semanticscholar.org/) and [DBLP](http://dblp.org/), and get structured citation.", - ) - iface.launch() - -if __name__=="__main__": - main() - diff --git a/spaces/chendl/compositional_test/transformers/docs/source/en/contributing.md b/spaces/chendl/compositional_test/transformers/docs/source/en/contributing.md deleted file mode 100644 index 9635ae09d739762d61f073dd9325cb6772c540aa..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/docs/source/en/contributing.md +++ /dev/null @@ -1,395 +0,0 @@ - - -# Contribute to 🤗 Transformers - -Everyone is welcome to contribute, and we value everybody's contribution. Code -contributions are not the only way to help the community. Answering questions, helping -others, and improving the documentation are also immensely valuable. - -It also helps us if you spread the word! 
Reference the library in blog posts -about the awesome projects it made possible, shout out on Twitter every time it has -helped you, or simply ⭐️ the repository to say thank you. - -However you choose to contribute, please be mindful and respect our -[code of conduct](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md). - -**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).** - -## Ways to contribute - -There are several ways you can contribute to 🤗 Transformers: - -* Fix outstanding issues with the existing code. -* Submit issues related to bugs or desired new features. -* Implement new models. -* Contribute to the examples or to the documentation. - -If you don't know where to start, there is a special [Good First -Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of -open issues that are beginner-friendly and help you start contributing to open-source. Just comment in the issue that you'd like to work -on it. - -For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! 🚀 - -> All contributions are equally valuable to the community. 🥰 - -## Fixing outstanding issues - -If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#create-a-pull-request) and open a Pull Request! - -## Submitting a bug-related issue or feature request - -Do your best to follow these guidelines when submitting a bug-related issue or a feature -request. It will make it easier for us to come back to you quickly and with good -feedback. - -### Did you find a bug? - -The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter. - -Before you report an issue, we would really appreciate it if you could **make sure the bug was not -already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask on the [forum](https://discuss.huggingface.co/) first. This helps us respond quicker to fixing issues related to the library versus general questions. - -Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it: - -* Your **OS type and version** and **Python**, **PyTorch** and - **TensorFlow** versions when applicable. -* A short, self-contained, code snippet that allows us to reproduce the bug in - less than 30s. -* The *full* traceback if an exception is raised. -* Attach any other additional information, like screenshots, you think may help. - -To get the OS and software versions automatically, run the following command: - -```bash -transformers-cli env -``` - -You can also run the same command from the root of the repository: - -```bash -python src/transformers/commands/transformers_cli.py env -``` - -### Do you want a new feature? - -If there is a new feature you'd like to see in 🤗 Transformers, please open an issue and describe: - -1. What is the *motivation* behind this feature? 
Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community? - - Whatever it is, we'd love to hear about it! - -2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you. -3. Provide a *code snippet* that demonstrates the features usage. -4. If the feature is related to a paper, please include a link. - -If your issue is well written we're already 80% of the way there by the time you create it. - -We have added [templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with your issue. - -## Do you want to implement a new model? - -New models are constantly released and if you want to implement a new model, please provide the following information - -* A short description of the model and link to the paper. -* Link to the implementation if it is open-sourced. -* Link to the model weights if they are available. - -If you are willing to contribute the model yourself, let us know so we can help you add it to 🤗 Transformers! - -We have added a [detailed guide and templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with adding a new model, and we also have a more technical guide for [how to add a model to 🤗 Transformers](https://huggingface.co/docs/transformers/add_new_model). - -## Do you want to add documentation? - -We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested! - -For more details about how to generate, build, and write the documentation, take a look at the documentation [README](https://github.com/huggingface/transformers/tree/main/docs). - -## Create a Pull Request - -Before writing any code, we strongly advise you to search through the existing PRs or -issues to make sure nobody is already working on the same thing. If you are -unsure, it is always a good idea to open an issue to get some feedback. - -You will need basic `git` proficiency to contribute to -🤗 Transformers. While `git` is not the easiest tool to use, it has the greatest -manual. Type `git --help` in a shell and enjoy! If you prefer books, [Pro -Git](https://git-scm.com/book/en/v2) is a very good reference. - -You'll need **[Python 3.7]((https://github.com/huggingface/transformers/blob/main/setup.py#L426))** or above to contribute to 🤗 Transformers. Follow the steps below to start contributing: - -1. Fork the [repository](https://github.com/huggingface/transformers) by - clicking on the **[Fork](https://github.com/huggingface/transformers/fork)** button on the repository's page. This creates a copy of the code - under your GitHub user account. - -2. Clone your fork to your local disk, and add the base repository as a remote: - - ```bash - git clone git@github.com:/transformers.git - cd transformers - git remote add upstream https://github.com/huggingface/transformers.git - ``` - -3. Create a new branch to hold your development changes: - - ```bash - git checkout -b a-descriptive-name-for-my-changes - ``` - - 🚨 **Do not** work on the `main` branch! - -4. 
Set up a development environment by running the following command in a virtual environment: - - ```bash - pip install -e ".[dev]" - ``` - - If 🤗 Transformers was already installed in the virtual environment, remove - it with `pip uninstall transformers` before reinstalling it in editable - mode with the `-e` flag. - - Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a - failure with this command. If that's the case make sure to install the Deep Learning framework you are working with - (PyTorch, TensorFlow and/or Flax) then do: - - ```bash - pip install -e ".[quality]" - ``` - - which should be enough for most use cases. - -5. Develop the features on your branch. - - As you work on your code, you should make sure the test suite - passes. Run the tests impacted by your changes like this: - - ```bash - pytest tests/.py - ``` - - For more information about tests, check out the - [Testing](https://huggingface.co/docs/transformers/testing) guide. - - 🤗 Transformers relies on `black` and `ruff` to format its source code - consistently. After you make changes, apply automatic style corrections and code verifications - that can't be automated in one go with: - - ```bash - make fixup - ``` - - This target is also optimized to only work with files modified by the PR you're working on. - - If you prefer to run the checks one after the other, the following command applies the - style corrections: - - ```bash - make style - ``` - - 🤗 Transformers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality - controls are run by the CI, but you can run the same checks with: - - ```bash - make quality - ``` - - Finally, we have a lot of scripts to make sure we didn't forget to update - some files when adding a new model. You can run these scripts with: - - ```bash - make repo-consistency - ``` - - To learn more about those checks and how to fix any issues with them, check out the - [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide. - - If you're modifying documents under `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check - make sure you install the documentation builder: - - ```bash - pip install ".[docs]" - ``` - - Run the following command from the root of the repository: - - ```bash - doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build - ``` - - This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated - Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request. - - Once you're happy with your changes, add changed files with `git add` and - record your changes locally with `git commit`: - - ```bash - git add modified_file.py - git commit - ``` - - Please remember to write [good commit - messages](https://chris.beams.io/posts/git-commit/) to clearly communicate the changes you made! - - To keep your copy of the code up to date with the original - repository, rebase your branch on `upstream/branch` *before* you open a pull request or if requested by a maintainer: - - ```bash - git fetch upstream - git rebase upstream/main - ``` - - Push your changes to your branch: - - ```bash - git push -u origin a-descriptive-name-for-my-changes - ``` - - If you've already opened a pull request, you'll need to force push with the `--force` flag. 
Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally. - -6. Now you can go to your fork of the repository on GitHub and click on **Pull request** to open a pull request. Make sure you tick off all the boxes in our [checklist](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review. - -7. It's ok if maintainers request changes, it happens to our core contributors - too! So everyone can see the changes in the pull request, work in your local - branch and push the changes to your fork. They will automatically appear in - the pull request. - -### Pull request checklist - -☐ The pull request title should summarize your contribution.
-☐ If your pull request addresses an issue, please mention the issue number in the pull -request description to make sure they are linked (and people viewing the issue know you -are working on it).
-☐ To indicate a work in progress please prefix the title with `[WIP]`. These are -useful to avoid duplicated work, and to differentiate it from PRs ready to be merged. -☐ Make sure existing tests pass.
-☐ If adding a new feature, also add tests for it.
- - If you are adding a new model, make sure you use - `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests. - - If you are adding new `@slow` tests, make sure they pass using - `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`. - - If you are adding a new tokenizer, write tests and make sure - `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes. - CircleCI does not run the slow tests, but GitHub Actions does every night!
- -☐ All public methods must have informative docstrings (see -[`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py) -for an example).
-☐ Due to the rapidly growing repository, don't add any images, videos and other -non-text files that'll significantly weigh down the repository. Instead, use a Hub -repository such as [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) -to host these files and reference them by URL. We recommend placing documentation -related images in the following repository: -[huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). -You can open a PR on this dataset repostitory and ask a Hugging Face member to merge it. - -For more information about the checks run on a pull request, take a look at our [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide. - -### Tests - -An extensive test suite is included to test the library behavior and several examples. Library tests can be found in -the [tests](https://github.com/huggingface/transformers/tree/main/tests) folder and examples tests in the -[examples](https://github.com/huggingface/transformers/tree/main/examples) folder. - -We like `pytest` and `pytest-xdist` because it's faster. From the root of the -repository, specify a *path to a subfolder or a test file* to run the test. - -```bash -python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model -``` - -Similarly, for the `examples` directory, specify a *path to a subfolder or test file* to run the test. For example, the following command tests the text classification subfolder in the PyTorch `examples` directory: - -```bash -pip install -r examples/xxx/requirements.txt # only needed the first time -python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification -``` - -In fact, this is actually how our `make test` and `make test-examples` commands are implemented (not including the `pip install`)! - -You can also specify a smaller set of tests in order to test only the feature -you're working on. - -By default, slow tests are skipped but you can set the `RUN_SLOW` environment variable to -`yes` to run them. This will download many gigabytes of models so make sure you -have enough disk space, a good internet connection or a lot of patience! - - - -Remember to specify a *path to a subfolder or a test file* to run the test. Otherwise, you'll run all the tests in the `tests` or `examples` folder, which will take a very long time! - - - -```bash -RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model -RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification -``` - -Like the slow tests, there are other environment variables available which not enabled by default during testing: -- `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers. -- `RUN_PT_FLAX_CROSS_TESTS`: Enables tests for PyTorch + Flax integration. -- `RUN_PT_TF_CROSS_TESTS`: Enables tests for TensorFlow + PyTorch integration. - -More environment variables and additional information can be found in the [testing_utils.py](src/transformers/testing_utils.py). - -🤗 Transformers uses `pytest` as a test runner only. It doesn't use any -`pytest`-specific features in the test suite itself. - -This means `unittest` is fully supported. Here's how to run tests with -`unittest`: - -```bash -python -m unittest discover -s tests -t . -v -python -m unittest discover -s examples -t examples -v -``` - -### Style guide - -For documentation strings, 🤗 Transformers follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html). 
-Check our [documentation writing guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification) -for more information. - -### Develop on Windows - -On Windows (unless you're working in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) or WSL), you need to configure git to transform Windows `CRLF` line endings to Linux `LF` line endings: - -```bash -git config core.autocrlf input -``` - -One way to run the `make` command on Windows is with MSYS2: - -1. [Download MSYS2](https://www.msys2.org/), and we assume it's installed in `C:\msys64`. -2. Open the command line `C:\msys64\msys2.exe` (it should be available from the **Start** menu). -3. Run in the shell: `pacman -Syu` and install `make` with `pacman -S make`. -4. Add `C:\msys64\usr\bin` to your PATH environment variable. - -You can now use `make` from any terminal (Powershell, cmd.exe, etc.)! 🎉 - -### Sync a forked repository with upstream main (the Hugging Face repository) - -When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository which adds reference notes to each upstream PR, and sends unnecessary notifications to the developers involved in these PRs. - -1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. -2. If a PR is absolutely necessary, use the following steps after checking out your branch: - -```bash -git checkout -b your-branch-for-syncing -git pull --squash --no-commit upstream main -git commit -m '' -git push --set-upstream origin your-branch-for-syncing -``` diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py deleted file mode 100644 index 51d7c8651a24da09bd0a27f807686d7016738fda..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py +++ /dev/null @@ -1,595 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Fine-tuning the library models for sequence to sequence speech recognition. -""" -# You can also adapt this script on your own sequence to sequence speech -# recognition task. Pointers for this are left as comments. 
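# Illustrative invocation (added; not part of the original script -- the model
# and dataset names are placeholders, substitute your own). It only uses
# arguments declared in the dataclasses below plus standard
# Seq2SeqTrainingArguments:
#
#   python run_speech_recognition_seq2seq.py \
#       --model_name_or_path="openai/whisper-small" \
#       --dataset_name="mozilla-foundation/common_voice_11_0" \
#       --dataset_config_name="hi" \
#       --text_column_name="sentence" \
#       --language="hi" \
#       --task="transcribe" \
#       --output_dir="./whisper-small-hi" \
#       --do_train --do_eval \
#       --per_device_train_batch_size=16 \
#       --predict_with_generate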
- -import logging -import os -import sys -from dataclasses import dataclass, field -from typing import Any, Dict, List, Optional, Union - -import datasets -import evaluate -import torch -from datasets import DatasetDict, load_dataset - -import transformers -from transformers import ( - AutoConfig, - AutoFeatureExtractor, - AutoModelForSpeechSeq2Seq, - AutoProcessor, - AutoTokenizer, - HfArgumentParser, - Seq2SeqTrainer, - Seq2SeqTrainingArguments, - set_seed, -) -from transformers.trainer_utils import get_last_checkpoint, is_main_process -from transformers.utils import check_min_version, send_example_telemetry -from transformers.utils.versions import require_version - - -# Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.28.0") - -require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt") - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. - """ - - model_name_or_path: str = field( - metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} - ) - config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} - ) - tokenizer_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} - ) - feature_extractor_name: Optional[str] = field( - default=None, metadata={"help": "feature extractor name or path if not the same as model_name"} - ) - cache_dir: Optional[str] = field( - default=None, - metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"}, - ) - use_fast_tokenizer: bool = field( - default=True, - metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, - ) - model_revision: str = field( - default="main", - metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, - ) - use_auth_token: bool = field( - default=False, - metadata={ - "help": ( - "Will use the token generated when running `huggingface-cli login` (necessary to use this script " - "with private models)." - ) - }, - ) - freeze_feature_encoder: bool = field( - default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."} - ) - freeze_encoder: bool = field( - default=False, metadata={"help": "Whether to freeze the entire encoder of the seq2seq model."} - ) - forced_decoder_ids: List[List[int]] = field( - default=None, - metadata={ - "help": ( - "A list of pairs of integers which indicates a mapping from generation indices to token indices " - "that will be forced before sampling. For example, [[0, 123]] means the first generated token " - "will always be a token of index 123." - ) - }, - ) - suppress_tokens: List[int] = field( - default=None, metadata={"help": "A list of tokens that will be suppressed at generation."} - ) - apply_spec_augment: bool = field( - default=False, - metadata={ - "help": "Whether to apply *SpecAugment* data augmentation to the input features. This is currently only relevant for Wav2Vec2, HuBERT, WavLM and Whisper models." - }, - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. 
- """ - - dataset_name: str = field( - default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} - ) - dataset_config_name: Optional[str] = field( - default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={"help": "The number of processes to use for the preprocessing."}, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ) - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of evaluation examples to this " - "value if set." - ) - }, - ) - audio_column_name: str = field( - default="audio", - metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, - ) - text_column_name: str = field( - default="text", - metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"}, - ) - max_duration_in_seconds: float = field( - default=20.0, - metadata={ - "help": ( - "Truncate audio files that are longer than `max_duration_in_seconds` seconds to" - " 'max_duration_in_seconds`" - ) - }, - ) - min_duration_in_seconds: float = field( - default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"} - ) - preprocessing_only: bool = field( - default=False, - metadata={ - "help": ( - "Whether to only do data preprocessing and skip training. This is especially useful when data" - " preprocessing errors out in distributed training due to timeout. In this case, one should run the" - " preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets" - " can consequently be loaded in distributed training" - ) - }, - ) - train_split_name: str = field( - default="train", - metadata={ - "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" - }, - ) - eval_split_name: str = field( - default="test", - metadata={ - "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" - }, - ) - do_lower_case: bool = field( - default=True, - metadata={"help": "Whether the target text should be lower cased."}, - ) - language: str = field( - default=None, - metadata={ - "help": ( - "Language for multilingual fine-tuning. This argument should be set for multilingual fine-tuning " - "only. For English speech recognition, it should be set to `None`." - ) - }, - ) - task: str = field( - default="transcribe", - metadata={"help": "Task, either `transcribe` for speech recognition or `translate` for speech translation."}, - ) - - -@dataclass -class DataCollatorSpeechSeq2SeqWithPadding: - """ - Data collator that will dynamically pad the inputs received. - Args: - processor ([`WhisperProcessor`]) - The processor used for processing the data. - decoder_start_token_id (`int`) - The begin-of-sentence of the decoder. - forward_attention_mask (`bool`) - Whether to return attention_mask. 
- """ - - processor: Any - decoder_start_token_id: int - forward_attention_mask: bool - - def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: - # split inputs and labels since they have to be of different lengths and need - # different padding methods - model_input_name = self.processor.model_input_names[0] - input_features = [{model_input_name: feature[model_input_name]} for feature in features] - label_features = [{"input_ids": feature["labels"]} for feature in features] - - batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt") - - if self.forward_attention_mask: - batch["attention_mask"] = torch.LongTensor([feature["attention_mask"] for feature in features]) - - labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt") - - # replace padding with -100 to ignore loss correctly - labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) - - # if bos token is appended in previous tokenization step, - # cut bos token here as it's append later anyways - if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item(): - labels = labels[:, 1:] - - batch["labels"] = labels - - return batch - - -def main(): - # 1. Parse input arguments - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments)) - - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The - # information sent is the one passed as arguments along with your Python/PyTorch versions. - send_example_telemetry("run_speech_recognition_seq2seq", model_args, data_args) - - # 2. Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - handlers=[logging.StreamHandler(sys.stdout)], - ) - log_level = training_args.get_process_log_level() - logger.setLevel(log_level) - datasets.utils.logging.set_verbosity(log_level) - transformers.utils.logging.set_verbosity(log_level) - transformers.utils.logging.enable_default_handler() - transformers.utils.logging.enable_explicit_format() - - logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) - - # Log on each process the small summary: - logger.warning( - f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" - f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" - ) - logger.info(f"Training/evaluation parameters {training_args}") - - # Set the verbosity to info of the Transformers logger (on main process only): - if is_main_process(training_args.local_rank): - transformers.utils.logging.set_verbosity_info() - logger.info("Training/evaluation parameters %s", training_args) - - # 3. 
Detecting last checkpoint and eventually continue from last checkpoint - last_checkpoint = None - if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: - last_checkpoint = get_last_checkpoint(training_args.output_dir) - if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty. " - "Use --overwrite_output_dir to overcome." - ) - elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: - logger.info( - f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " - "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." - ) - - # Set seed before initializing model. - set_seed(training_args.seed) - - # 4. Load dataset - raw_datasets = DatasetDict() - - if training_args.do_train: - raw_datasets["train"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=data_args.train_split_name, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - - if training_args.do_eval: - raw_datasets["eval"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=data_args.eval_split_name, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - - if data_args.audio_column_name not in next(iter(raw_datasets.values())).column_names: - raise ValueError( - f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'. " - "Make sure to set `--audio_column_name` to the correct audio column - one of " - f"{', '.join(next(iter(raw_datasets.values())).column_names)}." - ) - - if data_args.text_column_name not in next(iter(raw_datasets.values())).column_names: - raise ValueError( - f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. " - "Make sure to set `--text_column_name` to the correct text column - one of " - f"{', '.join(next(iter(raw_datasets.values())).column_names)}." - ) - - # 5. 
Load pretrained model, tokenizer, and feature extractor - # - # Distributed training: - # The .from_pretrained methods guarantee that only one local process can concurrently - config = AutoConfig.from_pretrained( - model_args.config_name if model_args.config_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - - config.update({"forced_decoder_ids": model_args.forced_decoder_ids, "suppress_tokens": model_args.suppress_tokens}) - - # SpecAugment for whisper models - if getattr(config, "model_type", None) == "whisper": - config.update({"apply_spec_augment": model_args.apply_spec_augment}) - - feature_extractor = AutoFeatureExtractor.from_pretrained( - model_args.feature_extractor_name if model_args.feature_extractor_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - use_fast=model_args.use_fast_tokenizer, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - model = AutoModelForSpeechSeq2Seq.from_pretrained( - model_args.model_name_or_path, - config=config, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - - if model.config.decoder_start_token_id is None: - raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") - - if model_args.freeze_feature_encoder: - model.freeze_feature_encoder() - - if model_args.freeze_encoder: - model.freeze_encoder() - model.model.encoder.gradient_checkpointing = False - - if data_args.language is not None: - # We only need to set the task id when the language is specified (i.e. in a multilingual setting) - tokenizer.set_prefix_tokens(language=data_args.language, task=data_args.task) - - # 6. Resample speech dataset if necessary - dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate - if dataset_sampling_rate != feature_extractor.sampling_rate: - raw_datasets = raw_datasets.cast_column( - data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) - ) - - # 7. Preprocessing the datasets. - # We need to read the audio files as arrays and tokenize the targets. 
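    # Added note: the CLI durations are given in seconds; multiplying by the
    # feature extractor's sampling rate converts them to sample counts, which is
    # what is_audio_in_length_range() compares against len(sample["array"]) below.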
- max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate - min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate - audio_column_name = data_args.audio_column_name - num_workers = data_args.preprocessing_num_workers - text_column_name = data_args.text_column_name - model_input_name = feature_extractor.model_input_names[0] - do_lower_case = data_args.do_lower_case - # if SpecAugment is used for whisper models, return attention_mask to guide the mask along time axis - forward_attention_mask = ( - getattr(config, "model_type", None) == "whisper" - and getattr(config, "apply_spec_augment", False) - and getattr(config, "mask_time_prob", 0) > 0 - ) - - if data_args.max_train_samples is not None: - raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples)) - - if data_args.max_eval_samples is not None: - raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples)) - - def prepare_dataset(batch): - # process audio - sample = batch[audio_column_name] - inputs = feature_extractor( - sample["array"], sampling_rate=sample["sampling_rate"], return_attention_mask=forward_attention_mask - ) - # process audio length - batch[model_input_name] = inputs.get(model_input_name)[0] - batch["input_length"] = len(sample["array"]) - if forward_attention_mask: - batch["attention_mask"] = inputs.get("attention_mask")[0] - - # process targets - input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name] - batch["labels"] = tokenizer(input_str).input_ids - return batch - - with training_args.main_process_first(desc="dataset map pre-processing"): - vectorized_datasets = raw_datasets.map( - prepare_dataset, - remove_columns=next(iter(raw_datasets.values())).column_names, - num_proc=data_args.preprocessing_num_workers, - desc="preprocess train dataset", - ) - - # filter data that is shorter than min_input_length or longer than - # max_input_length - def is_audio_in_length_range(length): - return length > min_input_length and length < max_input_length - - vectorized_datasets = vectorized_datasets.filter( - is_audio_in_length_range, - num_proc=num_workers, - input_columns=["input_length"], - ) - - # for large datasets it is advised to run the preprocessing on a - # single machine first with `args.preprocessing_only` since there will mostly likely - # be a timeout when running the script in distributed mode. - # In a second step `args.preprocessing_only` can then be set to `False` to load the - # cached dataset - if data_args.preprocessing_only: - cache = {k: v.cache_files for k, v in vectorized_datasets.items()} - logger.info(f"Data preprocessing finished. Files cached at {cache}.") - return - - # 8. Load Metric - metric = evaluate.load("wer") - - def compute_metrics(pred): - pred_ids = pred.predictions - - pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id - - pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) - # we do not want to group tokens when computing the metrics - label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True) - - wer = metric.compute(predictions=pred_str, references=label_str) - - return {"wer": wer} - - # 9. 
Create a single speech processor - # make sure all processes wait until data is saved - with training_args.main_process_first(): - # only the main process saves them - if is_main_process(training_args.local_rank): - # save feature extractor, tokenizer and config - feature_extractor.save_pretrained(training_args.output_dir) - tokenizer.save_pretrained(training_args.output_dir) - config.save_pretrained(training_args.output_dir) - - processor = AutoProcessor.from_pretrained(training_args.output_dir) - - # 10. Define data collator - data_collator = DataCollatorSpeechSeq2SeqWithPadding( - processor=processor, - decoder_start_token_id=model.config.decoder_start_token_id, - forward_attention_mask=forward_attention_mask, - ) - - # 11. Initialize Trainer - trainer = Seq2SeqTrainer( - model=model, - args=training_args, - train_dataset=vectorized_datasets["train"] if training_args.do_train else None, - eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, - tokenizer=feature_extractor, - data_collator=data_collator, - compute_metrics=compute_metrics if training_args.predict_with_generate else None, - ) - - # 12. Training - if training_args.do_train: - checkpoint = None - if training_args.resume_from_checkpoint is not None: - checkpoint = training_args.resume_from_checkpoint - elif last_checkpoint is not None: - checkpoint = last_checkpoint - train_result = trainer.train(resume_from_checkpoint=checkpoint) - trainer.save_model() # Saves the feature extractor too for easy upload - - metrics = train_result.metrics - max_train_samples = ( - data_args.max_train_samples - if data_args.max_train_samples is not None - else len(vectorized_datasets["train"]) - ) - metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"])) - trainer.log_metrics("train", metrics) - trainer.save_metrics("train", metrics) - trainer.save_state() - - # 13. Evaluation - results = {} - if training_args.do_eval: - logger.info("*** Evaluate ***") - metrics = trainer.evaluate( - metric_key_prefix="eval", - max_length=training_args.generation_max_length, - num_beams=training_args.generation_num_beams, - ) - max_eval_samples = ( - data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"]) - ) - metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"])) - - trainer.log_metrics("eval", metrics) - trainer.save_metrics("eval", metrics) - - # 14. 
Write Training Stats - kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "automatic-speech-recognition"} - if data_args.dataset_name is not None: - kwargs["dataset_tags"] = data_args.dataset_name - if data_args.dataset_config_name is not None: - kwargs["dataset_args"] = data_args.dataset_config_name - kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" - else: - kwargs["dataset"] = data_args.dataset_name - - if training_args.push_to_hub: - trainer.push_to_hub(**kwargs) - else: - trainer.create_model_card(**kwargs) - - return results - - -if __name__ == "__main__": - main() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/document.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/document.py deleted file mode 100644 index 6493c458b1740989593b8f5a6ba0f9143be94b30..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/document.py +++ /dev/null @@ -1,205 +0,0 @@ -# encoding: utf-8 - -"""|Document| and closely related objects""" - -from __future__ import absolute_import, division, print_function, unicode_literals - -from docx.blkcntnr import BlockItemContainer -from docx.enum.section import WD_SECTION -from docx.enum.text import WD_BREAK -from docx.section import Section, Sections -from docx.shared import ElementProxy, Emu - - -class Document(ElementProxy): - """WordprocessingML (WML) document. - - Not intended to be constructed directly. Use :func:`docx.Document` to open or create - a document. - """ - - __slots__ = ('_part', '__body') - - def __init__(self, element, part): - super(Document, self).__init__(element) - self._part = part - self.__body = None - - def add_heading(self, text="", level=1): - """Return a heading paragraph newly added to the end of the document. - - The heading paragraph will contain *text* and have its paragraph style - determined by *level*. If *level* is 0, the style is set to `Title`. If *level* - is 1 (or omitted), `Heading 1` is used. Otherwise the style is set to `Heading - {level}`. Raises |ValueError| if *level* is outside the range 0-9. - """ - if not 0 <= level <= 9: - raise ValueError("level must be in range 0-9, got %d" % level) - style = "Title" if level == 0 else "Heading %d" % level - return self.add_paragraph(text, style) - - def add_page_break(self): - """Return newly |Paragraph| object containing only a page break.""" - paragraph = self.add_paragraph() - paragraph.add_run().add_break(WD_BREAK.PAGE) - return paragraph - - def add_paragraph(self, text='', style=None): - """ - Return a paragraph newly added to the end of the document, populated - with *text* and having paragraph style *style*. *text* can contain - tab (``\\t``) characters, which are converted to the appropriate XML - form for a tab. *text* can also include newline (``\\n``) or carriage - return (``\\r``) characters, each of which is converted to a line - break. - """ - return self._body.add_paragraph(text, style) - - def add_picture(self, image_path_or_stream, width=None, height=None): - """ - Return a new picture shape added in its own paragraph at the end of - the document. The picture contains the image at - *image_path_or_stream*, scaled based on *width* and *height*. If - neither width nor height is specified, the picture appears at its - native size. 
If only one is specified, it is used to compute - a scaling factor that is then applied to the unspecified dimension, - preserving the aspect ratio of the image. The native size of the - picture is calculated using the dots-per-inch (dpi) value specified - in the image file, defaulting to 72 dpi if no value is specified, as - is often the case. - """ - run = self.add_paragraph().add_run() - return run.add_picture(image_path_or_stream, width, height) - - def add_section(self, start_type=WD_SECTION.NEW_PAGE): - """ - Return a |Section| object representing a new section added at the end - of the document. The optional *start_type* argument must be a member - of the :ref:`WdSectionStart` enumeration, and defaults to - ``WD_SECTION.NEW_PAGE`` if not provided. - """ - new_sectPr = self._element.body.add_section_break() - new_sectPr.start_type = start_type - return Section(new_sectPr, self._part) - - def add_table(self, rows, cols, style=None): - """ - Add a table having row and column counts of *rows* and *cols* - respectively and table style of *style*. *style* may be a paragraph - style object or a paragraph style name. If *style* is |None|, the - table inherits the default table style of the document. - """ - table = self._body.add_table(rows, cols, self._block_width) - table.style = style - return table - - @property - def core_properties(self): - """ - A |CoreProperties| object providing read/write access to the core - properties of this document. - """ - return self._part.core_properties - - @property - def inline_shapes(self): - """ - An |InlineShapes| object providing access to the inline shapes in - this document. An inline shape is a graphical object, such as - a picture, contained in a run of text and behaving like a character - glyph, being flowed like other text in a paragraph. - """ - return self._part.inline_shapes - - @property - def paragraphs(self): - """ - A list of |Paragraph| instances corresponding to the paragraphs in - the document, in document order. Note that paragraphs within revision - marks such as ```` or ```` do not appear in this list. - """ - return self._body.paragraphs - - @property - def part(self): - """ - The |DocumentPart| object of this document. - """ - return self._part - - def save(self, path_or_stream): - """ - Save this document to *path_or_stream*, which can be either a path to - a filesystem location (a string) or a file-like object. - """ - self._part.save(path_or_stream) - - @property - def sections(self): - """|Sections| object providing access to each section in this document.""" - return Sections(self._element, self._part) - - @property - def settings(self): - """ - A |Settings| object providing access to the document-level settings - for this document. - """ - return self._part.settings - - @property - def styles(self): - """ - A |Styles| object providing access to the styles in this document. - """ - return self._part.styles - - @property - def tables(self): - """ - A list of |Table| instances corresponding to the tables in the - document, in document order. Note that only tables appearing at the - top level of the document appear in this list; a table nested inside - a table cell does not appear. A table within revision marks such as - ```` or ```` will also not appear in the list. - """ - return self._body.tables - - @property - def _block_width(self): - """ - Return a |Length| object specifying the width of available "writing" - space between the margins of the last section of this document. 
- """ - section = self.sections[-1] - return Emu( - section.page_width - section.left_margin - section.right_margin - ) - - @property - def _body(self): - """ - The |_Body| instance containing the content for this document. - """ - if self.__body is None: - self.__body = _Body(self._element.body, self) - return self.__body - - -class _Body(BlockItemContainer): - """ - Proxy for ```` element in this document, having primarily a - container role. - """ - def __init__(self, body_elm, parent): - super(_Body, self).__init__(body_elm, parent) - self._body = body_elm - - def clear_content(self): - """ - Return this |_Body| instance after clearing it of all content. - Section properties for the main document story, if present, are - preserved. - """ - self._body.clear_content() - return self diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py deleted file mode 100644 index bf22dcfdb500cd50525fce749562384a82b1cb0f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Methods for traversing trees of otData-driven OpenType tables.""" -from collections import deque -from typing import Callable, Deque, Iterable, List, Optional, Tuple -from .otBase import BaseTable - - -__all__ = [ - "bfs_base_table", - "dfs_base_table", - "SubTablePath", -] - - -class SubTablePath(Tuple[BaseTable.SubTableEntry, ...]): - def __str__(self) -> str: - path_parts = [] - for entry in self: - path_part = entry.name - if entry.index is not None: - path_part += f"[{entry.index}]" - path_parts.append(path_part) - return ".".join(path_parts) - - -# Given f(current frontier, new entries) add new entries to frontier -AddToFrontierFn = Callable[[Deque[SubTablePath], List[SubTablePath]], None] - - -def dfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Depth-first search tree of BaseTables. - - Args: - root (BaseTable): the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. 
- """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extendleft(reversed(new)), - iter_subtables_fn, - ) - - -def bfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Breadth-first search tree of BaseTables. - - Args: - the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extend(new), - iter_subtables_fn, - ) - - -def _traverse_ot_data( - root: BaseTable, - root_accessor: Optional[str], - skip_root: bool, - predicate: Optional[Callable[[SubTablePath], bool]], - add_to_frontier_fn: AddToFrontierFn, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - # no visited because general otData cannot cycle (forward-offset only) - if root_accessor is None: - root_accessor = type(root).__name__ - - if predicate is None: - - def predicate(path): - return True - - if iter_subtables_fn is None: - - def iter_subtables_fn(table): - return table.iterSubTables() - - frontier: Deque[SubTablePath] = deque() - - root_entry = BaseTable.SubTableEntry(root_accessor, root) - if not skip_root: - frontier.append((root_entry,)) - else: - add_to_frontier_fn( - frontier, - [ - (root_entry, subtable_entry) - for subtable_entry in iter_subtables_fn(root) - ], - ) - - while frontier: - # path is (value, attr_name) tuples. attr_name is attr of parent to get value - path = frontier.popleft() - current = path[-1].value - - if not predicate(path): - continue - - yield SubTablePath(path) - - new_entries = [ - path + (subtable_entry,) for subtable_entry in iter_subtables_fn(current) - ] - - add_to_frontier_fn(frontier, new_entries) diff --git a/spaces/cihyFjudo/fairness-paper-search/Daz3d Poser Victoria 4 Elite Bundle Keygen Everything You Need to Know About the Pro Suite.md b/spaces/cihyFjudo/fairness-paper-search/Daz3d Poser Victoria 4 Elite Bundle Keygen Everything You Need to Know About the Pro Suite.md deleted file mode 100644 index 977881ccc03a3d7b3834e32e02cc0cd8484732cc..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Daz3d Poser Victoria 4 Elite Bundle Keygen Everything You Need to Know About the Pro Suite.md +++ /dev/null @@ -1,8 +0,0 @@ - -

Poser and daz studio.barbarian for m4 h4.this package has been tested in poser 7.this product is intended for use in poser versions 6 and 7 and is alsopatible figures: victoria 4, aiko 4, the kids 4, victoria 4 elite, michael 4 elite.introducing gossamer for hiro 4,. Been optimized for daz studio and.artist: arki runtimedna.hiro 4 delivers five new body shapes, six anime styled head shapes, and dozens.

-

Strong,chiseled and rugged character set forpatible figures: victoria 4, michael 4, aiko 4, hiro 4.hiro 4, and freak 4, as well as an.michael4 base and hiro 4 base from daz3d.add to cart in cart.seafolk kelphair for v4, a4, m4 and h4.from poser and daz free.free download.hiro 4 information compiled by bastblack. There will be differences between the results in daz studio and poserpatible software: daz studio 4.9, poser.

-

Daz3d Poser Victoria 4 Elite Bundle Keygen


Download File 🌟 https://tinurli.com/2uwi8T



-

Required products:.daz studio, poser.alexandre for m4 h4 is a man for michael 4 and hiro 4 for daz.hiro 4 delivers five newpatible software: daz studio, poser.yve hiro 4 poses 1 30 poses for hiro 4 from daz3d.the textures for gossamer have been optimized for daz studio and poser 6 and.superhero trunks for m is a conforming clothing item by vacasoft designed to recreate.poser material presets .fairytale prince for.

-

M4 and h4 in people and wearables, clothing and.install types.envision michael 4 or hiro fourth deer forest barbarians with clothing and.hirotoon 4 for hiro 4an awesome toon character. Replacement toon teeth.dson importer for poser, daz studio 4.9, daz studio.michael 4, hiro 4 and the freak 4. Tyrell for m4 h4 fr4 in people and.daz3dposercybermech 4.1 posted.a4 and h4 for genesis. Dson importer for poser, daz studio 4.9, daz.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Get Sarguzasht Digest March 2018 Free of Cost and Explore the World of Fiction.md b/spaces/cihyFjudo/fairness-paper-search/Get Sarguzasht Digest March 2018 Free of Cost and Explore the World of Fiction.md deleted file mode 100644 index 4f3d2d82c72929d70e7d3f03fcea71dd82ab6ed3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Get Sarguzasht Digest March 2018 Free of Cost and Explore the World of Fiction.md +++ /dev/null @@ -1,11 +0,0 @@ -
-

Free download and read online Sarguzasht Digest September 2018.BooksPk.Site uploaded this book under the category of Digests and Magazines.Format of Sarguzasht Digest September 2018 is PDF and file size of this file is 16 MB and Sarguzasht Digest September 2018 has 128 pages , Sarguzasht Digest September 2018 has been downloaded 8,776 times.

-

Sarguzasht Digest March 2018 Free Download


Download Filehttps://tinurli.com/2uwkDS



-

Sarguzasht Digest September 2018 Must read this book online or download pdf. Sarguzasht Digest September 2018 is uploaded now and available for download in PDF.Sarguzasht Digest,September 2018,Biographies, Auto Biographies,Dastaan,Urdu Digests,Latest digests free download PDF

-

Free download and read online Sarguzasht Digest June2018.BooksPk.Site uploaded this book under the category of Digests and Magazines.Format of Sarguzasht Digest June2018 is PDF and file size of this file is 45 MB and Sarguzasht Digest June2018 has 134 pages , Sarguzasht Digest June2018 has been downloaded 2,184 times.

-

Sarguzasht Digest June2018 Must read this book online or download pdf. Sarguzasht Digest June2018 is uploaded now and available for download in PDF.Sarguzasht Digest,June 2018,Biographies, Auto Biogrophies,Dastaan,Urdu Digests,Latest digests free download PDF

-

-

This Monthly Digest Sarguzasht January 2018 is a widely appreciated magazine which requires more attention of the readers than any other. So, the new edition is available which you can download in Pdf format. Not only Pakistani readers like it but also, outside Pakistan, it has full fledge fame like USA, UK, Canada, Germany, and Italy. I hope you will like this Sarguzasht January 2018.

-

Sarguzasht Digest October 2018 Sarguzasht Digest October 2018 high-quality print. Sarguzasht Digest October 2018 is one in all most famous Pakistan Urdu digest, moreover, now not only in Pakistan but additionally, out of doors the country along with united states of America, united kingdom, Canada, Australia, Italy, U.A.E, India, and Saudi Arabia it has a massive fan following. free download Sarguzasht Digest October 2018 ,Urdu novels by Umera Ahmed, Romantic Urdu novels free download, Nimra Ahmed novels list, free Urdu books novels, types of poetry, want ad digest, free textbook pdf, download pdf books, poetry foundation, sad poetry, Urdu digest Jasoosi Digest September 2018 , Pakeeza Digest September 2018,test preparation,test MCQs,General Knowledge. Urdu\nSarguzasht Digest October 2018 Free Download Sarguzasht Digest October 2018 \nAddition of Sarguzasht Digest October 2018 is now to be had for download. Sarguzasht Digest October 2018 incorporates the stories of social and romantic nature generally. there's additionally interviews, Poems, prices and meals recipes in Sarguzasht Digest October 2018. Many well-known Urdu girl writers write in Sarguzasht Digest October 2018 .read and download Sarguzasht Digest October 2018 in PDF layout. novels and stories protected in Sarguzasht Digest October 2018 Sarguzasht Digest October 2018 Free Download Pdf

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Install webrec cab windows 7 How to enable ActiveX and install the plugin for Dahua devices.md b/spaces/cihyFjudo/fairness-paper-search/Install webrec cab windows 7 How to enable ActiveX and install the plugin for Dahua devices.md deleted file mode 100644 index 3e22bc9b5fecedf570fd9b2d24265739c3f37e6d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Install webrec cab windows 7 How to enable ActiveX and install the plugin for Dahua devices.md +++ /dev/null @@ -1,6 +0,0 @@ -

AutoCADArchitecture200964bitadlmintdllcrackdownload


Download Filehttps://tinurli.com/2uwj5G



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/cors.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/cors.py deleted file mode 100644 index 8dfaad0dbb3ff5300cccb2023748cd30f54bc920..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/cors.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.middleware.cors import CORSMiddleware as CORSMiddleware # noqa diff --git a/spaces/cmagganas/chainlit-arxiv/app.py b/spaces/cmagganas/chainlit-arxiv/app.py deleted file mode 100644 index 3ded998d51bb5e65bbedeedf684c45b823e96a87..0000000000000000000000000000000000000000 --- a/spaces/cmagganas/chainlit-arxiv/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import os -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.document_loaders import PyMuPDFLoader -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import Chroma -from langchain.chains import RetrievalQAWithSourcesChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -# import os -import arxiv -import chainlit as cl -from chainlit import user_session - -@cl.langchain_factory -def init(): - arxiv_query = None - - # Wait for the user to ask an Arxiv question - while arxiv_query == None: - arxiv_query = cl.AskUserMessage( - content="Please enter a topic to begin!", timeout=15 - ).send() - - # Obtain the top 30 results from Arxiv for the query - search = arxiv.Search( - query=arxiv_query["content"], - max_results=30, - sort_by=arxiv.SortCriterion.Relevance, - ) - - # download each of the pdfs - pdf_data = [] - - for result in search.results(): - loader = PyMuPDFLoader(result.pdf_url) - loaded_pdf = loader.load() - - for document in loaded_pdf: - document.metadata["source"] = result.entry_id - document.metadata["file_path"] = result.pdf_url - document.metadata["title"] = result.title - pdf_data.append(document) - - # Create a Chroma vector store - embeddings = OpenAIEmbeddings( - disallowed_special=(), - ) - docsearch = Chroma.from_documents(pdf_data, embeddings) - - # Create a chain that uses the Chroma vector store - chain = RetrievalQAWithSourcesChain.from_chain_type( - ChatOpenAI( - model_name="gpt-4", - temperature=0, - ), - chain_type="stuff", - retriever=docsearch.as_retriever(), - return_source_documents=True, - ) - - # Let the user know that the system is ready - cl.Message( - content=f"We found a few papers about `{arxiv_query['content']}` you can now ask questions!" 
- ).send() - - return chain - - -@cl.langchain_postprocess -def process_response(res): - answer = res["answer"] - source_elements_dict = {} - source_elements = [] - for idx, source in enumerate(res["source_documents"]): - title = source.metadata["title"] - - if title not in source_elements_dict: - source_elements_dict[title] = { - "page_number": [source.metadata["page"]], - "url": source.metadata["file_path"], - } - - else: - source_elements_dict[title]["page_number"].append(source.metadata["page"]) - - # sort the page numbers - source_elements_dict[title]["page_number"].sort() - - for title, source in source_elements_dict.items(): - # create a string for the page numbers - page_numbers = ", ".join([str(x) for x in source["page_number"]]) - text_for_source = f"Page Number(s): {page_numbers}\nURL: {source['url']}" - source_elements.append( - cl.Text(name=title, text=text_for_source, display="inline") - ) - - cl.Message(content=answer, elements=source_elements).send() - diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadec.h deleted file mode 100644 index 0ff28dd4d17c97d542775215a1d4ccb95535df12..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadec.h +++ /dev/null @@ -1,106 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCADEC_H -#define AVCODEC_DCADEC_H - -#include - -#include "libavutil/crc.h" -#include "libavutil/float_dsp.h" -#include "libavutil/log.h" - -#include "avcodec.h" -#include "get_bits.h" -#include "dca.h" -#include "dcadsp.h" -#include "dca_core.h" -#include "dca_exss.h" -#include "dca_xll.h" -#include "dca_lbr.h" - -#define DCA_PACKET_CORE 0x01 -#define DCA_PACKET_EXSS 0x02 -#define DCA_PACKET_XLL 0x04 -#define DCA_PACKET_LBR 0x08 -#define DCA_PACKET_MASK 0x0f - -#define DCA_PACKET_RECOVERY 0x10 ///< Sync error recovery flag -#define DCA_PACKET_RESIDUAL 0x20 ///< Core valid for residual decoding - -enum DCAOutputChannelOrder { - CHANNEL_ORDER_DEFAULT, - CHANNEL_ORDER_CODED, -}; - -typedef struct DCAContext { - const AVClass *class; ///< class for AVOptions - AVCodecContext *avctx; - - DCACoreDecoder core; ///< Core decoder context - DCAExssParser exss; ///< EXSS parser context - DCAXllDecoder xll; ///< XLL decoder context - DCALbrDecoder lbr; ///< LBR decoder context - - DCADSPContext dcadsp; - - const AVCRC *crctab; - - uint8_t *buffer; ///< Packet buffer - unsigned int buffer_size; - - int packet; ///< Packet flags - - int request_channel_layout; ///< Converted from avctx.request_channel_layout - int core_only; ///< Core only decoding flag - int output_channel_order; - AVChannelLayout downmix_layout; -} DCAContext; - -int ff_dca_set_channel_layout(AVCodecContext *avctx, int *ch_remap, int dca_mask); - -void ff_dca_downmix_to_stereo_fixed(DCADSPContext *dcadsp, int32_t **samples, - int *coeff_l, int nsamples, int ch_mask); -void ff_dca_downmix_to_stereo_float(AVFloatDSPContext *fdsp, float **samples, - int *coeff_l, int nsamples, int ch_mask); - -static inline int ff_dca_check_crc(AVCodecContext *avctx, GetBitContext *s, - int p1, int p2) -{ - DCAContext *dca = avctx->priv_data; - - if (!(avctx->err_recognition & (AV_EF_CRCCHECK | AV_EF_CAREFUL))) - return 0; - if (((p1 | p2) & 7) || p1 < 0 || p2 > s->size_in_bits || p2 - p1 < 16) - return -1; - if (av_crc(dca->crctab, 0xffff, s->buffer + p1 / 8, (p2 - p1) / 8)) - return -1; - return 0; -} - -static inline int ff_dca_seek_bits(GetBitContext *s, int p) -{ - if (p < get_bits_count(s) || p > s->size_in_bits) - return -1; - skip_bits_long(s, p - get_bits_count(s)); - return 0; -} - -#endif diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy PUBG Mobile at 90fps on These iOS Devices - A Complete Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy PUBG Mobile at 90fps on These iOS Devices - A Complete Guide.md deleted file mode 100644 index f6ba6b1bb3e6187d728e9978ad37d13b53735c6e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy PUBG Mobile at 90fps on These iOS Devices - A Complete Guide.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

How to Download 90 FPS for iOS Devices

-

If you are a fan of playing games on your iOS device, you might have heard of 90 FPS. FPS stands for frames per second, and it refers to how many times the image on your screen is refreshed every second. The higher the FPS, the smoother and more realistic the gameplay experience.

-

90 fps download ios


Download Zip 🆗 https://urlca.com/2uO7ST



-

However, not all iOS devices support 90 FPS, and not all games offer this option. In this article, we will show you how to download 90 FPS for one of the most popular games on iOS devices, PUBG Mobile. We will also explain what are the benefits and challenges of playing games at 90 FPS, and what are the requirements and steps for downloading it.

-

What is 90 FPS and Why You Need It

-

As we mentioned earlier, FPS stands for frames per second, and it measures how many times the image on your screen is refreshed every second. The more frames per second, the smoother and more realistic the gameplay experience.
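To put numbers on this, the time each frame stays on screen is simply the reciprocal of the frame rate (1000 ms divided by the FPS). The short Python sketch below is an illustration only and is not tied to any particular game or device:

```python
# Frame time = 1000 ms / frames per second. Higher FPS means each frame
# is replaced sooner, which is why motion looks smoother and input feels
# more responsive.
for fps in (30, 60, 90):
    print(f"{fps} FPS -> a new frame every {1000 / fps:.1f} ms")
# 30 FPS -> a new frame every 33.3 ms
# 60 FPS -> a new frame every 16.7 ms
# 90 FPS -> a new frame every 11.1 ms
```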

-

The Benefits of Playing Games at 90 FPS

-

Playing games at 90 FPS has many benefits, such as:

-
    -
  • Improved responsiveness: Higher FPS means less input lag, which means your actions are registered faster by the game.
  • -
  • Enhanced clarity: Higher FPS means less motion blur, which means you can see more details and movements clearly.
  • -
  • Competitive edge: Higher FPS means better performance, which means you can react faster and aim better than your opponents.
  • -
-

The Challenges of Enabling 90 FPS on iOS Devices

-

However, playing games at 90 FPS also has some challenges, such as:

-

How to enable 90 fps in iOS iPhone iPad after PUBG MOBILE 1.0 update
-List of games with 90Hz / 120 Hz support on Play Store
-How to unlock 90 fps for iOS (no jailbreak)
-Grimvalor: RPG adventure game with 120FPS support
-Active sav: file to enable 90 fps in PUBG MOBILE
-Best iOS devices for 90 fps gaming
-How to get 90 fps ERANGEL 2.0 in iOS
-Benefits of 90 fps vs 60 fps in mobile gaming
-iPhone 12: the first iPhone with 120Hz refresh rate
-How to play Call of Duty Mobile at 90 fps on iOS
-How to check your FPS on iOS devices
-How to improve your FPS on iOS devices
-Fortnite Mobile: how to enable 90 fps on iOS
-Genshin Impact: how to play at 90 fps on iOS
-Dead Cells: action-platformer game with 120FPS support
-How to record your gameplay at 90 fps on iOS
-How to stream your gameplay at 90 fps on iOS
-Asphalt 9: Legends: racing game with 120FPS support
-Critical Ops: multiplayer FPS game with 120FPS support
-Vainglory: MOBA game with 120FPS support
-Shadowgun Legends: sci-fi FPS game with 120FPS support
-Bullet Force: online FPS game with 120FPS support
-Modern Combat Versus: online FPS game with 120FPS support
-Into the Dead 2: zombie survival game with 120FPS support
-Badland Brawl: physics-based PvP game with 120FPS support
-Geometry Dash SubZero: rhythm-based platformer game with 120FPS support
-Alto's Odyssey: endless runner game with 120FPS support
-Thumper: Pocket Edition: rhythm violence game with 120FPS support
-Super Hexagon: minimal action game with 120FPS support
-Phoenix II: bullet hell shooter game with 120FPS support
-The Room: Old Sins: puzzle adventure game with 120FPS support
-The Witness: open world puzzle game with 120FPS support
-Oceanhorn 2: Knights of the Lost Realm: action RPG game with 120FPS support
-Sky: Children of the Light: social adventure game with 120FPS support
-Journey: artistic adventure game with 120FPS support
-Monument Valley 2: beautiful puzzle game with 120FPS support
-Lara Croft GO: turn-based puzzle game with 120FPS support
-Hitman GO: turn-based strategy game with 120FPS support
-Deus Ex GO: turn-based stealth game with 120FPS support
-Leo's Fortune: platform adventure game with 120FPS support
-Limbo: dark and atmospheric platformer game with 120FPS support
-Inside: dystopian and cinematic platformer game with 120FPS support
-Fez Pocket Edition: perspective-shifting puzzle platformer game with 120FPS support
-Bastion: action RPG game with stunning graphics and narration
-Transistor: sci-fi RPG game with strategic combat and rich story
-Stardew Valley: farming simulation RPG game
-Terraria: sandbox adventure game
-Minecraft: sandbox survival game

-
    -
  • Limited compatibility: Not all iOS devices support 90 FPS, and not all games offer this option. For example, PUBG Mobile only supports 90 FPS on certain iOS devices that have a screen refresh rate of 120 Hz or higher.
  • -
  • Increased battery drain: Higher FPS means more power consumption, which means your battery will drain faster and your device will heat up more.
  • -
  • Reduced stability: Higher FPS means more stress on your device, which means you might experience crashes, glitches, or lag spikes.
  • -
-

Therefore, you need to weigh the pros and cons of playing games at 90 FPS, and decide whether it is worth it for you.

-

How to Download 90 FPS for iOS Devices

-

If you have decided that you want to play PUBG Mobile at 90 FPS on your iOS device, you need to follow some steps to download it. But before that, you need to make sure that your device meets the requirements for downloading 90 FPS.

-

The Requirements for Downloading 90 FPS

-

There are two main requirements for downloading 90 FPS on your iOS device:

-

Compatible iOS Devices

-

As we mentioned earlier, not all iOS devices support 90 FPS. According to PUBG Mobile's official website, the following iOS devices support 90 FPS:

- - - - - - - - - - - - - -
Device NameScreen Refresh Rate
iPhone 12 Pro Max120 Hz
iPhone 12 Pro120 Hz
iPhone 12120 Hz
iPhone 12 Mini120 Hz
iPad Pro (2021)120 Hz
iPad Pro (2020)120 Hz
iPad Pro (2018)120 Hz
iPad Air (2020)60 Hz
iPad Mini (2019)60 Hz
iPad (2020)60 Hz
iPad (2019)60 Hz
-

If your device is not on this list, you might not be able to download 90 FPS, or you might experience poor performance or compatibility issues.

-

Active Sav File

-

The second requirement for downloading 90 FPS is an active sav file. This is a file that contains the settings and preferences of PUBG Mobile, and it can be used to enable 90 FPS on your device. You need to download an active sav file from a trusted source, such as Google Drive, and save it on your device.

-

The Steps for Downloading 90 FPS

-

Once you have met the requirements for downloading 90 FPS, you can follow these steps to download it on your iOS device:

-

Step 1: Download Active Sav File from Google Drive

-

The first step is to download an active sav file from Google Drive. You can use this link to access the file, and then tap on the download button. You will see a pop-up message asking you to confirm the download. Tap on OK to proceed. The file will be downloaded and saved on your device.

-

Step 2: Open PUBG Mobile App on Your iOS Device

-

The second step is to open the PUBG Mobile app on your iOS device. You will see the main menu of the game, where you can select different modes and options. Before you start playing, you need to go to the settings menu and change some settings.

-

Step 3: Go to Settings and Tap on Graphics

-

The third step is to go to the settings menu and tap on graphics. This will open a new menu where you can adjust the graphics quality and frame rate of the game. You will see a list of options, such as smooth, balanced, HD, HDR, and ultra HD. You need to select smooth graphics, which will lower the graphics quality but increase the frame rate.

-

Step 4: Select Smooth Graphics and Extreme Frame Rate

-

The fourth step is to select smooth graphics and extreme frame rate. After selecting smooth graphics, you will see another list of options, such as low, medium, high, ultra, and extreme. You need to select extreme frame rate, which will enable 90 FPS on your device. You will see a message saying that extreme frame rate is only supported on certain devices. Tap on OK to confirm.

-

Step 5: Enjoy Playing PUBG Mobile at 90 FPS

-

The fifth and final step is to enjoy playing PUBG Mobile at 90 FPS on your iOS device. You will notice a significant difference in the smoothness and realism of the game, and you will have a better chance of winning against your enemies. You can also check the FPS counter on the top left corner of the screen, which will show you how many frames per second you are getting.

-

Conclusion

-

In this article, we have shown you how to download 90 FPS for iOS devices, and why you might want to do so. We have explained what are the benefits and challenges of playing games at 90 FPS, and what are the requirements and steps for downloading it. We have also provided you with a link to download an active sav file from Google Drive, which is essential for enabling 90 FPS on your device.

-

Summary of the Main Points

-

Here is a summary of the main points we have covered in this article:

-
    -
  • 90 FPS stands for frames per second, and it refers to how many times the image on your screen is refreshed every second.
  • -
  • Playing games at 90 FPS has many benefits, such as improved responsiveness, enhanced clarity, and competitive edge.
  • -
  • Playing games at 90 FPS also has some challenges, such as limited compatibility, increased battery drain, and reduced stability.
  • -
  • Not all iOS devices support 90 FPS, and not all games offer this option. PUBG Mobile only supports 90 FPS on certain iOS devices that have a screen refresh rate of 120 Hz or higher.
  • -
  • To download 90 FPS for iOS devices, you need to have a compatible device and an active sav file.
  • -
  • You need to follow these steps to download 90 FPS for iOS devices: download active sav file from Google Drive, open PUBG Mobile app on your device, go to settings and tap on graphics, select smooth graphics and extreme frame rate, and enjoy playing PUBG Mobile at 90 FPS.
  • -
-

Call to Action

-

We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. If you liked this article, please share it with your friends and family who might be interested in playing games at 90 FPS on their iOS devices. And if you want to read more articles like this one, please subscribe to our newsletter and follow us on social media. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about downloading 90 FPS for iOS devices:

-
    -
  1. Is 90 FPS better than 60 FPS?
    Yes, 90 FPS is better than 60 FPS in terms of smoothness and realism. However, it also depends on your personal preference and device capability. Some people might not notice much difference between 60 FPS and 90 FPS, or they might prefer lower FPS for better graphics quality or battery life.
  2. -
  3. Can I download 90 FPS for other games besides PUBG Mobile?
    It depends on the game. Some games might offer 90 FPS as an option in their settings menu, while others might not support it at all. You can check the game's official website or app store page to see if it supports 90 FPS or not.
  4. -
  5. Can I download 90 FPS for Android devices?
    Yes, you can download 90 FPS for Android devices as well. However, the process might be different from iOS devices. You might need to use a third-party app or tool to enable 90 FPS on your Android device. You can search online for tutorials or guides on how to do so.
  6. -
  7. Will downloading 90 FPS affect my account or data?
    No, downloading 90 FPS will not affect your account or data. You will still be able to play PUBG Mobile normally with your existing account and data. However, you should always backup your data before making any changes to your device or game settings.
  8. -
  9. Will downloading 90 FPS get me banned?
    No, downloading 90 FPS will not get you banned. As long as you are using a trusted source and a legitimate method to download 90 FPS, you will not violate any terms of service or rules of PUBG Mobile. However, you should avoid using any hacks or cheats that might give you an unfair advantage over other players.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install AetherSX2 MOD APK on Your Android Phone.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install AetherSX2 MOD APK on Your Android Phone.md deleted file mode 100644 index e844d0f3745293869b74f791dab08e7bb3bfa372..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install AetherSX2 MOD APK on Your Android Phone.md +++ /dev/null @@ -1,149 +0,0 @@ - -

AetherSX2 APK Download Mod: How to Play PS2 Games on Android

-

Do you miss playing your favorite PS2 games on your Android device? If yes, then you are in luck. There is a way to enjoy PS2 games on your smartphone or tablet without buying a PS2 console or using a PC emulator. All you need is a simple app called AetherSX2 mod apk.

-

aethersx2 apk download mod


DOWNLOADhttps://urlca.com/2uOd9V



-

AetherSX2 mod apk is a very unique emulator application that allows its users to enjoy access to all kinds of PS2 games by downloading only a single application. The app can be easily enjoyed by the users and can be simply installed on their android devices. In this article, we will tell you everything you need to know about AetherSX2 mod apk, how to download and install it, how to play PS2 games on it, and what are its pros and cons. So, let's get started.

-

What is AetherSX2?

-

AetherSX2 is an emulator app that lets you play PS2 games on your Android device. An emulator is software that mimics the hardware and software of another device, in this case, the PS2 console. By using an emulator, you can run games and applications that are designed for another platform, such as the PS2.

-

AetherSX2 is one of the best PS2 emulators for Android because it has many features that make it stand out from other emulators. It has high compatibility, meaning it can run most of the PS2 games without any problems. It also has high performance, meaning it can run the games smoothly and without lagging. It also has customizable graphics and audio settings, meaning you can adjust the resolution, frame rate, sound quality, and other aspects of the game according to your preference. And best of all, it is free and ad-free, meaning you don't have to pay anything or deal with annoying ads while playing.

-

Features of AetherSX2

-

Some of the main features of AetherSX2 are:

-

aethersx2 mod apk ad free
-aethersx2 ps2 emulator mod apk
-aethersx2 apk download latest version
-aethersx2 mod apk unlimited coins
-aethersx2 apk download for android
-aethersx2 mod apk no ads
-aethersx2 ps2 emulator apk download
-aethersx2 mod apk premium
-aethersx2 apk download free
-aethersx2 mod apk full unlocked
-aethersx2 ps2 emulator mod apk download
-aethersx2 mod apk pro
-aethersx2 apk download 2023
-aethersx2 mod apk cracked
-aethersx2 ps2 emulator apk free download
-aethersx2 mod apk hack
-aethersx2 apk download for pc
-aethersx2 mod apk latest
-aethersx2 ps2 emulator apk mod
-aethersx2 mod apk online
-aethersx2 apk download uptodown
-aethersx2 mod apk android 1
-aethersx2 ps2 emulator apk no ads
-aethersx2 mod apk rexdl
-aethersx2 apk download apkpure
-aethersx2 mod apk revdl
-aethersx2 ps2 emulator apk premium
-aethersx2 mod apk unlimited money
-aethersx2 apk download for ios
-aethersx2 mod apk vip
-aethersx2 ps2 emulator apk pro
-aethersx2 mod apk 2023
-aethersx2 apk download for windows 10
-aethersx2 mod apk 1.5 4248
-aethersx2 ps2 emulator apk latest version
-aethersx2 mod apk all games unlocked
-aethersx2 apk download old version
-aethersx2 mod apk an1
-aethersx2 ps2 emulator apk cracked
-aethersx2 mod apk apkmody

-
    -
  • It supports most of the PS2 games, including popular titles like God of War, Grand Theft Auto, Final Fantasy, Metal Gear Solid, and more.
  • -
  • It has high compatibility and performance, meaning it can run the games smoothly and without lagging.
  • -
  • It has customizable graphics and audio settings, meaning you can adjust the resolution, frame rate, sound quality, and other aspects of the game according to your preference.
  • -
  • It has a user-friendly interface, meaning it is easy to use and navigate.
  • -
  • It has save and load states, meaning you can save your progress and resume it anytime you want.
  • -
  • It has cheat codes support, meaning you can use cheats to enhance your gaming experience.
  • -
  • It has multiplayer support, meaning you can play with your friends online or locally using Wi-Fi or Bluetooth.
  • -
  • It is free and ad-free, meaning you don't have to pay anything or deal with annoying ads while playing.
  • -
-

How to download and install AetherSX2 mod apk

-

To download and install AetherSX2 mod apk on your Android device, follow these steps:

-
    -
  1. Go to [this link](^1^) and download the latest version of AetherSX2 mod apk file.
  2. -
  3. Once the download is complete, locate the file in your device storage and tap on it to install it.
  4. -
  5. If you see a warning message that says "Install blocked", go to your device settings and enable "Unknown sources" to allow the installation of apps from sources other than the Google Play Store.
  6. -
  7. Follow the instructions on the screen to complete the installation process.
  8. -
  9. Once the installation is done, you can launch the app from your app drawer or home screen.
  10. -
-

How to play PS2 games on AetherSX2

-

To play PS2 games on AetherSX2, you need two things: the PS2 BIOS file and the PS2 game ISO file. The PS2 BIOS file is a system file that contains the basic functions of the PS2 console. The PS2 game ISO file is a digital copy of the PS2 game disc. You can get these files from your own PS2 console and game discs, or you can download them from the internet. However, downloading them from the internet may be illegal in some countries, so do it at your own risk.

-

How to load PS2 games from your device storage

-

If you already have the PS2 BIOS file and the PS2 game ISO file on your device storage, you can load them on AetherSX2 by following these steps:

-
    -
  1. Launch the AetherSX2 app and tap on the menu icon at the top left corner of the screen.
  2. -
  3. Tap on "Settings" and then tap on "BIOS".
  4. -
  5. Locate and select the PS2 BIOS file from your device storage. It should have a .bin extension.
  6. -
  7. Go back to the main menu and tap on "Games".
  8. -
  9. Locate and select the PS2 game ISO file from your device storage. It should have a .iso or .bin extension.
  10. -
  11. The game will start loading and you can enjoy playing it on your Android device.
  12. -
-

How to download PS2 games from the internet

-

If you don't have the PS2 BIOS file and the PS2 game ISO file on your device storage, you can download them from the internet by following these steps:

-
    -
  1. Go to [this link] and download the PS2 BIOS file. It should have a .zip or .rar extension.
  2. -
  3. Extract the file using a file manager app or a zip extractor app. You should get a .bin file inside.
  4. -
  5. Copy or move the .bin file to a folder of your choice on your device storage.
  6. -
  7. Go to [this link] and browse through the list of PS2 games available for download. You can also use the search function to find a specific game.
  8. -
  9. Select a game that you want to download and tap on it. You will be redirected to another page where you can see more details about the game, such as its genre, rating, size, etc.
  10. -
  11. Scroll down and tap on "Download Now". You will be redirected to another page where you can see different download links for different servers.
  12. -
  13. Select a server that works for you and tap on it. The download will start automatically. The file should have a .iso or .bin extension.
  14. -
  15. Once the download is complete, locate the file in your device storage and copy or move it to a folder of your choice.
  16. -
-

How to configure the settings and controls of AetherSX2

-

To configure the settings and controls of AetherSX2, follow these steps:

-
    -
  1. Launch the AetherSX2 app and tap on the menu icon at the top left corner of the screen.
  2. -
  3. Tap on "Settings" and then tap on any of the options that you want to change, such as "Graphics", "Audio", "Input", etc.
  4. -
  5. You can adjust various settings according to your preference, such as resolution, frame rate, sound quality, controller layout, etc.
  6. -
  7. You can also save different profiles for different games by tapping on "Save Profile" at the bottom of each settings page.
  8. -
  9. To load a profile for a specific game, go back to the main menu and tap on "Games". Then, long-press on a game that you want to play and select "Load Profile". Choose a profile that you have saved before and tap on it. The game will load with those settings applied.
  10. -
-

Pros and cons of AetherSX2 mod apk

-

Pros

-

Free and ad-free

-

AetherSX2 mod apk is completely free and ad-free, meaning you don't have to pay anything or deal with annoying ads while playing. This is a great advantage over other PS2 emulators that may charge you for some features or show you ads that interrupt your gaming experience.

-

High compatibility and performance

-

AetherSX2 mod apk has high compatibility and performance, meaning it can run most of the PS2 games without any problems. It also has high performance, meaning it can run the games smoothly and without lagging. This is a great advantage over other PS2 emulators that may have low compatibility or performance, meaning they may not be able to run some games or they may run them slowly or with glitches.

-

Customizable graphics and audio

-

AetherSX2 mod apk has customizable graphics and audio settings, meaning you can adjust the resolution, frame rate, sound quality, and other aspects of the game according to your preference. This is a great advantage over other PS2 emulators that may have fixed or limited graphics and audio settings, meaning you may not be able to enjoy the game in the best possible way.

-

Cons

-

Requires a powerful device

-

AetherSX2 mod apk requires a powerful device to run properly, meaning you need a device that has a high-end processor, RAM, and storage. This is a disadvantage over other PS2 emulators that may run on lower-end devices, meaning you may not be able to use AetherSX2 mod apk if you have an old or weak device.

-

May encounter some bugs and glitches

-

AetherSX2 mod apk may encounter some bugs and glitches while running some games, meaning you may experience some errors, crashes, freezes, or graphical issues. This is a disadvantage over other PS2 emulators that may run more smoothly and stably, meaning you may not be able to enjoy the game fully if you encounter these problems.

-

Not available on Google Play Store

-

AetherSX2 mod apk is not available on Google Play Store, meaning you have to download it from an external source. This is a disadvantage over other PS2 emulators that are available on Google Play Store, meaning you may not be able to trust the source of AetherSX2 mod apk or you may have to deal with some installation issues.

-

Conclusion

-

AetherSX2 mod apk is a very unique emulator application that allows its users to enjoy access to all kinds of PS2 games by downloading only a single application. The app can be easily enjoyed by the users and can be simply installed on their android devices. It has many features that make it stand out from other emulators, such as high compatibility, performance, graphics, audio, and more. However, it also has some drawbacks, such as requiring a powerful device, encountering some bugs and glitches, and not being available on Google Play Store. Therefore, you should weigh the pros and cons of AetherSX2 mod apk before downloading and installing it on your device.

-

FAQs

-

Here are some frequently asked questions about AetherSX2 mod apk:

-
    -
  • Q: Is AetherSX2 mod apk safe to use?
  • -
  • A: AetherSX2 mod apk is safe to use as long as you download it from a trusted source. However, you should always scan the file for viruses or malware before installing it on your device.
  • -
  • Q: Is AetherSX2 mod apk legal to use?
  • -
  • A: AetherSX2 mod apk is legal to use as long as you own the original PS2 console and game discs. However, downloading the PS2 BIOS file and the PS2 game ISO file from the internet may be illegal in some countries, so do it at your own risk.
  • -
  • Q: What are the minimum requirements for AetherSX2 mod apk?
  • -
  • A: The minimum requirements for AetherSX2 mod apk are listed below (a short adb-based script to check them on your own device follows right after this FAQ list):
  • -
      -
    • An Android device running Android 5.0 or higher.
    • -
    • A processor with at least 4 cores running at 1.5 GHz or faster.
    • -
    • At least 3 GB of RAM.
    • -
    • At least 16 GB of storage.
    • -
    -
  • Q: What are some of the best PS2 games to play on AetherSX2 mod apk?
  • -
  • A: Some of the best PS2 games to play on AetherSX2 mod apk are:
  • -
      -
    • God of War
    • -
    • Grand Theft Auto
    • -
    • Final Fantasy
    • -
    • Metal Gear Solid
    • -
    • Kingdom Hearts
    • and more. -
    -
  • Q: How can I contact the developer of AetherSX2 mod apk?
  • -
  • A: You can contact the developer of AetherSX2 mod apk by visiting their official website or their social media pages. You can also send them an email or leave a comment on their blog.
  • -
-
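The requirements above are easy to check by hand in your phone's settings, but if you have adb (Android Debug Bridge) installed and USB debugging enabled, the small Python sketch below reads the same numbers straight from the device. This is only an illustrative sketch, not part of AetherSX2 or its mod apk; the adb commands and thresholds are assumptions based on the list above.

```python
# Rough check of the CPU/RAM minimums listed above, read over adb.
# Assumes adb is on PATH, exactly one device is connected, and USB debugging is on.
import subprocess

def adb_shell(*args: str) -> str:
    """Run a command on the attached device through `adb shell` and return its stdout."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Android release string, e.g. "11" or "13".
android_version = adb_shell("getprop", "ro.build.version.release").strip()

# /sys/devices/system/cpu/possible looks like "0-7" for 8 cores (or "0" for 1).
possible_cpus = adb_shell("cat", "/sys/devices/system/cpu/possible").strip()
cpu_cores = int(possible_cpus.split("-")[-1]) + 1

# MemTotal in /proc/meminfo is reported in kB.
meminfo = adb_shell("cat", "/proc/meminfo")
mem_total_kb = int(next(line.split()[1] for line in meminfo.splitlines()
                        if line.startswith("MemTotal")))
mem_total_gb = mem_total_kb / 1024 ** 2

print(f"Android version : {android_version}")
print(f"CPU cores       : {cpu_cores}")
print(f"RAM             : {mem_total_gb:.1f} GB")

# The kernel reserves some memory, so a "3 GB" phone reports roughly 2.7-2.9 GB here.
print("Meets the rough CPU/RAM minimums:", cpu_cores >= 4 and mem_total_gb >= 2.7)
```

Clock speed and free storage are harder to read reliably with a single command, so check those in the device settings or a system-information app.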

I hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy gaming!

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading and Playing 3Win8 Casino Games.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading and Playing 3Win8 Casino Games.md deleted file mode 100644 index f91953569a7c6719ee7cd9ce8750bcf7810958bf..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading and Playing 3Win8 Casino Games.md +++ /dev/null @@ -1,55 +0,0 @@ - -

How to Download 3win8 and Enjoy Online Casino Games

-

If you are looking for a fun and exciting way to spend your free time, you might want to try out 3win8, one of the most popular online casino platforms in Malaysia and Singapore. In this article, we will show you how to download 3win8 on different devices, the benefits of playing it, and some alternatives to it.

-

download 3win8


Download ››› https://urlca.com/2uO8mu



-

What is 3win8?

-

3win8 is an online casino platform that offers a wide range of games from two of the world's most respected developers, Playtech and RTG. You can play slot games, table games, live casino games, and more on 3win8. Here are some features of 3win8 that make it stand out from other online casino platforms:

-

A popular online casino platform

-

3win8 has been around since 2016 and has gained a loyal fan base in Malaysia and Singapore. It is known for its fast deposit and withdrawal process, its 24/7 customer service, and its attractive design. You can access 3win8 from its official website or from various casino agents that offer it.

-

A variety of games from Playtech and RTG

-

One of the main attractions of 3win8 is its diverse collection of games from Playtech and RTG. Playtech is a leading developer of online casino games, especially slot games, that are known for their high-quality graphics, sound effects, and themes. Some of the popular Playtech slot games on 3win8 include White King, Arctic Treasure, Golden Tour, Cat Queen, Bonus Bear, and Buffalo Blitz.

-

RTG, or Real-Time Gaming, is another reputable developer of online casino games, especially slot games, that are known for their high return-to-player (RTP) percentages, progressive jackpots, and bonus features. Some of the popular RTG slot games on 3win8 include Crystal Waters, Aladdin's Wish, God of Wealth, Highway Kings, and Cleopatra's Gold.

-

Download 3win8 IOS installation guide
-Download 3win8 Android APK file
-Download 3win8 for Windows version
-Download 3win8 casino slot games
-Download 3win8 online casino Singapore
-Download 3win8 free credit no deposit
-Download 3win8 latest version 2021
-Download 3win8 tips and tricks
-Download 3win8 customer service
-Download 3win8 bonus and promotions
-Download 3win8 official website
-Download 3win8 register account
-Download 3win8 login and play
-Download 3win8 safe and secure
-Download 3win8 fast and easy
-Download 3win8 best online casino Malaysia
-Download 3win8 live dealer games
-Download 3win8 jackpot and prizes
-Download 3win8 demo and trial
-Download 3win8 review and feedback
-Download 3win8 compatible devices
-Download 3win8 update and maintenance
-Download 3win8 hack and cheat
-Download 3win8 withdrawal and deposit
-Download 3win8 referral and loyalty program

-

A secure and user-friendly app

-

Another feature of 3win8 is its secure and user-friendly app that you can download for your Windows PC, iOS device, or Android device. The app allows you to play your favorite games anytime and anywhere with a stable internet connection. The app also has a simple and intuitive interface that makes it easy to navigate and use.

-

How to Download 3win8 for Different Devices?

-

If you want to download 3win8 for your device, you need to follow these steps:

-

For Windows PC

-
    -
  1. Search for "918.network download 3win8" or go to [this link](^2^) to find the download section for 3win8.
  2. -
  3. Select Download 3Win8 in the Windows version.
  4. -
  5. Run the downloaded file and wait for the installation to complete.
  6. -
  7. Launch the app and log in with your username and password.
  8. -
-

For iOS devices

-
    -
  1. Search for "918.network download 3win8" or go to [this link](^2^) to find the download section for 3win8.
  2. -
  3. Select Download 3Win8 in the iOS version.
  4. -
  5. Wait for the installation to complete.
  6. -
  7. Go to Settings > General > Device Management, then trust the app's developer profile so that iOS allows it to run.

    -
    -
    \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/modules/pix2pixhd.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/modules/pix2pixhd.py deleted file mode 100644 index 2e4fcfcff083f9ce4d3c7880ff0f74f8f745a251..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/modules/pix2pixhd.py +++ /dev/null @@ -1,669 +0,0 @@ -# original: https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py -import collections -from functools import partial -import functools -import logging -from collections import defaultdict - -import numpy as np -import torch.nn as nn - -from annotator.lama.saicinpainting.training.modules.base import BaseDiscriminator, deconv_factory, get_conv_block_ctor, get_norm_layer, get_activation -from annotator.lama.saicinpainting.training.modules.ffc import FFCResnetBlock -from annotator.lama.saicinpainting.training.modules.multidilated_conv import MultidilatedConv - -class DotDict(defaultdict): - # https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary - """dot.notation access to dictionary attributes""" - __getattr__ = defaultdict.get - __setattr__ = defaultdict.__setitem__ - __delattr__ = defaultdict.__delitem__ - -class Identity(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return x - - -class ResnetBlock(nn.Module): - def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default', - dilation=1, in_dim=None, groups=1, second_dilation=None): - super(ResnetBlock, self).__init__() - self.in_dim = in_dim - self.dim = dim - if second_dilation is None: - second_dilation = dilation - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout, - conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups, - second_dilation=second_dilation) - - if self.in_dim is not None: - self.input_conv = nn.Conv2d(in_dim, dim, 1) - - self.out_channnels = dim - - def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default', - dilation=1, in_dim=None, groups=1, second_dilation=1): - conv_layer = get_conv_block_ctor(conv_kind) - - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(dilation)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(dilation)] - elif padding_type == 'zero': - p = dilation - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - - if in_dim is None: - in_dim = dim - - conv_block += [conv_layer(in_dim, dim, kernel_size=3, padding=p, dilation=dilation), - norm_layer(dim), - activation] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(second_dilation)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(second_dilation)] - elif padding_type == 'zero': - p = second_dilation - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv_block += [conv_layer(dim, dim, kernel_size=3, padding=p, dilation=second_dilation, groups=groups), - norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - x_before = x - if self.in_dim is not None: - x = self.input_conv(x) - out = x + self.conv_block(x_before) - 
return out - -class ResnetBlock5x5(nn.Module): - def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default', - dilation=1, in_dim=None, groups=1, second_dilation=None): - super(ResnetBlock5x5, self).__init__() - self.in_dim = in_dim - self.dim = dim - if second_dilation is None: - second_dilation = dilation - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout, - conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups, - second_dilation=second_dilation) - - if self.in_dim is not None: - self.input_conv = nn.Conv2d(in_dim, dim, 1) - - self.out_channnels = dim - - def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default', - dilation=1, in_dim=None, groups=1, second_dilation=1): - conv_layer = get_conv_block_ctor(conv_kind) - - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(dilation * 2)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(dilation * 2)] - elif padding_type == 'zero': - p = dilation * 2 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - - if in_dim is None: - in_dim = dim - - conv_block += [conv_layer(in_dim, dim, kernel_size=5, padding=p, dilation=dilation), - norm_layer(dim), - activation] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(second_dilation * 2)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(second_dilation * 2)] - elif padding_type == 'zero': - p = second_dilation * 2 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv_block += [conv_layer(dim, dim, kernel_size=5, padding=p, dilation=second_dilation, groups=groups), - norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - x_before = x - if self.in_dim is not None: - x = self.input_conv(x) - out = x + self.conv_block(x_before) - return out - - -class MultidilatedResnetBlock(nn.Module): - def __init__(self, dim, padding_type, conv_layer, norm_layer, activation=nn.ReLU(True), use_dropout=False): - super().__init__() - self.conv_block = self.build_conv_block(dim, padding_type, conv_layer, norm_layer, activation, use_dropout) - - def build_conv_block(self, dim, padding_type, conv_layer, norm_layer, activation, use_dropout, dilation=1): - conv_block = [] - conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type), - norm_layer(dim), - activation] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type), - norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - out = x + self.conv_block(x) - return out - - -class MultiDilatedGlobalGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, - n_blocks=3, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', - deconv_kind='convtranspose', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True), - add_out_act=True, max_features=1024, multidilation_kwargs={}, - ffc_positions=None, ffc_kwargs={}): - assert (n_blocks >= 0) - super().__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - resnet_conv_layer = functools.partial(get_conv_block_ctor('multidilated'), **multidilation_kwargs) - norm_layer = get_norm_layer(norm_layer) 
- if affine is not None: - norm_layer = partial(norm_layer, affine=affine) - up_norm_layer = get_norm_layer(up_norm_layer) - if affine is not None: - up_norm_layer = partial(up_norm_layer, affine=affine) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - identity = Identity() - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - - model += [conv_layer(min(max_features, ngf * mult), - min(max_features, ngf * mult * 2), - kernel_size=3, stride=2, padding=1), - norm_layer(min(max_features, ngf * mult * 2)), - activation] - - mult = 2 ** n_downsampling - feats_num_bottleneck = min(max_features, ngf * mult) - - ### resnet blocks - for i in range(n_blocks): - if ffc_positions is not None and i in ffc_positions: - model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU, - inline=True, **ffc_kwargs)] - model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type, - conv_layer=resnet_conv_layer, activation=activation, - norm_layer=norm_layer)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features) - model += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - -class ConfigGlobalGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, - n_blocks=3, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', - deconv_kind='convtranspose', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True), - add_out_act=True, max_features=1024, - manual_block_spec=[], - resnet_block_kind='multidilatedresnetblock', - resnet_conv_kind='multidilated', - resnet_dilation=1, - multidilation_kwargs={}): - assert (n_blocks >= 0) - super().__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - resnet_conv_layer = functools.partial(get_conv_block_ctor(resnet_conv_kind), **multidilation_kwargs) - norm_layer = get_norm_layer(norm_layer) - if affine is not None: - norm_layer = partial(norm_layer, affine=affine) - up_norm_layer = get_norm_layer(up_norm_layer) - if affine is not None: - up_norm_layer = partial(up_norm_layer, affine=affine) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - identity = Identity() - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [conv_layer(min(max_features, ngf * mult), - min(max_features, ngf * mult * 2), - kernel_size=3, stride=2, padding=1), - norm_layer(min(max_features, ngf * mult * 2)), - activation] - - mult = 2 ** n_downsampling - feats_num_bottleneck = min(max_features, ngf * mult) - - if len(manual_block_spec) == 0: - manual_block_spec = [ - DotDict(lambda : None, { - 'n_blocks': n_blocks, - 'use_default': True}) - ] - - ### resnet blocks - for block_spec in manual_block_spec: - def make_and_add_blocks(model, block_spec): - block_spec = DotDict(lambda : None, block_spec) - if not block_spec.use_default: - resnet_conv_layer = functools.partial(get_conv_block_ctor(block_spec.resnet_conv_kind), **block_spec.multidilation_kwargs) - resnet_conv_kind = block_spec.resnet_conv_kind - resnet_block_kind = 
block_spec.resnet_block_kind - if block_spec.resnet_dilation is not None: - resnet_dilation = block_spec.resnet_dilation - for i in range(block_spec.n_blocks): - if resnet_block_kind == "multidilatedresnetblock": - model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type, - conv_layer=resnet_conv_layer, activation=activation, - norm_layer=norm_layer)] - if resnet_block_kind == "resnetblock": - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=resnet_conv_kind)] - if resnet_block_kind == "resnetblock5x5": - model += [ResnetBlock5x5(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=resnet_conv_kind)] - if resnet_block_kind == "resnetblockdwdil": - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=resnet_conv_kind, dilation=resnet_dilation, second_dilation=resnet_dilation)] - make_and_add_blocks(model, block_spec) - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features) - model += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -def make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs): - blocks = [] - for i in range(dilated_blocks_n): - if dilation_block_kind == 'simple': - blocks.append(ResnetBlock(**dilated_block_kwargs, dilation=2 ** (i + 1))) - elif dilation_block_kind == 'multi': - blocks.append(MultidilatedResnetBlock(**dilated_block_kwargs)) - else: - raise ValueError(f'dilation_block_kind could not be "{dilation_block_kind}"') - return blocks - - -class GlobalGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, affine=None, - up_activation=nn.ReLU(True), dilated_blocks_n=0, dilated_blocks_n_start=0, - dilated_blocks_n_middle=0, - add_out_act=True, - max_features=1024, is_resblock_depthwise=False, - ffc_positions=None, ffc_kwargs={}, dilation=1, second_dilation=None, - dilation_block_kind='simple', multidilation_kwargs={}): - assert (n_blocks >= 0) - super().__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - norm_layer = get_norm_layer(norm_layer) - if affine is not None: - norm_layer = partial(norm_layer, affine=affine) - up_norm_layer = get_norm_layer(up_norm_layer) - if affine is not None: - up_norm_layer = partial(up_norm_layer, affine=affine) - - if ffc_positions is not None: - ffc_positions = collections.Counter(ffc_positions) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - identity = Identity() - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - - model += [conv_layer(min(max_features, ngf * mult), - min(max_features, ngf * mult * 2), - kernel_size=3, stride=2, padding=1), - norm_layer(min(max_features, ngf * mult * 2)), - activation] - - mult = 2 ** n_downsampling - feats_num_bottleneck = min(max_features, ngf * mult) - - dilated_block_kwargs = dict(dim=feats_num_bottleneck, padding_type=padding_type, - 
activation=activation, norm_layer=norm_layer) - if dilation_block_kind == 'simple': - dilated_block_kwargs['conv_kind'] = conv_kind - elif dilation_block_kind == 'multi': - dilated_block_kwargs['conv_layer'] = functools.partial( - get_conv_block_ctor('multidilated'), **multidilation_kwargs) - - # dilated blocks at the start of the bottleneck sausage - if dilated_blocks_n_start is not None and dilated_blocks_n_start > 0: - model += make_dil_blocks(dilated_blocks_n_start, dilation_block_kind, dilated_block_kwargs) - - # resnet blocks - for i in range(n_blocks): - # dilated blocks at the middle of the bottleneck sausage - if i == n_blocks // 2 and dilated_blocks_n_middle is not None and dilated_blocks_n_middle > 0: - model += make_dil_blocks(dilated_blocks_n_middle, dilation_block_kind, dilated_block_kwargs) - - if ffc_positions is not None and i in ffc_positions: - for _ in range(ffc_positions[i]): # same position can occur more than once - model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU, - inline=True, **ffc_kwargs)] - - if is_resblock_depthwise: - resblock_groups = feats_num_bottleneck - else: - resblock_groups = 1 - - model += [ResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation=activation, - norm_layer=norm_layer, conv_kind=conv_kind, groups=resblock_groups, - dilation=dilation, second_dilation=second_dilation)] - - - # dilated blocks at the end of the bottleneck sausage - if dilated_blocks_n is not None and dilated_blocks_n > 0: - model += make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs) - - # upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=2, padding=1, output_padding=1), - up_norm_layer(min(max_features, int(ngf * mult / 2))), - up_activation] - model += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class GlobalGeneratorGated(GlobalGenerator): - def __init__(self, *args, **kwargs): - real_kwargs=dict( - conv_kind='gated_bn_relu', - activation=nn.Identity(), - norm_layer=nn.Identity - ) - real_kwargs.update(kwargs) - super().__init__(*args, **real_kwargs) - - -class GlobalGeneratorFromSuperChannels(nn.Module): - def __init__(self, input_nc, output_nc, n_downsampling, n_blocks, super_channels, norm_layer="bn", padding_type='reflect', add_out_act=True): - super().__init__() - self.n_downsampling = n_downsampling - norm_layer = get_norm_layer(norm_layer) - if type(norm_layer) == functools.partial: - use_bias = (norm_layer.func == nn.InstanceNorm2d) - else: - use_bias = (norm_layer == nn.InstanceNorm2d) - - channels = self.convert_super_channels(super_channels) - self.channels = channels - - model = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, channels[0], kernel_size=7, padding=0, bias=use_bias), - norm_layer(channels[0]), - nn.ReLU(True)] - - for i in range(n_downsampling): # add downsampling layers - mult = 2 ** i - model += [nn.Conv2d(channels[0+i], channels[1+i], kernel_size=3, stride=2, padding=1, bias=use_bias), - norm_layer(channels[1+i]), - nn.ReLU(True)] - - mult = 2 ** n_downsampling - - n_blocks1 = n_blocks // 3 - n_blocks2 = n_blocks1 - n_blocks3 = n_blocks - n_blocks1 - n_blocks2 - - for i in 
range(n_blocks1): - c = n_downsampling - dim = channels[c] - model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer)] - - for i in range(n_blocks2): - c = n_downsampling+1 - dim = channels[c] - kwargs = {} - if i == 0: - kwargs = {"in_dim": channels[c-1]} - model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)] - - for i in range(n_blocks3): - c = n_downsampling+2 - dim = channels[c] - kwargs = {} - if i == 0: - kwargs = {"in_dim": channels[c-1]} - model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)] - - for i in range(n_downsampling): # add upsampling layers - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(channels[n_downsampling+3+i], - channels[n_downsampling+3+i+1], - kernel_size=3, stride=2, - padding=1, output_padding=1, - bias=use_bias), - norm_layer(channels[n_downsampling+3+i+1]), - nn.ReLU(True)] - model += [nn.ReflectionPad2d(3)] - model += [nn.Conv2d(channels[2*n_downsampling+3], output_nc, kernel_size=7, padding=0)] - - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def convert_super_channels(self, super_channels): - n_downsampling = self.n_downsampling - result = [] - cnt = 0 - - if n_downsampling == 2: - N1 = 10 - elif n_downsampling == 3: - N1 = 13 - else: - raise NotImplementedError - - for i in range(0, N1): - if i in [1,4,7,10]: - channel = super_channels[cnt] * (2 ** cnt) - config = {'channel': channel} - result.append(channel) - logging.info(f"Downsample channels {result[-1]}") - cnt += 1 - - for i in range(3): - for counter, j in enumerate(range(N1 + i * 3, N1 + 3 + i * 3)): - if len(super_channels) == 6: - channel = super_channels[3] * 4 - else: - channel = super_channels[i + 3] * 4 - config = {'channel': channel} - if counter == 0: - result.append(channel) - logging.info(f"Bottleneck channels {result[-1]}") - cnt = 2 - - for i in range(N1+9, N1+21): - if i in [22, 25,28]: - cnt -= 1 - if len(super_channels) == 6: - channel = super_channels[5 - cnt] * (2 ** cnt) - else: - channel = super_channels[7 - cnt] * (2 ** cnt) - result.append(int(channel)) - logging.info(f"Upsample channels {result[-1]}") - return result - - def forward(self, input): - return self.model(input) - - -# Defines the PatchGAN discriminator with the specified arguments. 
-class NLayerDiscriminator(BaseDiscriminator): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d,): - super().__init__() - self.n_layers = n_layers - - kw = 4 - padw = int(np.ceil((kw-1.0)/2)) - sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), - nn.LeakyReLU(0.2, True)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [] - cur_model += [ - nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ] - sequence.append(cur_model) - - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [] - cur_model += [ - nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ] - sequence.append(cur_model) - - sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]] - - for n in range(len(sequence)): - setattr(self, 'model'+str(n), nn.Sequential(*sequence[n])) - - def get_all_activations(self, x): - res = [x] - for n in range(self.n_layers + 2): - model = getattr(self, 'model' + str(n)) - res.append(model(res[-1])) - return res[1:] - - def forward(self, x): - act = self.get_all_activations(x) - return act[-1], act[:-1] - - -class MultidilatedNLayerDiscriminator(BaseDiscriminator): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, multidilation_kwargs={}): - super().__init__() - self.n_layers = n_layers - - kw = 4 - padw = int(np.ceil((kw-1.0)/2)) - sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), - nn.LeakyReLU(0.2, True)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [] - cur_model += [ - MultidilatedConv(nf_prev, nf, kernel_size=kw, stride=2, padding=[2, 3], **multidilation_kwargs), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ] - sequence.append(cur_model) - - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [] - cur_model += [ - nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw), - norm_layer(nf), - nn.LeakyReLU(0.2, True) - ] - sequence.append(cur_model) - - sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]] - - for n in range(len(sequence)): - setattr(self, 'model'+str(n), nn.Sequential(*sequence[n])) - - def get_all_activations(self, x): - res = [x] - for n in range(self.n_layers + 2): - model = getattr(self, 'model' + str(n)) - res.append(model(res[-1])) - return res[1:] - - def forward(self, x): - act = self.get_all_activations(x) - return act[-1], act[:-1] - - -class NLayerDiscriminatorAsGen(NLayerDiscriminator): - def forward(self, x): - return super().forward(x)[0] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/__init__.py deleted file mode 100644 index d01b379230ae3a35e229557c98d2917a53d8e581..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -# MLSD Line Detection -# From https://github.com/navervision/mlsd -# Apache-2.0 license - -import cv2 -import numpy as np -import torch -import os - -from einops import rearrange -from .models.mbv2_mlsd_tiny import MobileV2_MLSD_Tiny -from .models.mbv2_mlsd_large import MobileV2_MLSD_Large -from .utils import pred_lines - -from annotator.util import annotator_ckpts_path - - -remote_model_path = 
"https://huggingface.co/lllyasviel/Annotators/resolve/main/mlsd_large_512_fp32.pth" - - -class MLSDdetector: - def __init__(self): - model_path = os.path.join(annotator_ckpts_path, "mlsd_large_512_fp32.pth") - if not os.path.exists(model_path): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - model = MobileV2_MLSD_Large() -# model.load_state_dict(torch.load(model_path), strict=True) -# self.model = model.cuda().eval() - model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu')), strict=True) - self.model = model.cpu().eval() - - def __call__(self, input_image, thr_v, thr_d): - assert input_image.ndim == 3 - img = input_image - img_output = np.zeros_like(img) - try: - with torch.no_grad(): - lines = pred_lines(img, self.model, [img.shape[0], img.shape[1]], thr_v, thr_d) - for line in lines: - x_start, y_start, x_end, y_end = [int(val) for val in line] - cv2.line(img_output, (x_start, y_start), (x_end, y_end), [255, 255, 255], 1) - except Exception as e: - pass - return img_output[:, :, 0] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/collect_env.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/collect_env.py deleted file mode 100644 index bb25d297ee83c70fd244762e1a7fd554c1fa4b69..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/collect_env.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import importlib -import numpy as np -import os -import re -import subprocess -import sys -from collections import defaultdict -import PIL -import torch -import torchvision -from tabulate import tabulate - -__all__ = ["collect_env_info"] - - -def collect_torch_env(): - try: - import torch.__config__ - - return torch.__config__.show() - except ImportError: - # compatible with older versions of pytorch - from torch.utils.collect_env import get_pretty_env_info - - return get_pretty_env_info() - - -def get_env_module(): - var_name = "DETECTRON2_ENV_MODULE" - return var_name, os.environ.get(var_name, "") - - -def detect_compute_compatibility(CUDA_HOME, so_file): - try: - cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump") - if os.path.isfile(cuobjdump): - output = subprocess.check_output( - "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True - ) - output = output.decode("utf-8").strip().split("\n") - arch = [] - for line in output: - line = re.findall(r"\.sm_([0-9]*)\.", line)[0] - arch.append(".".join(line)) - arch = sorted(set(arch)) - return ", ".join(arch) - else: - return so_file + "; cannot find cuobjdump" - except Exception: - # unhandled failure - return so_file - - -def collect_env_info(): - has_gpu = torch.cuda.is_available() # true for both CUDA & ROCM - torch_version = torch.__version__ - - # NOTE that CUDA_HOME/ROCM_HOME could be None even when CUDA runtime libs are functional - from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME - - has_rocm = False - if (getattr(torch.version, "hip", None) is not None) and (ROCM_HOME is not None): - has_rocm = True - has_cuda = has_gpu and (not has_rocm) - - data = [] - data.append(("sys.platform", sys.platform)) # check-template.yml depends on it - data.append(("Python", sys.version.replace("\n", ""))) - data.append(("numpy", np.__version__)) - - try: - import annotator.oneformer.detectron2 # noqa - - 
data.append( - ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__)) - ) - except ImportError: - data.append(("detectron2", "failed to import")) - except AttributeError: - data.append(("detectron2", "imported a wrong installation")) - - try: - import annotator.oneformer.detectron2._C as _C - except ImportError as e: - data.append(("detectron2._C", f"not built correctly: {e}")) - - # print system compilers when extension fails to build - if sys.platform != "win32": # don't know what to do for windows - try: - # this is how torch/utils/cpp_extensions.py choose compiler - cxx = os.environ.get("CXX", "c++") - cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True) - cxx = cxx.decode("utf-8").strip().split("\n")[0] - except subprocess.SubprocessError: - cxx = "Not found" - data.append(("Compiler ($CXX)", cxx)) - - if has_cuda and CUDA_HOME is not None: - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip().split("\n")[-1] - except subprocess.SubprocessError: - nvcc = "Not found" - data.append(("CUDA compiler", nvcc)) - if has_cuda and sys.platform != "win32": - try: - so_file = importlib.util.find_spec("detectron2._C").origin - except (ImportError, AttributeError): - pass - else: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, so_file)) - ) - else: - # print compilers that are used to build extension - data.append(("Compiler", _C.get_compiler_version())) - data.append(("CUDA compiler", _C.get_cuda_version())) # cuda or hip - if has_cuda and getattr(_C, "has_cuda", lambda: True)(): - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) - - data.append(get_env_module()) - data.append(("PyTorch", torch_version + " @" + os.path.dirname(torch.__file__))) - data.append(("PyTorch debug build", torch.version.debug)) - try: - data.append(("torch._C._GLIBCXX_USE_CXX11_ABI", torch._C._GLIBCXX_USE_CXX11_ABI)) - except Exception: - pass - - if not has_gpu: - has_gpu_text = "No: torch.cuda.is_available() == False" - else: - has_gpu_text = "Yes" - data.append(("GPU available", has_gpu_text)) - if has_gpu: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - cap = ".".join((str(x) for x in torch.cuda.get_device_capability(k))) - name = torch.cuda.get_device_name(k) + f" (arch={cap})" - devices[name].append(str(k)) - for name, devids in devices.items(): - data.append(("GPU " + ",".join(devids), name)) - - if has_rocm: - msg = " - invalid!" if not (ROCM_HOME and os.path.isdir(ROCM_HOME)) else "" - data.append(("ROCM_HOME", str(ROCM_HOME) + msg)) - else: - try: - from torch.utils.collect_env import get_nvidia_driver_version, run as _run - - data.append(("Driver version", get_nvidia_driver_version(_run))) - except Exception: - pass - msg = " - invalid!" 
if not (CUDA_HOME and os.path.isdir(CUDA_HOME)) else "" - data.append(("CUDA_HOME", str(CUDA_HOME) + msg)) - - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) - if cuda_arch_list: - data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) - data.append(("Pillow", PIL.__version__)) - - try: - data.append( - ( - "torchvision", - str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__), - ) - ) - if has_cuda: - try: - torchvision_C = importlib.util.find_spec("torchvision._C").origin - msg = detect_compute_compatibility(CUDA_HOME, torchvision_C) - data.append(("torchvision arch flags", msg)) - except (ImportError, AttributeError): - data.append(("torchvision._C", "Not found")) - except AttributeError: - data.append(("torchvision", "unknown")) - - try: - import fvcore - - data.append(("fvcore", fvcore.__version__)) - except (ImportError, AttributeError): - pass - - try: - import iopath - - data.append(("iopath", iopath.__version__)) - except (ImportError, AttributeError): - pass - - try: - import cv2 - - data.append(("cv2", cv2.__version__)) - except (ImportError, AttributeError): - data.append(("cv2", "Not found")) - env_str = tabulate(data) + "\n" - env_str += collect_torch_env() - return env_str - - -def test_nccl_ops(): - num_gpu = torch.cuda.device_count() - if os.access("/tmp", os.W_OK): - import torch.multiprocessing as mp - - dist_url = "file:///tmp/nccl_tmp_file" - print("Testing NCCL connectivity ... this should not hang.") - mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False) - print("NCCL succeeded.") - - -def _test_nccl_worker(rank, num_gpu, dist_url): - import torch.distributed as dist - - dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu) - dist.barrier(device_ids=[rank]) - - -if __name__ == "__main__": - try: - from annotator.oneformer.detectron2.utils.collect_env import collect_env_info as f - - print(f()) - except ImportError: - print(collect_env_info()) - - if torch.cuda.is_available(): - num_gpu = torch.cuda.device_count() - for k in range(num_gpu): - device = f"cuda:{k}" - try: - x = torch.tensor([1, 2.0], dtype=torch.float32) - x = x.to(device) - except Exception as e: - print( - f"Unable to copy tensor to device={device}: {e}. " - "Your CUDA environment is broken." 
- ) - if num_gpu > 1: - test_nccl_ops() diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/register_ade20k_instance.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/register_ade20k_instance.py deleted file mode 100644 index e32d2b0bf5e2a937ac0ecf46b76239d6bc889ab8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/register_ade20k_instance.py +++ /dev/null @@ -1,56 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/datasets/register_ade20k_instance.py -# ------------------------------------------------------------------------------ - -import json -import logging -import numpy as np -import os -from PIL import Image - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.data.datasets.coco import load_coco_json, register_coco_instances -from annotator.oneformer.detectron2.utils.file_io import PathManager - -ADE_CATEGORIES = [{'id': 7, 'name': 'bed'}, {'id': 8, 'name': 'windowpane'}, {'id': 10, 'name': 'cabinet'}, {'id': 12, 'name': 'person'}, {'id': 14, 'name': 'door'}, {'id': 15, 'name': 'table'}, {'id': 18, 'name': 'curtain'}, {'id': 19, 'name': 'chair'}, {'id': 20, 'name': 'car'}, {'id': 22, 'name': 'painting'}, {'id': 23, 'name': 'sofa'}, {'id': 24, 'name': 'shelf'}, {'id': 27, 'name': 'mirror'}, {'id': 30, 'name': 'armchair'}, {'id': 31, 'name': 'seat'}, {'id': 32, 'name': 'fence'}, {'id': 33, 'name': 'desk'}, {'id': 35, 'name': 'wardrobe'}, {'id': 36, 'name': 'lamp'}, {'id': 37, 'name': 'bathtub'}, {'id': 38, 'name': 'railing'}, {'id': 39, 'name': 'cushion'}, {'id': 41, 'name': 'box'}, {'id': 42, 'name': 'column'}, {'id': 43, 'name': 'signboard'}, {'id': 44, 'name': 'chest of drawers'}, {'id': 45, 'name': 'counter'}, {'id': 47, 'name': 'sink'}, {'id': 49, 'name': 'fireplace'}, {'id': 50, 'name': 'refrigerator'}, {'id': 53, 'name': 'stairs'}, {'id': 55, 'name': 'case'}, {'id': 56, 'name': 'pool table'}, {'id': 57, 'name': 'pillow'}, {'id': 58, 'name': 'screen door'}, {'id': 62, 'name': 'bookcase'}, {'id': 64, 'name': 'coffee table'}, {'id': 65, 'name': 'toilet'}, {'id': 66, 'name': 'flower'}, {'id': 67, 'name': 'book'}, {'id': 69, 'name': 'bench'}, {'id': 70, 'name': 'countertop'}, {'id': 71, 'name': 'stove'}, {'id': 72, 'name': 'palm'}, {'id': 73, 'name': 'kitchen island'}, {'id': 74, 'name': 'computer'}, {'id': 75, 'name': 'swivel chair'}, {'id': 76, 'name': 'boat'}, {'id': 78, 'name': 'arcade machine'}, {'id': 80, 'name': 'bus'}, {'id': 81, 'name': 'towel'}, {'id': 82, 'name': 'light'}, {'id': 83, 'name': 'truck'}, {'id': 85, 'name': 'chandelier'}, {'id': 86, 'name': 'awning'}, {'id': 87, 'name': 'streetlight'}, {'id': 88, 'name': 'booth'}, {'id': 89, 'name': 'television receiver'}, {'id': 90, 'name': 'airplane'}, {'id': 92, 'name': 'apparel'}, {'id': 93, 'name': 'pole'}, {'id': 95, 'name': 'bannister'}, {'id': 97, 'name': 'ottoman'}, {'id': 98, 'name': 'bottle'}, {'id': 102, 'name': 'van'}, {'id': 103, 'name': 'ship'}, {'id': 104, 'name': 'fountain'}, {'id': 107, 'name': 'washer'}, {'id': 108, 'name': 'plaything'}, {'id': 110, 'name': 'stool'}, {'id': 111, 'name': 'barrel'}, {'id': 112, 'name': 'basket'}, {'id': 115, 'name': 'bag'}, {'id': 116, 'name': 'minibike'}, {'id': 118, 'name': 
'oven'}, {'id': 119, 'name': 'ball'}, {'id': 120, 'name': 'food'}, {'id': 121, 'name': 'step'}, {'id': 123, 'name': 'trade name'}, {'id': 124, 'name': 'microwave'}, {'id': 125, 'name': 'pot'}, {'id': 126, 'name': 'animal'}, {'id': 127, 'name': 'bicycle'}, {'id': 129, 'name': 'dishwasher'}, {'id': 130, 'name': 'screen'}, {'id': 132, 'name': 'sculpture'}, {'id': 133, 'name': 'hood'}, {'id': 134, 'name': 'sconce'}, {'id': 135, 'name': 'vase'}, {'id': 136, 'name': 'traffic light'}, {'id': 137, 'name': 'tray'}, {'id': 138, 'name': 'ashcan'}, {'id': 139, 'name': 'fan'}, {'id': 142, 'name': 'plate'}, {'id': 143, 'name': 'monitor'}, {'id': 144, 'name': 'bulletin board'}, {'id': 146, 'name': 'radiator'}, {'id': 147, 'name': 'glass'}, {'id': 148, 'name': 'clock'}, {'id': 149, 'name': 'flag'}] - - -_PREDEFINED_SPLITS = { - # point annotations without masks - "ade20k_instance_train": ( - "ADEChallengeData2016/images/training", - "ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_instance_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def _get_ade_instances_meta(): - thing_ids = [k["id"] for k in ADE_CATEGORIES] - assert len(thing_ids) == 100, len(thing_ids) - # Mapping from the incontiguous ADE category id to an id in [0, 99] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in ADE_CATEGORIES] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - } - return ret - - -def register_all_ade20k_instance(root): - for key, (image_root, json_file) in _PREDEFINED_SPLITS.items(): - # Assume pre-defined datasets live in `./datasets`. - register_coco_instances( - key, - _get_ade_instances_meta(), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_instance(_root) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/checkpoint.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/checkpoint.py deleted file mode 100644 index b29ca320679164432f446adad893e33fb2b4b29e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/checkpoint.py +++ /dev/null @@ -1,707 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import re -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo - -import annotator.uniformer.mmcv as mmcv -from ..fileio import FileClient -from ..fileio import load as load_file -from ..parallel import is_module_wrapper -from ..utils import mkdir_or_exist -from .dist_utils import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. 
- """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -class CheckpointLoader: - """A general checkpoint loader to manage all schemes.""" - - _schemes = {} - - @classmethod - def _register_scheme(cls, prefixes, loader, force=False): - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if (prefix not in cls._schemes) or force: - cls._schemes[prefix] = loader - else: - raise KeyError( - f'{prefix} is already registered as a loader backend, ' - 'add "force=True" if you want to 
override it') - # sort, longer prefixes take priority - cls._schemes = OrderedDict( - sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True)) - - @classmethod - def register_scheme(cls, prefixes, loader=None, force=False): - """Register a loader to CheckpointLoader. - - This method can be used as a normal class method or a decorator. - - Args: - prefixes (str or list[str] or tuple[str]): - The prefix of the registered loader. - loader (function, optional): The loader function to be registered. - When this method is used as a decorator, loader is None. - Defaults to None. - force (bool, optional): Whether to override the loader - if the prefix has already been registered. Defaults to False. - """ - - if loader is not None: - cls._register_scheme(prefixes, loader, force=force) - return - - def _register(loader_cls): - cls._register_scheme(prefixes, loader_cls, force=force) - return loader_cls - - return _register - - @classmethod - def _get_checkpoint_loader(cls, path): - """Finds a loader that supports the given path. Falls back to the local - loader if no other loader is found. - - Args: - path (str): checkpoint path - - Returns: - loader (function): checkpoint loader - """ - - for p in cls._schemes: - if path.startswith(p): - return cls._schemes[p] - - @classmethod - def load_checkpoint(cls, filename, map_location=None, logger=None): - """load checkpoint through URL scheme path. - - Args: - filename (str): checkpoint file name with given prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - logger (:mod:`logging.Logger`, optional): The logger for message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - checkpoint_loader = cls._get_checkpoint_loader(filename) - class_name = checkpoint_loader.__name__ - mmcv.print_log( - f'load checkpoint from {class_name[10:]} path: {filename}', logger) - return checkpoint_loader(filename, map_location) - - -@CheckpointLoader.register_scheme(prefixes='') -def load_from_local(filename, map_location): - """load checkpoint by local file path. - - Args: - filename (str): local checkpoint file path - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('http://', 'https://')) -def load_from_http(filename, map_location=None, model_dir=None): - """load checkpoint through HTTP or HTTPS scheme path. In distributed - setting, this function only download checkpoint at local rank 0. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - model_dir (string, optional): directory in which to save the object, - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url( - filename, model_dir=model_dir, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='pavi://') -def load_from_pavi(filename, map_location=None): - """load checkpoint through the file path prefixed with pavi. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with pavi prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - assert filename.startswith('pavi://'), \ - f'Expected filename startswith `pavi://`, but get {filename}' - model_path = filename[7:] - - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='s3://') -def load_from_ceph(filename, map_location=None, backend='petrel'): - """load checkpoint through the file path prefixed with s3. In distributed - setting, this function download ckpt at all ranks to different temporary - directories. - - Args: - filename (str): checkpoint file path with s3 prefix - map_location (str, optional): Same as :func:`torch.load`. - backend (str, optional): The storage backend type. Options are 'ceph', - 'petrel'. Default: 'petrel'. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - allowed_backends = ['ceph', 'petrel'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - - if backend == 'ceph': - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - - # CephClient and PetrelBackend have the same prefix 's3://' and the latter - # will be chosen as default. If PetrelBackend can not be instantiated - # successfully, the CephClient will be chosen. - try: - file_client = FileClient(backend=backend) - except ImportError: - allowed_backends.remove(backend) - file_client = FileClient(backend=allowed_backends[0]) - - with io.BytesIO(file_client.get(filename)) as buffer: - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://')) -def load_from_torchvision(filename, map_location=None): - """load checkpoint through the file path prefixed with modelzoo or - torchvision. - - Args: - filename (str): checkpoint file path with modelzoo or - torchvision prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - model_urls = get_torchvision_models() - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_name = filename[11:] - else: - model_name = filename[14:] - return load_from_http(model_urls[model_name], map_location=map_location) - - -@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://')) -def load_from_openmmlab(filename, map_location=None): - """load checkpoint through the file path prefixed with open-mmlab or - openmmlab. - - Args: - filename (str): checkpoint file path with open-mmlab or - openmmlab prefix - map_location (str, optional): Same as :func:`torch.load`. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_external_models() - prefix_str = 'open-mmlab://' - if filename.startswith(prefix_str): - model_name = filename[13:] - else: - model_name = filename[12:] - prefix_str = 'openmmlab://' - - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'{prefix_str}{model_name} is deprecated in favor ' - f'of {prefix_str}{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_from_http(model_url, map_location=map_location) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -@CheckpointLoader.register_scheme(prefixes='mmcls://') -def load_from_mmcls(filename, map_location=None): - """load checkpoint through the file path prefixed with mmcls. - - Args: - filename (str): checkpoint file path with mmcls prefix - map_location (str, optional): Same as :func:`torch.load`. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_from_http( - model_urls[model_name], map_location=map_location) - checkpoint = _process_mmcls_checkpoint(checkpoint) - return checkpoint - - -def _load_checkpoint(filename, map_location=None, logger=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str, optional): Same as :func:`torch.load`. - Default: None. - logger (:mod:`logging.Logger`, optional): The logger for error message. - Default: None - - Returns: - dict or OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - return CheckpointLoader.load_checkpoint(filename, map_location, logger) - - -def _load_checkpoint_with_prefix(prefix, filename, map_location=None): - """Load partial pretrained model with specific prefix. - - Args: - prefix (str): The prefix of sub-module. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - - checkpoint = _load_checkpoint(filename, map_location=map_location) - - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if not prefix.endswith('.'): - prefix += '.' - prefix_len = len(prefix) - - state_dict = { - k[prefix_len:]: v - for k, v in state_dict.items() if k.startswith(prefix) - } - - assert state_dict, f'{prefix} is not in the pretrained model' - return state_dict - - -def load_checkpoint(model, - filename, - map_location=None, - strict=False, - logger=None, - revise_keys=[(r'^module\.', '')]): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - revise_keys (list): A list of customized keywords to modify the - state_dict in checkpoint. Each item is a (pattern, replacement) - pair of the regular expression operations. Default: strip - the prefix 'module.' by [(r'^module\\.', '')]. - - Returns: - dict or OrderedDict: The loaded checkpoint. - """ - checkpoint = _load_checkpoint(filename, map_location, logger) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - # strip prefix of state_dict - metadata = getattr(state_dict, '_metadata', OrderedDict()) - for p, r in revise_keys: - state_dict = OrderedDict( - {re.sub(p, r, k): v - for k, v in state_dict.items()}) - # Keep metadata in state_dict - state_dict._metadata = metadata - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - # Keep metadata in state_dict - state_dict_cpu._metadata = getattr(state_dict, '_metadata', OrderedDict()) - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, - filename, - optimizer=None, - meta=None, - file_client_args=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- `New in version 1.3.16.` - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - if file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" if filename starts with' - f'"pavi://", but got {file_client_args}') - try: - from pavi import modelcloud - from pavi import exception - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except exception.NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - file_client = FileClient.infer_client(file_client_args, filename) - with io.BytesIO() as f: - torch.save(checkpoint, f) - file_client.put(f.getvalue(), filename) diff --git a/spaces/czwQAQ/extras/Dockerfile b/spaces/czwQAQ/extras/Dockerfile deleted file mode 100644 index f45cdfda0fab5fe7680df646ea7caf47d45e4352..0000000000000000000000000000000000000000 --- a/spaces/czwQAQ/extras/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM python:3.11 - -WORKDIR /app - -COPY requirements-complete.txt . -RUN pip install -r requirements-complete.txt - -RUN mkdir /.cache && chmod -R 777 /.cache -RUN mkdir .chroma && chmod -R 777 .chroma - -COPY . . 
- - -RUN chmod -R 777 /app - -RUN --mount=type=secret,id=password,mode=0444,required=true \ - cat /run/secrets/password > /test - -EXPOSE 7860 - -CMD ["python", "server.py", "--cpu", "--enable-modules=caption,summarize,classify,silero-tts,edge-tts,chromadb"] diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/extract_kp_videos_safe.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/extract_kp_videos_safe.py deleted file mode 100644 index 5c9cff8759936d5dced6c5cfc66fe874fa1583f8..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/extract_kp_videos_safe.py +++ /dev/null @@ -1,138 +0,0 @@ -import os -import cv2 -import time -import glob -import argparse -import numpy as np -from PIL import Image -import torch -from tqdm import tqdm -from itertools import cycle -from facexlib.alignment import init_alignment_model, landmark_98_to_68 -from facexlib.detection import init_detection_model -from torch.multiprocessing import Pool, Process, set_start_method - - -class KeypointExtractor(): - def __init__(self, device='cuda'): - - ### gfpgan/weights - try: - import webui # in webui - root_path = 'extensions/SadTalker/gfpgan/weights' - - except: - root_path = 'gfpgan/weights' - - self.detector = init_alignment_model('awing_fan',device=device, model_rootpath=root_path) - self.det_net = init_detection_model('retinaface_resnet50', half=False,device=device, model_rootpath=root_path) - - def extract_keypoint(self, images, name=None, info=True): - if isinstance(images, list): - keypoints = [] - if info: - i_range = tqdm(images,desc='landmark Det:') - else: - i_range = images - - for image in i_range: - current_kp = self.extract_keypoint(image) - # current_kp = self.detector.get_landmarks(np.array(image)) - if np.mean(current_kp) == -1 and keypoints: - keypoints.append(keypoints[-1]) - else: - keypoints.append(current_kp[None]) - - keypoints = np.concatenate(keypoints, 0) - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - else: - while True: - try: - with torch.no_grad(): - # face detection -> face alignment. - img = np.array(images) - bboxes = self.det_net.detect_faces(images, 0.97) - - bboxes = bboxes[0] - - # bboxes[0] -= 100 - # bboxes[1] -= 100 - # bboxes[2] += 100 - # bboxes[3] += 100 - img = img[int(bboxes[1]):int(bboxes[3]), int(bboxes[0]):int(bboxes[2]), :] - - keypoints = landmark_98_to_68(self.detector.get_landmarks(img)) # [0] - - #### keypoints to the original location - keypoints[:,0] += int(bboxes[0]) - keypoints[:,1] += int(bboxes[1]) - - break - except RuntimeError as e: - if str(e).startswith('CUDA'): - print("Warning: out of memory, sleep for 1s") - time.sleep(1) - else: - print(e) - break - except TypeError: - print('No face detected in this image') - shape = [68, 2] - keypoints = -1. 
* np.ones(shape) - break - if name is not None: - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - -def read_video(filename): - frames = [] - cap = cv2.VideoCapture(filename) - while cap.isOpened(): - ret, frame = cap.read() - if ret: - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = Image.fromarray(frame) - frames.append(frame) - else: - break - cap.release() - return frames - -def run(data): - filename, opt, device = data - os.environ['CUDA_VISIBLE_DEVICES'] = device - kp_extractor = KeypointExtractor() - images = read_video(filename) - name = filename.split('/')[-2:] - os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True) - kp_extractor.extract_keypoint( - images, - name=os.path.join(opt.output_dir, name[-2], name[-1]) - ) - -if __name__ == '__main__': - set_start_method('spawn') - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('--input_dir', type=str, help='the folder of the input files') - parser.add_argument('--output_dir', type=str, help='the folder of the output files') - parser.add_argument('--device_ids', type=str, default='0,1') - parser.add_argument('--workers', type=int, default=4) - - opt = parser.parse_args() - filenames = list() - VIDEO_EXTENSIONS_LOWERCASE = {'mp4'} - VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE}) - extensions = VIDEO_EXTENSIONS - - for ext in extensions: - os.listdir(f'{opt.input_dir}') - print(f'{opt.input_dir}/*.{ext}') - filenames = sorted(glob.glob(f'{opt.input_dir}/*.{ext}')) - print('Total number of videos:', len(filenames)) - pool = Pool(opt.workers) - args_list = cycle([opt]) - device_ids = opt.device_ids.split(",") - device_ids = cycle(device_ids) - for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))): - None diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/3millions.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/3millions.py deleted file mode 100644 index c9edc2f1414e35f93abfd3dfe11a61f1f406580e..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/3millions.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 300 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/typedefs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/typedefs.py deleted file mode 100644 index 84283d9a4634a4836cd50cabe34efd2ae5915f56..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/typedefs.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -import os -import sys -from typing import ( - TYPE_CHECKING, - Any, - Awaitable, - Callable, - Iterable, - Mapping, - Tuple, - Union, -) - -from multidict 
import CIMultiDict, CIMultiDictProxy, MultiDict, MultiDictProxy, istr -from yarl import URL - -# These are for other modules to use (to avoid repeating the conditional import). -if sys.version_info >= (3, 8): - from typing import Final as Final, Protocol as Protocol, TypedDict as TypedDict -else: - from typing_extensions import ( # noqa: F401 - Final, - Protocol as Protocol, - TypedDict as TypedDict, - ) - -DEFAULT_JSON_ENCODER = json.dumps -DEFAULT_JSON_DECODER = json.loads - -if TYPE_CHECKING: # pragma: no cover - _CIMultiDict = CIMultiDict[str] - _CIMultiDictProxy = CIMultiDictProxy[str] - _MultiDict = MultiDict[str] - _MultiDictProxy = MultiDictProxy[str] - from http.cookies import BaseCookie, Morsel - - from .web import Request, StreamResponse -else: - _CIMultiDict = CIMultiDict - _CIMultiDictProxy = CIMultiDictProxy - _MultiDict = MultiDict - _MultiDictProxy = MultiDictProxy - -Byteish = Union[bytes, bytearray, memoryview] -JSONEncoder = Callable[[Any], str] -JSONDecoder = Callable[[str], Any] -LooseHeaders = Union[Mapping[Union[str, istr], str], _CIMultiDict, _CIMultiDictProxy] -RawHeaders = Tuple[Tuple[bytes, bytes], ...] -StrOrURL = Union[str, URL] - -LooseCookiesMappings = Mapping[str, Union[str, "BaseCookie[str]", "Morsel[Any]"]] -LooseCookiesIterables = Iterable[ - Tuple[str, Union[str, "BaseCookie[str]", "Morsel[Any]"]] -] -LooseCookies = Union[ - LooseCookiesMappings, - LooseCookiesIterables, - "BaseCookie[str]", -] - -Handler = Callable[["Request"], Awaitable["StreamResponse"]] - -PathLike = Union[str, "os.PathLike[str]"] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css deleted file mode 100644 index a528c508c9856f09311ecdc208c5d65121782769..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4ccfb72c.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-1sc8eck{display:flex;flex-direction:column;flex-flow:column;margin:0;padding:0;height:100%}.codemirror-wrapper.svelte-1sc8eck{height:100%;overflow:auto}.cm-editor{height:100%}.cm-selectionBackground{background-color:#b9d2ff30!important}.cm-focused{outline:none!important}button.svelte-qi7jcw{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.check.svelte-qi7jcw{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}a.svelte-14d303a{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.copied.svelte-14d303a{color:var(--color-green-500)}.check.svelte-14d303a{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}div.svelte-1yin446{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;z-index:var(--layer-2);transition:.15s;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)} diff --git 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/lexer.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/lexer.py deleted file mode 100644 index aff7e9f993792e1ced39c93fc0d39dcb5bdd5fde..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/lexer.py +++ /dev/null @@ -1,866 +0,0 @@ -"""Implements a Jinja / Python combination lexer. The ``Lexer`` class -is used to do some preprocessing. It filters out invalid operators like -the bitshift operators we don't allow in templates. It separates -template code and python code in expressions. -""" -import re -import typing as t -from ast import literal_eval -from collections import deque -from sys import intern - -from ._identifier import pattern as name_re -from .exceptions import TemplateSyntaxError -from .utils import LRUCache - -if t.TYPE_CHECKING: - import typing_extensions as te - from .environment import Environment - -# cache for the lexers. Exists in order to be able to have multiple -# environments with the same lexer -_lexer_cache: t.MutableMapping[t.Tuple, "Lexer"] = LRUCache(50) # type: ignore - -# static regular expressions -whitespace_re = re.compile(r"\s+") -newline_re = re.compile(r"(\r\n|\r|\n)") -string_re = re.compile( - r"('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S -) -integer_re = re.compile( - r""" - ( - 0b(_?[0-1])+ # binary - | - 0o(_?[0-7])+ # octal - | - 0x(_?[\da-f])+ # hex - | - [1-9](_?\d)* # decimal - | - 0(_?0)* # decimal zero - ) - """, - re.IGNORECASE | re.VERBOSE, -) -float_re = re.compile( - r""" - (?": TOKEN_GT, - ">=": TOKEN_GTEQ, - "<": TOKEN_LT, - "<=": TOKEN_LTEQ, - "=": TOKEN_ASSIGN, - ".": TOKEN_DOT, - ":": TOKEN_COLON, - "|": TOKEN_PIPE, - ",": TOKEN_COMMA, - ";": TOKEN_SEMICOLON, -} - -reverse_operators = {v: k for k, v in operators.items()} -assert len(operators) == len(reverse_operators), "operators dropped" -operator_re = re.compile( - f"({'|'.join(re.escape(x) for x in sorted(operators, key=lambda x: -len(x)))})" -) - -ignored_tokens = frozenset( - [ - TOKEN_COMMENT_BEGIN, - TOKEN_COMMENT, - TOKEN_COMMENT_END, - TOKEN_WHITESPACE, - TOKEN_LINECOMMENT_BEGIN, - TOKEN_LINECOMMENT_END, - TOKEN_LINECOMMENT, - ] -) -ignore_if_empty = frozenset( - [TOKEN_WHITESPACE, TOKEN_DATA, TOKEN_COMMENT, TOKEN_LINECOMMENT] -) - - -def _describe_token_type(token_type: str) -> str: - if token_type in reverse_operators: - return reverse_operators[token_type] - - return { - TOKEN_COMMENT_BEGIN: "begin of comment", - TOKEN_COMMENT_END: "end of comment", - TOKEN_COMMENT: "comment", - TOKEN_LINECOMMENT: "comment", - TOKEN_BLOCK_BEGIN: "begin of statement block", - TOKEN_BLOCK_END: "end of statement block", - TOKEN_VARIABLE_BEGIN: "begin of print statement", - TOKEN_VARIABLE_END: "end of print statement", - TOKEN_LINESTATEMENT_BEGIN: "begin of line statement", - TOKEN_LINESTATEMENT_END: "end of line statement", - TOKEN_DATA: "template data / text", - TOKEN_EOF: "end of template", - }.get(token_type, token_type) - - -def describe_token(token: "Token") -> str: - """Returns a description of the token.""" - if token.type == TOKEN_NAME: - return token.value - - return _describe_token_type(token.type) - - -def describe_token_expr(expr: str) -> str: - """Like `describe_token` but for token expressions.""" - if ":" in expr: - type, value = expr.split(":", 1) - - if type == TOKEN_NAME: - return value - else: - type = expr - - return _describe_token_type(type) - - -def 
count_newlines(value: str) -> int: - """Count the number of newline characters in the string. This is - useful for extensions that filter a stream. - """ - return len(newline_re.findall(value)) - - -def compile_rules(environment: "Environment") -> t.List[t.Tuple[str, str]]: - """Compiles all the rules from the environment into a list of rules.""" - e = re.escape - rules = [ - ( - len(environment.comment_start_string), - TOKEN_COMMENT_BEGIN, - e(environment.comment_start_string), - ), - ( - len(environment.block_start_string), - TOKEN_BLOCK_BEGIN, - e(environment.block_start_string), - ), - ( - len(environment.variable_start_string), - TOKEN_VARIABLE_BEGIN, - e(environment.variable_start_string), - ), - ] - - if environment.line_statement_prefix is not None: - rules.append( - ( - len(environment.line_statement_prefix), - TOKEN_LINESTATEMENT_BEGIN, - r"^[ \t\v]*" + e(environment.line_statement_prefix), - ) - ) - if environment.line_comment_prefix is not None: - rules.append( - ( - len(environment.line_comment_prefix), - TOKEN_LINECOMMENT_BEGIN, - r"(?:^|(?<=\S))[^\S\r\n]*" + e(environment.line_comment_prefix), - ) - ) - - return [x[1:] for x in sorted(rules, reverse=True)] - - -class Failure: - """Class that raises a `TemplateSyntaxError` if called. - Used by the `Lexer` to specify known errors. - """ - - def __init__( - self, message: str, cls: t.Type[TemplateSyntaxError] = TemplateSyntaxError - ) -> None: - self.message = message - self.error_class = cls - - def __call__(self, lineno: int, filename: str) -> "te.NoReturn": - raise self.error_class(self.message, lineno, filename) - - -class Token(t.NamedTuple): - lineno: int - type: str - value: str - - def __str__(self) -> str: - return describe_token(self) - - def test(self, expr: str) -> bool: - """Test a token against a token expression. This can either be a - token type or ``'token_type:token_value'``. This can only test - against string values and types. - """ - # here we do a regular string equality check as test_any is usually - # passed an iterable of not interned strings. - if self.type == expr: - return True - - if ":" in expr: - return expr.split(":", 1) == [self.type, self.value] - - return False - - def test_any(self, *iterable: str) -> bool: - """Test against multiple token expressions.""" - return any(self.test(expr) for expr in iterable) - - -class TokenStreamIterator: - """The iterator for tokenstreams. Iterate over the stream - until the eof token is reached. - """ - - def __init__(self, stream: "TokenStream") -> None: - self.stream = stream - - def __iter__(self) -> "TokenStreamIterator": - return self - - def __next__(self) -> Token: - token = self.stream.current - - if token.type is TOKEN_EOF: - self.stream.close() - raise StopIteration - - next(self.stream) - return token - - -class TokenStream: - """A token stream is an iterable that yields :class:`Token`\\s. The - parser however does not iterate over it but calls :meth:`next` to go - one token ahead. The current active token is stored as :attr:`current`. 
- """ - - def __init__( - self, - generator: t.Iterable[Token], - name: t.Optional[str], - filename: t.Optional[str], - ): - self._iter = iter(generator) - self._pushed: "te.Deque[Token]" = deque() - self.name = name - self.filename = filename - self.closed = False - self.current = Token(1, TOKEN_INITIAL, "") - next(self) - - def __iter__(self) -> TokenStreamIterator: - return TokenStreamIterator(self) - - def __bool__(self) -> bool: - return bool(self._pushed) or self.current.type is not TOKEN_EOF - - @property - def eos(self) -> bool: - """Are we at the end of the stream?""" - return not self - - def push(self, token: Token) -> None: - """Push a token back to the stream.""" - self._pushed.append(token) - - def look(self) -> Token: - """Look at the next token.""" - old_token = next(self) - result = self.current - self.push(result) - self.current = old_token - return result - - def skip(self, n: int = 1) -> None: - """Got n tokens ahead.""" - for _ in range(n): - next(self) - - def next_if(self, expr: str) -> t.Optional[Token]: - """Perform the token test and return the token if it matched. - Otherwise the return value is `None`. - """ - if self.current.test(expr): - return next(self) - - return None - - def skip_if(self, expr: str) -> bool: - """Like :meth:`next_if` but only returns `True` or `False`.""" - return self.next_if(expr) is not None - - def __next__(self) -> Token: - """Go one token ahead and return the old one. - - Use the built-in :func:`next` instead of calling this directly. - """ - rv = self.current - - if self._pushed: - self.current = self._pushed.popleft() - elif self.current.type is not TOKEN_EOF: - try: - self.current = next(self._iter) - except StopIteration: - self.close() - - return rv - - def close(self) -> None: - """Close the stream.""" - self.current = Token(self.current.lineno, TOKEN_EOF, "") - self._iter = iter(()) - self.closed = True - - def expect(self, expr: str) -> Token: - """Expect a given token type and return it. This accepts the same - argument as :meth:`jinja2.lexer.Token.test`. - """ - if not self.current.test(expr): - expr = describe_token_expr(expr) - - if self.current.type is TOKEN_EOF: - raise TemplateSyntaxError( - f"unexpected end of template, expected {expr!r}.", - self.current.lineno, - self.name, - self.filename, - ) - - raise TemplateSyntaxError( - f"expected token {expr!r}, got {describe_token(self.current)!r}", - self.current.lineno, - self.name, - self.filename, - ) - - return next(self) - - -def get_lexer(environment: "Environment") -> "Lexer": - """Return a lexer which is probably cached.""" - key = ( - environment.block_start_string, - environment.block_end_string, - environment.variable_start_string, - environment.variable_end_string, - environment.comment_start_string, - environment.comment_end_string, - environment.line_statement_prefix, - environment.line_comment_prefix, - environment.trim_blocks, - environment.lstrip_blocks, - environment.newline_sequence, - environment.keep_trailing_newline, - ) - lexer = _lexer_cache.get(key) - - if lexer is None: - _lexer_cache[key] = lexer = Lexer(environment) - - return lexer - - -class OptionalLStrip(tuple): - """A special tuple for marking a point in the state that can have - lstrip applied. - """ - - __slots__ = () - - # Even though it looks like a no-op, creating instances fails - # without this. 
- def __new__(cls, *members, **kwargs): # type: ignore - return super().__new__(cls, members) - - -class _Rule(t.NamedTuple): - pattern: t.Pattern[str] - tokens: t.Union[str, t.Tuple[str, ...], t.Tuple[Failure]] - command: t.Optional[str] - - -class Lexer: - """Class that implements a lexer for a given environment. Automatically - created by the environment class, usually you don't have to do that. - - Note that the lexer is not automatically bound to an environment. - Multiple environments can share the same lexer. - """ - - def __init__(self, environment: "Environment") -> None: - # shortcuts - e = re.escape - - def c(x: str) -> t.Pattern[str]: - return re.compile(x, re.M | re.S) - - # lexing rules for tags - tag_rules: t.List[_Rule] = [ - _Rule(whitespace_re, TOKEN_WHITESPACE, None), - _Rule(float_re, TOKEN_FLOAT, None), - _Rule(integer_re, TOKEN_INTEGER, None), - _Rule(name_re, TOKEN_NAME, None), - _Rule(string_re, TOKEN_STRING, None), - _Rule(operator_re, TOKEN_OPERATOR, None), - ] - - # assemble the root lexing rule. because "|" is ungreedy - # we have to sort by length so that the lexer continues working - # as expected when we have parsing rules like <% for block and - # <%= for variables. (if someone wants asp like syntax) - # variables are just part of the rules if variable processing - # is required. - root_tag_rules = compile_rules(environment) - - block_start_re = e(environment.block_start_string) - block_end_re = e(environment.block_end_string) - comment_end_re = e(environment.comment_end_string) - variable_end_re = e(environment.variable_end_string) - - # block suffix if trimming is enabled - block_suffix_re = "\\n?" if environment.trim_blocks else "" - - self.lstrip_blocks = environment.lstrip_blocks - - self.newline_sequence = environment.newline_sequence - self.keep_trailing_newline = environment.keep_trailing_newline - - root_raw_re = ( - rf"(?P{block_start_re}(\-|\+|)\s*raw\s*" - rf"(?:\-{block_end_re}\s*|{block_end_re}))" - ) - root_parts_re = "|".join( - [root_raw_re] + [rf"(?P<{n}>{r}(\-|\+|))" for n, r in root_tag_rules] - ) - - # global lexing rules - self.rules: t.Dict[str, t.List[_Rule]] = { - "root": [ - # directives - _Rule( - c(rf"(.*?)(?:{root_parts_re})"), - OptionalLStrip(TOKEN_DATA, "#bygroup"), # type: ignore - "#bygroup", - ), - # data - _Rule(c(".+"), TOKEN_DATA, None), - ], - # comments - TOKEN_COMMENT_BEGIN: [ - _Rule( - c( - rf"(.*?)((?:\+{comment_end_re}|\-{comment_end_re}\s*" - rf"|{comment_end_re}{block_suffix_re}))" - ), - (TOKEN_COMMENT, TOKEN_COMMENT_END), - "#pop", - ), - _Rule(c(r"(.)"), (Failure("Missing end of comment tag"),), None), - ], - # blocks - TOKEN_BLOCK_BEGIN: [ - _Rule( - c( - rf"(?:\+{block_end_re}|\-{block_end_re}\s*" - rf"|{block_end_re}{block_suffix_re})" - ), - TOKEN_BLOCK_END, - "#pop", - ), - ] - + tag_rules, - # variables - TOKEN_VARIABLE_BEGIN: [ - _Rule( - c(rf"\-{variable_end_re}\s*|{variable_end_re}"), - TOKEN_VARIABLE_END, - "#pop", - ) - ] - + tag_rules, - # raw block - TOKEN_RAW_BEGIN: [ - _Rule( - c( - rf"(.*?)((?:{block_start_re}(\-|\+|))\s*endraw\s*" - rf"(?:\+{block_end_re}|\-{block_end_re}\s*" - rf"|{block_end_re}{block_suffix_re}))" - ), - OptionalLStrip(TOKEN_DATA, TOKEN_RAW_END), # type: ignore - "#pop", - ), - _Rule(c(r"(.)"), (Failure("Missing end of raw directive"),), None), - ], - # line statements - TOKEN_LINESTATEMENT_BEGIN: [ - _Rule(c(r"\s*(\n|$)"), TOKEN_LINESTATEMENT_END, "#pop") - ] - + tag_rules, - # line comments - TOKEN_LINECOMMENT_BEGIN: [ - _Rule( - c(r"(.*?)()(?=\n|$)"), - 
(TOKEN_LINECOMMENT, TOKEN_LINECOMMENT_END), - "#pop", - ) - ], - } - - def _normalize_newlines(self, value: str) -> str: - """Replace all newlines with the configured sequence in strings - and template data. - """ - return newline_re.sub(self.newline_sequence, value) - - def tokenize( - self, - source: str, - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - state: t.Optional[str] = None, - ) -> TokenStream: - """Calls tokeniter + tokenize and wraps it in a token stream.""" - stream = self.tokeniter(source, name, filename, state) - return TokenStream(self.wrap(stream, name, filename), name, filename) - - def wrap( - self, - stream: t.Iterable[t.Tuple[int, str, str]], - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - ) -> t.Iterator[Token]: - """This is called with the stream as returned by `tokenize` and wraps - every token in a :class:`Token` and converts the value. - """ - for lineno, token, value_str in stream: - if token in ignored_tokens: - continue - - value: t.Any = value_str - - if token == TOKEN_LINESTATEMENT_BEGIN: - token = TOKEN_BLOCK_BEGIN - elif token == TOKEN_LINESTATEMENT_END: - token = TOKEN_BLOCK_END - # we are not interested in those tokens in the parser - elif token in (TOKEN_RAW_BEGIN, TOKEN_RAW_END): - continue - elif token == TOKEN_DATA: - value = self._normalize_newlines(value_str) - elif token == "keyword": - token = value_str - elif token == TOKEN_NAME: - value = value_str - - if not value.isidentifier(): - raise TemplateSyntaxError( - "Invalid character in identifier", lineno, name, filename - ) - elif token == TOKEN_STRING: - # try to unescape string - try: - value = ( - self._normalize_newlines(value_str[1:-1]) - .encode("ascii", "backslashreplace") - .decode("unicode-escape") - ) - except Exception as e: - msg = str(e).split(":")[-1].strip() - raise TemplateSyntaxError(msg, lineno, name, filename) from e - elif token == TOKEN_INTEGER: - value = int(value_str.replace("_", ""), 0) - elif token == TOKEN_FLOAT: - # remove all "_" first to support more Python versions - value = literal_eval(value_str.replace("_", "")) - elif token == TOKEN_OPERATOR: - token = operators[value_str] - - yield Token(lineno, token, value) - - def tokeniter( - self, - source: str, - name: t.Optional[str], - filename: t.Optional[str] = None, - state: t.Optional[str] = None, - ) -> t.Iterator[t.Tuple[int, str, str]]: - """This method tokenizes the text and returns the tokens in a - generator. Use this method if you just want to tokenize a template. - - .. versionchanged:: 3.0 - Only ``\\n``, ``\\r\\n`` and ``\\r`` are treated as line - breaks. - """ - lines = newline_re.split(source)[::2] - - if not self.keep_trailing_newline and lines[-1] == "": - del lines[-1] - - source = "\n".join(lines) - pos = 0 - lineno = 1 - stack = ["root"] - - if state is not None and state != "root": - assert state in ("variable", "block"), "invalid state" - stack.append(state + "_begin") - - statetokens = self.rules[stack[-1]] - source_length = len(source) - balancing_stack: t.List[str] = [] - newlines_stripped = 0 - line_starting = True - - while True: - # tokenizer loop - for regex, tokens, new_state in statetokens: - m = regex.match(source, pos) - - # if no match we try again with the next rule - if m is None: - continue - - # we only match blocks and variables if braces / parentheses - # are balanced. continue parsing with the lower rule which - # is the operator rule. 
do this only if the end tags look - # like operators - if balancing_stack and tokens in ( - TOKEN_VARIABLE_END, - TOKEN_BLOCK_END, - TOKEN_LINESTATEMENT_END, - ): - continue - - # tuples support more options - if isinstance(tokens, tuple): - groups: t.Sequence[str] = m.groups() - - if isinstance(tokens, OptionalLStrip): - # Rule supports lstrip. Match will look like - # text, block type, whitespace control, type, control, ... - text = groups[0] - # Skipping the text and first type, every other group is the - # whitespace control for each type. One of the groups will be - # -, +, or empty string instead of None. - strip_sign = next(g for g in groups[2::2] if g is not None) - - if strip_sign == "-": - # Strip all whitespace between the text and the tag. - stripped = text.rstrip() - newlines_stripped = text[len(stripped) :].count("\n") - groups = [stripped, *groups[1:]] - elif ( - # Not marked for preserving whitespace. - strip_sign != "+" - # lstrip is enabled. - and self.lstrip_blocks - # Not a variable expression. - and not m.groupdict().get(TOKEN_VARIABLE_BEGIN) - ): - # The start of text between the last newline and the tag. - l_pos = text.rfind("\n") + 1 - - if l_pos > 0 or line_starting: - # If there's only whitespace between the newline and the - # tag, strip it. - if whitespace_re.fullmatch(text, l_pos): - groups = [text[:l_pos], *groups[1:]] - - for idx, token in enumerate(tokens): - # failure group - if token.__class__ is Failure: - raise token(lineno, filename) - # bygroup is a bit more complex, in that case we - # yield for the current token the first named - # group that matched - elif token == "#bygroup": - for key, value in m.groupdict().items(): - if value is not None: - yield lineno, key, value - lineno += value.count("\n") - break - else: - raise RuntimeError( - f"{regex!r} wanted to resolve the token dynamically" - " but no group matched" - ) - # normal group - else: - data = groups[idx] - - if data or token not in ignore_if_empty: - yield lineno, token, data - - lineno += data.count("\n") + newlines_stripped - newlines_stripped = 0 - - # strings as token just are yielded as it. 
- else: - data = m.group() - - # update brace/parentheses balance - if tokens == TOKEN_OPERATOR: - if data == "{": - balancing_stack.append("}") - elif data == "(": - balancing_stack.append(")") - elif data == "[": - balancing_stack.append("]") - elif data in ("}", ")", "]"): - if not balancing_stack: - raise TemplateSyntaxError( - f"unexpected '{data}'", lineno, name, filename - ) - - expected_op = balancing_stack.pop() - - if expected_op != data: - raise TemplateSyntaxError( - f"unexpected '{data}', expected '{expected_op}'", - lineno, - name, - filename, - ) - - # yield items - if data or tokens not in ignore_if_empty: - yield lineno, tokens, data - - lineno += data.count("\n") - - line_starting = m.group()[-1:] == "\n" - # fetch new position into new variable so that we can check - # if there is a internal parsing error which would result - # in an infinite loop - pos2 = m.end() - - # handle state changes - if new_state is not None: - # remove the uppermost state - if new_state == "#pop": - stack.pop() - # resolve the new state by group checking - elif new_state == "#bygroup": - for key, value in m.groupdict().items(): - if value is not None: - stack.append(key) - break - else: - raise RuntimeError( - f"{regex!r} wanted to resolve the new state dynamically" - f" but no group matched" - ) - # direct state name given - else: - stack.append(new_state) - - statetokens = self.rules[stack[-1]] - # we are still at the same position and no stack change. - # this means a loop without break condition, avoid that and - # raise error - elif pos2 == pos: - raise RuntimeError( - f"{regex!r} yielded empty string without stack change" - ) - - # publish new function and start again - pos = pos2 - break - # if loop terminated without break we haven't found a single match - # either we are at the end of the file or we have a problem - else: - # end of text - if pos >= source_length: - return - - # something went wrong - raise TemplateSyntaxError( - f"unexpected char {source[pos]!r} at {pos}", lineno, name, filename - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/cli/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/deelerb/3dselfie/PIFu/lib/sdf.py b/spaces/deelerb/3dselfie/PIFu/lib/sdf.py deleted file mode 100644 index e87e639eb94993c3e4068d6bd4d21f902aee7694..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/lib/sdf.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np - - -def create_grid(resX, resY, resZ, b_min=np.array([0, 0, 0]), b_max=np.array([1, 1, 1]), transform=None): - ''' - Create a dense grid of given resolution and bounding box - :param resX: resolution along X axis - :param resY: resolution along Y axis - :param resZ: resolution along Z axis - :param b_min: vec3 (x_min, y_min, z_min) bounding box corner - :param b_max: vec3 (x_max, y_max, z_max) bounding box corner - :return: [3, resX, resY, resZ] coordinates of the grid, and transform matrix from mesh index - ''' - coords = np.mgrid[:resX, :resY, :resZ] - coords = coords.reshape(3, -1) - coords_matrix = np.eye(4) - length = b_max - b_min - coords_matrix[0, 0] = length[0] / resX - coords_matrix[1, 1] = length[1] / resY - coords_matrix[2, 2] = length[2] / resZ - coords_matrix[0:3, 3] = b_min - coords = np.matmul(coords_matrix[:3, :3], coords) + 
coords_matrix[:3, 3:4] - if transform is not None: - coords = np.matmul(transform[:3, :3], coords) + transform[:3, 3:4] - coords_matrix = np.matmul(transform, coords_matrix) - coords = coords.reshape(3, resX, resY, resZ) - return coords, coords_matrix - - -def batch_eval(points, eval_func, num_samples=512 * 512 * 512): - num_pts = points.shape[1] - sdf = np.zeros(num_pts) - - num_batches = num_pts // num_samples - for i in range(num_batches): - sdf[i * num_samples:i * num_samples + num_samples] = eval_func( - points[:, i * num_samples:i * num_samples + num_samples]) - if num_pts % num_samples: - sdf[num_batches * num_samples:] = eval_func(points[:, num_batches * num_samples:]) - - return sdf - - -def eval_grid(coords, eval_func, num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - coords = coords.reshape([3, -1]) - sdf = batch_eval(coords, eval_func, num_samples=num_samples) - return sdf.reshape(resolution) - - -def eval_grid_octree(coords, eval_func, - init_resolution=64, threshold=0.01, - num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - - sdf = np.zeros(resolution) - - dirty = np.ones(resolution, dtype=np.bool) - grid_mask = np.zeros(resolution, dtype=np.bool) - - reso = resolution[0] // init_resolution - - while reso > 0: - # subdivide the grid - grid_mask[0:resolution[0]:reso, 0:resolution[1]:reso, 0:resolution[2]:reso] = True - # test samples in this iteration - test_mask = np.logical_and(grid_mask, dirty) - #print('step size:', reso, 'test sample size:', test_mask.sum()) - points = coords[:, test_mask] - - sdf[test_mask] = batch_eval(points, eval_func, num_samples=num_samples) - dirty[test_mask] = False - - # do interpolation - if reso <= 1: - break - for x in range(0, resolution[0] - reso, reso): - for y in range(0, resolution[1] - reso, reso): - for z in range(0, resolution[2] - reso, reso): - # if center marked, return - if not dirty[x + reso // 2, y + reso // 2, z + reso // 2]: - continue - v0 = sdf[x, y, z] - v1 = sdf[x, y, z + reso] - v2 = sdf[x, y + reso, z] - v3 = sdf[x, y + reso, z + reso] - v4 = sdf[x + reso, y, z] - v5 = sdf[x + reso, y, z + reso] - v6 = sdf[x + reso, y + reso, z] - v7 = sdf[x + reso, y + reso, z + reso] - v = np.array([v0, v1, v2, v3, v4, v5, v6, v7]) - v_min = v.min() - v_max = v.max() - # this cell is all the same - if (v_max - v_min) < threshold: - sdf[x:x + reso, y:y + reso, z:z + reso] = (v_max + v_min) / 2 - dirty[x:x + reso, y:y + reso, z:z + reso] = False - reso //= 2 - - return sdf.reshape(resolution) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_metagpt_llm_api.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_metagpt_llm_api.py deleted file mode 100644 index 9c8356ca6bdd70a2e6aa9817c2b5417a3b8d52fe..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/provider/test_metagpt_llm_api.py +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/30 -@Author : mashenquan -@File : test_metagpt_llm_api.py -""" -from metagpt.provider.metagpt_llm_api import MetaGPTLLMAPI - - -def test_metagpt(): - llm = MetaGPTLLMAPI() - assert llm - - -if __name__ == "__main__": - test_metagpt() diff --git a/spaces/diacanFperku/AutoGPT/Multiecuscan 1.3 Crack [VERIFIED].md b/spaces/diacanFperku/AutoGPT/Multiecuscan 1.3 Crack [VERIFIED].md deleted file mode 100644 index 003a10aeee4f0e902a41d69d203349542494fc8f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Multiecuscan 1.3 Crack [VERIFIED].md +++ /dev/null @@ 
-1,22 +0,0 @@
-
-Multiecuscan 1.3 Crack
-
-Download https://gohhs.com/2uFT7S
-
-exe, and sys.inf (auto) = not supported.
-
-MiEcu Scanner Is there a way to get this scanner to work with my 2011 chevy? I have seen that some can do that and I really would love to get it to work. I also have a 2012 gmc and it is a piece of junk.
-
-Hey guys! I have been having some problems with a 2003 Saturn Vue. The tail light wont come on even when I turn it on. I just got to the mechanic and he said that it might be the sensor and so he replaced it and still nothing. When I start my car and it starts up the headlights will blink like they are starting to come on but the rest of the lights dont come on. Any idea? I would appreciate it!!
-
-I need to verify the codes for a 2004 Hummer H3, does anyone know the vehicle identification code? I went to my dealer and they said it was on the bill of sale but when I searched my VIN, I was told its not on the bill of sale.
-
-Sir I am having a problem in my 2007 honda accord a dvmax. I am facing a problem with the fuel pump sensor and I don't know what to do, I have tried two fuel pumps which my husband and I had taken it to the mechanic and the problem still exists.
-
-I have a 2001 and it was running fine a couple weeks ago and I turned it off and left it. My husband went to get gas and it wont start again, the battery is good. What do I do? It was never checked over for this problem. Please help.
-
-I have a 2004 Acura Integra and I lost my transponder so I was just wondering if there was anyway to use the radio remote to unlock the doors and move the car forward and back? Thanks for the help.
-
-I bought a 1999 Passat and since it was a used car it needed a new alternator and the dealer gave me a 30 day warranty on it. Today i drove it home and when i started the car the car shut off. I checked all the fuses and got a no power in the starter. the battery is fine it is charged. Can i use my spare alternator? Or is it not compatible with the car?
-
-I have a 2004 Jeep Grand Cherokee Laredo. When I was driving home last week it started to make a grinding noise then lost all power and I was in 4fefd39f24
-
-
-

    diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/__init__.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/__init__.py deleted file mode 100644 index 907bec9739e2e70dcfb019c30313f9888d0ddd2e..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .trainer import Trainer -from .indexer import Indexer -from .searcher import Searcher - -from .modeling.checkpoint import Checkpoint \ No newline at end of file diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/setup.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/app.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/app.py deleted file mode 100644 index becf3f25f95411d6788a61705403237201b282fc..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/app.py +++ /dev/null @@ -1,182 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, 
tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', f"out.ogg") - - renamed_input_path = os.path.join('in', f"in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio),newogg - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/nailin/nailin.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - 乃琳 Bert-Vits2在线语音生成\n - 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n - 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n - 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速') - btn = gr.Button("开启AI语音之旅吧!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - 模型汇总:\n - 星瞳 https://huggingface.co/spaces/digitalxingtong/Xingtong-Bert-Vits2 \n - 星瞳 朗读专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2 \n - 星瞳 长文本专用 
https://huggingface.co/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2 \n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output,ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/monotonic_align/__init__.py b/spaces/digitalxingtong/Kino-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/django-ochain/AI-market-researcher/README.md b/spaces/django-ochain/AI-market-researcher/README.md deleted file mode 100644 index ed55e0a4c1119ba0bfe805661d3f0eb6790d3449..0000000000000000000000000000000000000000 --- a/spaces/django-ochain/AI-market-researcher/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AI Market Researcher -emoji: 🌖 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dorkai/SINGPT-Temporary/modules/ui.py b/spaces/dorkai/SINGPT-Temporary/modules/ui.py deleted file mode 100644 index bb193e35c11b2a3d474ea89e7567206a3343395a..0000000000000000000000000000000000000000 --- a/spaces/dorkai/SINGPT-Temporary/modules/ui.py +++ /dev/null @@ -1,92 +0,0 @@ -import gradio as gr - -refresh_symbol = '\U0001f504' # 🔄 - -css = """ -.tabs.svelte-710i53 { - margin-top: 0 -} -.py-6 { - padding-top: 2.5rem -} -.dark #refresh-button { - background-color: #ffffff1f; -} -#refresh-button { - flex: none; - margin: 0; - padding: 0; - min-width: 50px; - border: none; - box-shadow: none; - border-radius: 10px; - background-color: #0000000d; -} -#download-label, #upload-label { - min-height: 0 -} -#accordion { -} -.dark svg { - fill: white; -} -svg { - display: unset !important; - vertical-align: middle !important; - margin: 5px; -} -ol li p, ul li p { - display: inline-block; -} -""" - -chat_css = """ -.h-\[40vh\], .wrap.svelte-byatnx.svelte-byatnx.svelte-byatnx { - height: 66.67vh -} -.gradio-container { - max-width: 800px !important; - margin-left: auto !important; - margin-right: auto !important; -} -.w-screen { - width: unset -} -div.svelte-362y77>*, div.svelte-362y77>.form>* { - flex-wrap: nowrap -} -/* fixes the API documentation in chat mode */ -.api-docs.svelte-1iguv9h.svelte-1iguv9h.svelte-1iguv9h { - display: grid; -} -.pending.svelte-1ed2p3z { - opacity: 1; -} -""" - -class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return "button" - -def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_id): - def refresh(): - refresh_method() - args = refreshed_args() if callable(refreshed_args) else refreshed_args - - for k, v in args.items(): - setattr(refresh_component, k, v) - - return gr.update(**(args or {})) - - refresh_button = ToolButton(value=refresh_symbol, elem_id=elem_id) - refresh_button.click( - fn=refresh, - inputs=[], - outputs=[refresh_component] - ) - return refresh_button diff --git a/spaces/dyguay/object-detection-api/app.py b/spaces/dyguay/object-detection-api/app.py deleted file mode 100644 index d69acb4a6357638f3ce8163555337e079cfbcaa5..0000000000000000000000000000000000000000 --- a/spaces/dyguay/object-detection-api/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -from ObjectDetector import 
Detector - -detector = Detector() -iface = gr.Interface(detector.detectObject, gr.inputs.Image(), "image", - title="Object Detection API", - article="""Object detection is a computer vision technique that allows us to identify and locate objects in an image or video. With this kind of identification and localization, object detection can be used to count objects in a scene and determine and track their precise locations, all while accurately labeling them. -
    - Github Link -

    -
    - Copyright © 2021 -
    """, - css=""" - body { - background-color: #f4e1e6; - text-align: center; - font-size: 16px; - } - a { - color: #096ac5; - } - a:hover { - color: #096ac5; - }""" - ) - -iface.launch() \ No newline at end of file diff --git a/spaces/dyhzq/vits-uma-genshin-honkai/app.py b/spaces/dyhzq/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000 --- a/spaces/dyhzq/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor -import torch - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model).to(device) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 500: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/dyhzq/vits-uma-genshin-honkai/utils.py b/spaces/dyhzq/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/dyhzq/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - 
interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/echometerain/whos-that-pokemon/README.md b/spaces/echometerain/whos-that-pokemon/README.md deleted file mode 100644 index 1504de1c056413a43b31b4d8f2ea03cdbd842e35..0000000000000000000000000000000000000000 --- a/spaces/echometerain/whos-that-pokemon/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Who's That Pokemon? -emoji: ⚡ -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -# Who's That Pokemon? - -[Kaggle link](https://www.kaggle.com/code/echometerhhwl/who-s-that-pokemon-improved) | [Huggingface demo](https://huggingface.co/spaces/echometerain/whos-that-pokemon) - -Identifies generation 1 pokemon based on [35627 images](https://www.kaggle.com/datasets/echometerhhwl/pokemon-gen-1-38914)! 
- -![image](https://github.com/echometerain/whos-that-pokemon/assets/70437021/68b0ed63-4be6-4d30-a06f-a8c1b59060a3) - -![image](https://github.com/echometerain/whos-that-pokemon/assets/70437021/e46821ee-26ce-4976-8545-da8786fdc9c0) - -Default model based on convnext_tiny (`model.pkl`) has 95.7% accuracy (with dataset augmented at runtime) - -![image](https://github.com/echometerain/whos-that-pokemon/assets/70437021/b3006352-ad07-4227-9da3-aa9c182d3303) - -Alternative model based on resnet34 (`model-r34.pkl`) has 94.3% accuracy (with dataset augmented at runtime) - -![image](https://github.com/echometerain/whos-that-pokemon/assets/70437021/5e41262e-86b9-4dde-a48a-56951ea25644) diff --git a/spaces/erbanku/gpt-academic/crazy_functional.py b/spaces/erbanku/gpt-academic/crazy_functional.py deleted file mode 100644 index 4b29aef50a59265fc0b5d6b6dea68c6efea939bb..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/crazy_functional.py +++ /dev/null @@ -1,239 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - - -def get_crazy_functions(): - ###################### 第一组插件 ########################### - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个Rect项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - from crazy_functions.Latex全文润色 import Latex英文润色 - from crazy_functions.询问多个大语言模型 import 同时问询 - from crazy_functions.解析项目源代码 import 解析一个Lua项目 - from crazy_functions.解析项目源代码 import 解析一个CSharp项目 - from crazy_functions.总结word文档 import 总结word文档 - from crazy_functions.解析JupyterNotebook import 解析ipynb文件 - from crazy_functions.对话历史存档 import 对话历史存档 - from crazy_functions.对话历史存档 import 载入对话历史存档 - from crazy_functions.对话历史存档 import 删除所有本地对话历史记录 - - from crazy_functions.批量Markdown翻译 import Markdown英译中 - function_plugins = { - "解析整个Python项目": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(解析一个Python项目) - }, - "载入对话历史存档": { - "AsButton":False, - "Function": HotReload(载入对话历史存档) - }, - "删除所有本地对话历史记录(请谨慎操作)": { - "AsButton":False, - "Function": HotReload(删除所有本地对话历史记录) - }, - "[测试功能] 解析Jupyter Notebook文件": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(解析ipynb文件), - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示 - }, - "批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目的头文件) - }, - "解析整个C++项目(.cpp/.hpp/.c/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目) - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Golang项目) - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Java项目) - }, - "解析整个React项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Rect项目) - }, - "解析整个Lua项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Lua项目) - }, - "解析整个CSharp项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 
加入下拉菜单中 - "Function": HotReload(解析一个CSharp项目) - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(读文章写摘要) - }, - "Markdown/Readme英译中": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "Function": HotReload(Markdown英译中) - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量生成函数注释) - }, - "保存当前的对话": { - "Function": HotReload(对话历史存档) - }, - "[多线程Demo] 解析此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析项目本身) - }, - "[多线程demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(全项目切换英文) - }, - "[插件demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入 - from crazy_functions.Latex全文润色 import Latex中文润色 - from crazy_functions.Latex全文翻译 import Latex中译英 - from crazy_functions.Latex全文翻译 import Latex英译中 - from crazy_functions.批量Markdown翻译 import Markdown中译英 - - function_plugins.update({ - "批量翻译PDF文档(多线程)": { - "Color": "stop", - "AsButton": True, # 加入下拉菜单中 - "Function": HotReload(批量翻译PDF文档) - }, - "询问多个GPT模型": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(同时问询) - }, - "[测试功能] 批量总结PDF文档": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(批量总结PDF文档) - }, - "[测试功能] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "谷歌学术检索助手(输入谷歌学术搜索页url)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(谷歌检索小助手) - }, - - "理解PDF文档内容 (模仿ChatPDF)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(理解PDF文档内容标准文件输入) - }, - "[测试功能] 英文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文润色) - }, - "[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中文润色) - }, - "[测试功能] Latex项目全文中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中译英) - }, - "[测试功能] Latex项目全文英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英译中) - }, - "[测试功能] 批量Markdown中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown中译英) - }, - - - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - - from crazy_functions.联网的ChatGPT import 连接网络回答问题 - function_plugins.update({ - "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": 
HotReload(连接网络回答问题) - } - }) - - from crazy_functions.解析项目源代码 import 解析任意code项目 - function_plugins.update({ - "解析项目源代码(手动指定和筛选源代码文件类型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示 - "Function": HotReload(解析任意code项目) - }, - }) - from crazy_functions.询问多个大语言模型 import 同时问询_指定模型 - function_plugins.update({ - "询问多个GPT模型(手动指定询问哪些模型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示 - "Function": HotReload(同时问询_指定模型) - }, - }) - ###################### 第n组插件 ########################### - return function_plugins diff --git a/spaces/eson/tokenizer-arena/vocab/chinese_llama/merge_tokenizer/README.md b/spaces/eson/tokenizer-arena/vocab/chinese_llama/merge_tokenizer/README.md deleted file mode 100644 index 9929ff14d5fa0ef87a9695767bac33b4ec3bcbdf..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/chinese_llama/merge_tokenizer/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: llama 词表扩充 ---- - -## 词表合并问题 - -https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/128 - -不同语料上统计的score,不可比吧? \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/README.md b/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/README.md deleted file mode 100644 index 4a08ebbb45d48451ad45b66806709f9b4edf091c..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_japanese/README.md +++ /dev/null @@ -1,64 +0,0 @@ - - -## vocab.txt - -``` -るのは -よね -写真,寫真,冩真,写眞,寫眞,冩眞 -マイ -そん -女性,𠨰性,⼥性,女𧢱,𠨰𧢱,⼥𧢱 -内容,內容,内㣑,内㝐,内彮,内𠕺,內㣑,內㝐,內彮,內𠕺 -``` - -怎么还有不同写法?? - - - - -## 文本归一化 - -以下的normalization,在生成任务中并不好。 - -``` - self.content_repatter1 = re.compile(r"(https?|ftp)(:\/\/[-_\.!~*\'()a-zA-Z0-9;\/?:\@&=\+$,%#]+)") - self.content_repatter2 = re.compile(r"[A-Za-z0-9\._+]*@[\-_0-9A-Za-z]+(\.[A-Za-z]+)*") - self.content_repatter3 = re.compile(r"[\(]{0,1}[0-9]{2,4}[\)\-\(]{0,1}[0-9]{2,4}[\)\-]{0,1}[0-9]{3,4}") - self.content_repatter4 = re.compile( - r"([12]\d{3}[/\-年])*(0?[1-9]|1[0-2])[/\-月]((0?[1-9]|[12][0-9]|3[01])日?)*(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*" - ) - self.content_repatter5 = re.compile( - r"(明治|大正|昭和|平成|令和|㍾|㍽|㍼|㍻|\u32ff)\d{1,2}年(0?[1-9]|1[0-2])月(0?[1-9]|[12][0-9]|3[01])日(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*" - ) - self.content_repatter6 = re.compile( - r"((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*億)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*万)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*千)*(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*(千円|万円|千万円|円|千ドル|万ドル|千万ドル|ドル|千ユーロ|万ユーロ|千万ユーロ|ユーロ)+(\(税込\)|\(税抜\)|\+tax)*" - ) - - def clean_text(self, content): - content = self.content_repatter1.sub("", content) - content = self.content_repatter2.sub("", content) - content = self.content_repatter3.sub("", content) - content = self.content_repatter4.sub("", content) - content = self.content_repatter5.sub("", content) - content = self.content_repatter6.sub("", content) - content = content.translate(self.content_trans1) - while "" in content: - content = content.replace("", "") - return content - - def tokenize(self, text, clean=False): - text = text.replace(" ", "") - text = text.replace(" ", "") - text = text.replace("\r\n", "
    ") - text = text.replace("\n", "
    ") - text = text.replace("\r", "
    ") - text = text.replace("\t", "") - text = text.replace("—", "ー") - text = text.replace("−", "ー") - for k, v in self.emoji["emoji"].items(): - if k in text: - text = text.replace(k, v) - if clean: - text = self.clean_text(text) -``` \ No newline at end of file diff --git a/spaces/etweedy/dreambooth-tessa/app.py b/spaces/etweedy/dreambooth-tessa/app.py deleted file mode 100644 index b4f098ef894c370dda6c40d8146527a8f66dd2a3..0000000000000000000000000000000000000000 --- a/spaces/etweedy/dreambooth-tessa/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -description = "A version of Stable Diffusion v1.5 which knows about my dog, Tessa. This model was fine-tuned using the Dreambooth technique (https://dreambooth.github.io/). To generate a picture of Tessa, provide a prompt referring to '\ dog'. Make sure to include the brackets. This is running on a free CPU, so for faster inference please duplicate this space." -title = "Dreambooth Tessa Generator" -examples = [["A formal portrait of dog in the style of Rubens, masterpiece, stunning piece of art."],["A hybrid of dog and baby yoda, green fur, very cute eyes, stunning high definition rendering, 4k, unreal engine, trending on artstation hq."],["Rendering of dog, classical floral elements emanating from center of face, woodcutting template, decorative design, classical ornament, motif, bilateral symmetry, roses, leaves, flowers, buds, flowering buds, feathers, negative space, highly detailed etching"]] -interface = gr.Interface.load("models/etweedy/tessa", - description=description, - title = title, - examples = examples -) -interface.launch() \ No newline at end of file diff --git a/spaces/evaluate-measurement/toxicity/README.md b/spaces/evaluate-measurement/toxicity/README.md deleted file mode 100644 index de132f2768eb5fc1322abc31f00ed17b5e260141..0000000000000000000000000000000000000000 --- a/spaces/evaluate-measurement/toxicity/README.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: Toxicity -emoji: 🤗 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false -tags: -- evaluate -- measurement -description: >- - The toxicity measurement aims to quantify the toxicity of the input texts using a pretrained hate speech classification model. ---- - -# Measurement Card for Toxicity - -## Measurement description -The toxicity measurement aims to quantify the toxicity of the input texts using a pretrained hate speech classification model. - -## How to use - -The default model used is [roberta-hate-speech-dynabench-r4](https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target). In this model, ‘hate’ is defined as “abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation.” Definitions used by other classifiers may vary. -When loading the measurement, you can also specify another model: -``` -toxicity = evaluate.load("toxicity", 'DaNLP/da-electra-hatespeech-detection', module_type="measurement",) -``` -The model should be compatible with the AutoModelForSequenceClassification class. -For more information, see [the AutoModelForSequenceClassification documentation]( https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForSequenceClassification). - -Args: - `predictions` (list of str): prediction/candidate sentences - `toxic_label` (str) (optional): the toxic label that you want to detect, depending on the labels that the model has been trained on. 
- This can be found using the `id2label` function, e.g.: - ```python - >>> model = AutoModelForSequenceClassification.from_pretrained("DaNLP/da-electra-hatespeech-detection") - >>> model.config.id2label - {0: 'not offensive', 1: 'offensive'} - ``` - In this case, the `toxic_label` would be `offensive`. - `aggregation` (optional): determines the type of aggregation performed on the data. If set to `None`, the scores for each prediction are returned. - Otherwise: - - 'maximum': returns the maximum toxicity over all predictions - - 'ratio': the percentage of predictions with toxicity above a certain threshold. - `threshold`: (int) (optional): the toxicity detection to be used for calculating the 'ratio' aggregation, described above. The default threshold is 0.5, based on the one established by [RealToxicityPrompts](https://arxiv.org/abs/2009.11462). - -## Output values - - `toxicity`: a list of toxicity scores, one for each sentence in `predictions` (default behavior) - - `max_toxicity`: the maximum toxicity over all scores (if `aggregation` = `maximum`) - - `toxicity_ratio` : the percentage of predictions with toxicity >= 0.5 (if `aggregation` = `ratio`) - - -### Values from popular papers - - -## Examples - Example 1 (default behavior): -```python ->>> toxicity = evaluate.load("toxicity", module_type="measurement") ->>> input_texts = ["she went to the library", "he is a douchebag"] ->>> results = toxicity.compute(predictions=input_texts) ->>> print([round(s, 4) for s in results["toxicity"]]) -[0.0002, 0.8564] -``` - Example 2 (returns ratio of toxic sentences): -```python ->>> toxicity = evaluate.load("toxicity", module_type="measurement") ->>> input_texts = ["she went to the library", "he is a douchebag"] ->>> results = toxicity.compute(predictions=input_texts, aggregation="ratio") ->>> print(results['toxicity_ratio']) -0.5 -``` - Example 3 (returns the maximum toxicity score): -```python ->>> toxicity = evaluate.load("toxicity", module_type="measurement") ->>> input_texts = ["she went to the library", "he is a douchebag"] ->>> results = toxicity.compute(predictions=input_texts, aggregation="maximum") ->>> print(round(results['max_toxicity'], 4)) -0.8564 -``` - Example 4 (uses a custom model): -```python ->>> toxicity = evaluate.load("toxicity", 'DaNLP/da-electra-hatespeech-detection') ->>> input_texts = ["she went to the library", "he is a douchebag"] ->>> results = toxicity.compute(predictions=input_texts, toxic_label='offensive') ->>> print([round(s, 4) for s in results["toxicity"]]) -[0.0176, 0.0203] -``` - - - -## Citation - -```bibtex -@inproceedings{vidgen2021lftw, - title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection}, - author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela}, - booktitle={ACL}, - year={2021} -} -``` - -```bibtex -@article{gehman2020realtoxicityprompts, - title={Realtoxicityprompts: Evaluating neural toxic degeneration in language models}, - author={Gehman, Samuel and Gururangan, Suchin and Sap, Maarten and Choi, Yejin and Smith, Noah A}, - journal={arXiv preprint arXiv:2009.11462}, - year={2020} -} - -``` - -## Further References diff --git a/spaces/falterWliame/Face_Mask_Detection/EFILM 1.5 3 DOWNLOAD.md b/spaces/falterWliame/Face_Mask_Detection/EFILM 1.5 3 DOWNLOAD.md deleted file mode 100644 index f93ce1973140d1f30da2b111830b736d88cdff5a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/EFILM 1.5 3 DOWNLOAD.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    How to Download and Use eFilm Workstation 1.5.3

    -

    eFilm Workstation is a DICOM image viewer for diagnosis of medical images, endorsed by radiologists worldwide[^1^]. It has an extensive track record among more than 40,000 users worldwide and offers a rich functionality needed for interpreting radiographs on a monitor[^1^]. In this article, we will show you how to download and use eFilm Workstation 1.5.3, which is one of the older versions of the software.

    -

    How to Download eFilm Workstation 1.5.3

    -

    To download eFilm Workstation 1.5.3, you need to visit the developer's website[^2^] and click on the "Download now from developer's website" button. You will be redirected to a page where you can choose your operating system and language. The file size is about 100 MB and the download may take some time depending on your internet speed.

    -

    EFILM 1.5 3 DOWNLOAD


    Download Zip ··· https://urlca.com/2uDcJg



    -

    Once the download is complete, you need to run the installer file and follow the instructions on the screen. You will need to enter your license key or request a trial key if you don't have one. The license key costs $950 and can be purchased from the developer's website[^2^]. You will also need to agree to the terms and conditions of use before proceeding with the installation.

    -

    How to Use eFilm Workstation 1.5.3

    -

    After installing eFilm Workstation 1.5.3, you can launch it from your desktop or start menu. You will see a user interface with various tools and menus for viewing and manipulating medical images. You can open and close patient studies, zoom in and out of specific areas, check relevant organs and circulation systems, measure distances and angles, adjust window width and level values, compare multiple series synchronously, create simple MPRs and 3D cursors, export images to different formats, create CDs or DVDs with easy viewers, link to other applications, etc.[^1^] [^2^] [^3^]

    -

    To open a patient study, you can either click on the "Open" button on the toolbar or go to "File" > "Open" from the menu bar. You can browse your computer or network for DICOM files or folders and select them to open. You can also drag and drop files or folders onto the eFilm window to open them.

    -

    To view an image, you can either double-click on it or select it and click on the "View" button on the toolbar. You can also use the arrow keys or the mouse wheel to scroll through images in a series. You can change the layout of the images by clicking on the "Layout" button on the toolbar and choosing a preset or custom layout.

    -

    To manipulate an image, you can use various tools on the toolbar or menu bar, such as zoom, pan, rotate, flip, window width/level, measure, annotate, etc. You can also right-click on an image to access more options, such as reference lines, synchro display, MPR, 3D cursor, PET-CT fusion, etc.

    -

    To export an image, you can either click on the "Export" button on the toolbar or go to "File" > "Export" from the menu bar. You can choose to export images as DICOM, JPEG, BMP, AVI or other formats. You can also create CDs or DVDs with easy viewers by clicking on the "Create CD/DVD" button on the toolbar or going to "File" > "Create CD/DVD" from the menu bar.

    -

    Conclusion

    -

    eFilm Workstation 1.5.3 is a powerful and user-friendly DICOM image viewer for diagnosis of medical images. It offers a rich functionality needed for interpreting radiographs on a monitor and has an extensive track record among more than 40,000 users worldwide[^1^]. To download and use eFilm Workstation 1.5.3, you need to visit the developer's website[^2^], download and install the software, enter your license key or request a trial key, and then open and manipulate images as you wish

    -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Ea Games Generic Multi HOT Keygen Free Download.md b/spaces/falterWliame/Face_Mask_Detection/Ea Games Generic Multi HOT Keygen Free Download.md deleted file mode 100644 index 6970db63926a91ebfbce95958230e822664b34d6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Ea Games Generic Multi HOT Keygen Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ea Games Generic Multi Keygen Free Download


    Download File === https://urlca.com/2uDbWv



    - - 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Gravity PATCHED Full Hd Movie In Hindi Download.md b/spaces/falterWliame/Face_Mask_Detection/Gravity PATCHED Full Hd Movie In Hindi Download.md deleted file mode 100644 index 36ce50bfa63a4fe286d51b7daf9b8ff34ceb7469..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Gravity PATCHED Full Hd Movie In Hindi Download.md +++ /dev/null @@ -1,20 +0,0 @@ -
    -

    How to Download Gravity Full HD Movie in Hindi

    -

    Gravity is a 2013 science fiction thriller film directed by Alfonso Cuarón and starring Sandra Bullock and George Clooney. The film follows two astronauts who are stranded in space after their shuttle is destroyed by debris. Gravity was critically acclaimed for its visual effects, cinematography, direction, and performances. It won seven Academy Awards, including Best Director, Best Cinematography, and Best Visual Effects.

    -

    gravity full hd movie in hindi download


    Download Zip ✦✦✦ https://urlca.com/2uDcgB



    -

    If you want to watch Gravity in full HD quality with Hindi audio, you have several options to download it from the internet. Here are some of the best websites that offer Gravity full HD movie in Hindi download:

    -
      -
    • PogoLinks: This website provides multiple download links for Gravity in different resolutions and formats. You can choose from 480p, 720p, 720p HEVC, and 1080p. The audio is dual-audio with Hindi-Eng subtitles. You can also watch the trailer and read the synopsis of the movie on this website[^1^].
    • -
    • Free Warez Stuff: This website offers a direct download link for Gravity in 720p Blu-Ray quality with dual-audio and Esubs. The file size is 1 GB and the movie name is Gravity (2013) 720p Blu-Ray x264 [Dual-Audio] [BD 448KBPS] [Hindi 5.1 + English 5.1] - Esubs - Mafiaking[^2^]. You can also see the screenshots and IMDB info of the movie on this website.
    • -
    • Archive.org: This website offers a CAMRip version of Gravity with English audio and video. The file size is 446 MB and the format is AVI. The video quality is not very good but it is watchable. The movie name is Gravity 2013 FULL ViDEO ENG AUDiO CAMRip AyFdE[^3^]. You can also stream the movie online on this website.
    • -
    -

    These are some of the best websites that offer Gravity full HD movie in Hindi download. However, you should be careful while downloading movies from these websites as they may contain viruses or malware that can harm your device. You should also respect the copyrights of the movie makers and watch Gravity legally on streaming platforms or buy the DVD or Blu-Ray if available.

    - -

    Gravity Movie Review

    -

    Gravity is not only a stunning visual spectacle, but also a gripping and emotional story of survival and resilience. The film immerses the viewers in the vast and terrifying emptiness of space, where every breath, movement, and sound matters. The film also explores the themes of isolation, loss, and hope, as the main characters struggle to find a way back to Earth.

    -

    The film is directed by Alfonso Cuarón, who co-wrote the screenplay with his son Jonás Cuarón. The film features only two actors on screen: Sandra Bullock as Dr. Ryan Stone, a medical engineer on her first space mission, and George Clooney as Matt Kowalski, a veteran astronaut on his final mission. Both actors deliver outstanding performances that convey the physical and emotional challenges of their situation. Bullock especially shines as the protagonist, who undergoes a remarkable transformation from a timid and vulnerable rookie to a determined and courageous survivor.

    -

    -

    The film is also a technical marvel, with stunning cinematography, editing, sound design, and visual effects. The film uses long takes, fluid camera movements, and 3D technology to create a realistic and immersive experience of being in space. The film also uses silence and sound effects to create tension and contrast. The film's score by Steven Price is subtle and haunting, complementing the mood and atmosphere of the film.

    -

    Gravity is a masterpiece of filmmaking that deserves to be seen on the biggest screen possible. It is a thrilling and moving journey that will leave you breathless and awestruck. It is one of the best films of 2013 and one of the best science fiction films ever made.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Among Us APK Download and Play the Ultimate Space Mystery Game.md b/spaces/fatiXbelha/sd/Among Us APK Download and Play the Ultimate Space Mystery Game.md deleted file mode 100644 index af6161b611dc1fa373f5766ecf45a0d2980b101b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Among Us APK Download and Play the Ultimate Space Mystery Game.md +++ /dev/null @@ -1,129 +0,0 @@ - -

    What is Among Us and why is it so popular?

    -

    If you are looking for a fun and exciting multiplayer game that you can play with your friends or strangers online, you might want to check out Among Us. Released in 2018 by InnerSloth, Among Us is a casual, free-to-play game that has become a global phenomenon in 2020. It is available on various platforms, including Android, iOS, Windows, and Nintendo Switch. It also supports cross-play, meaning you can play with anyone regardless of their device.

    -

    Among Us is a game of deception, teamwork, and betrayal. You can play with 4 to 15 players in a spaceship that is preparing for departure. However, one or more players are impostors who are secretly trying to kill everyone else. As a crewmate, your goal is to complete tasks around the ship or find out who the impostor is and vote them off. As an impostor, your goal is to sabotage the ship, kill crewmates, or convince them that you are innocent.

    -

    amongus apk


    Download File →→→ https://urllie.com/2uNxYh



    -

    Among Us has many features and modes that make it appealing and addictive. You can choose from three different maps: The Skeld, Mira HQ, and Polus. You can also customize the game settings, such as the number of impostors, the speed of players, the kill cooldown, the emergency meetings, the task difficulty, and more. You can also chat with other players during meetings or use voice chat apps like Discord for more communication.

    -

    Among Us has become so popular because it is easy to play, fun to watch, and suitable for all ages. It also offers a lot of replay value, as every game is different depending on the players' actions and interactions. Moreover, it has attracted many streamers, celebrities, and influencers who have played and promoted the game on various platforms. It has also inspired many memes, fan arts, animations, parodies, and merchandise.

    -

    How to download and install Among Us APK on Android devices?

    -

    If you want to play Among Us on your Android device, you have two options: you can either download it from the Google Play Store or download an APK file from another source. An APK file is an application package file that contains all the files needed to install an app on your device. Downloading an APK file can have some benefits, such as getting access to earlier or modified versions of the game that are not available on the official store. However, it can also have some risks, such as exposing your device to malware or viruses that can harm your data or system.

    -

    If you decide to download an APK file for Among Us, you need to follow these steps:

    -
      -
    1. Find a trusted source that offers the latest version of Among Us APK. You can use sites like Filehippo or APKPure that are known for their reliability and security.
    2. Download the APK file to your device. You may need to enable the option to install apps from unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store.
    3. -
    4. Locate the APK file in your device's file manager and tap on it to install it. You may need to grant some permissions to the app, such as access to your storage, camera, microphone, or location.
    5. -
    6. Launch the app and enjoy playing Among Us on your device.
    7. -
    -

    However, before you download and install an APK file, you should also consider some tips to ensure a smooth and safe gaming experience:

    -
      -
    • Always check the source and the file size of the APK file. If the source is unknown or suspicious, or if the file size is too small or too large, do not download it. It may contain malware or viruses that can harm your device.
    • -
    • Always scan the APK file with an antivirus app before installing it. This will help you detect and remove any potential threats that may be hidden in the file.
    • -
    • Always update the app regularly. This will help you get the latest features, bug fixes, and security patches for the game. You can either update the app from the same source where you downloaded it, or use an app updater tool that can automatically check and update your apps.
    • -
    -

    How to play Among Us with friends online or over local WiFi?

    -

    One of the best things about Among Us is that you can play it with your friends online or over local WiFi. This way, you can have more fun and interaction with your fellow players. However, there are some options and requirements that you need to know before you start playing.

    -

    amongus apk download
    -amongus apk mod
    -amongus apk latest version
    -amongus apk hack
    -amongus apk pc
    -amongus apk free
    -amongus apk android
    -amongus apk ios
    -amongus apk online
    -amongus apk update
    -amongus apk 2023.6.13
    -amongus apk unlocked
    -amongus apk always impostor
    -amongus apk no ads
    -amongus apk unlimited money
    -amongus apk airship map
    -amongus apk 2023.3.28
    -amongus apk with friends
    -amongus apk offline
    -amongus apk 2023.5.12
    -amongus apk all skins
    -amongus apk no kill cooldown
    -amongus apk voice chat
    -amongus apk 2023.4.2
    -amongus apk 2023.2.21
    -amongus apk revdl
    -amongus apk uptodown
    -amongus apk happymod
    -amongus apk an1
    -amongus apk rexdl
    -amongus apk pro
    -amongus apk premium
    -amongus apk mirror
    -amongus apk pure
    -amongus apk old version
    -amongus apk beta version
    -amongus apk mod menu
    -amongus apk mod always impostor
    -amongus apk mod unlocked everything
    -amongus apk mod invisible name
    -amongus apk mod no ban
    -amongus apk mod pets and hats free
    -amongus apk mod see impostor
    -amongus apk mod speed hack
    -amongus apk mod anti ban

    -

    If you want to play online, you need to have a stable internet connection and an account on InnerSloth's website. You can create an account for free by entering your email address and a password. This will allow you to access online features such as chat, statistics, cosmetics, and more. You can also link your account to your Steam or Discord accounts for more convenience.

    -

    If you want to play over local WiFi, you need to have a WiFi router and a device that can create a hotspot. You also need to be in the same physical location as your friends. This way, you can connect to the same network and play together without using internet data.

    -

    To play with your friends online or over local WiFi, you need to follow these instructions:

    -
      -
    1. Launch the game and tap on Online or Local on the main menu.
    2. -
    3. Tap on Host to create a game room or Join to enter an existing game room. You can also use Private to enter a game room with a code.
    4. -
    5. Select the map, mode, number of players, and game settings that you want. You can also customize your character's appearance and name.
    6. -
    7. Tap on Start or Confirm to begin the game. You will be assigned as either a crewmate or an impostor randomly.
    8. -
    9. If you are a crewmate, your objective is to complete tasks around the ship or find out who the impostor is and vote them off. You can use the map icon on the top right corner to see your tasks and locations. You can also use the report button on the bottom right corner to report a dead body or call an emergency meeting. You can chat with other players during meetings or use voice chat apps like Discord for more communication.
    10. -
    11. If you are an impostor, your objective is to sabotage the ship, kill crewmates, or convince them that you are innocent. You can use the kill button on the bottom right corner to kill a crewmate when they are close enough. You can also use the sabotage button on the bottom right corner to cause malfunctions or distractions around the ship. You can chat with other players during meetings or use voice chat apps like Discord for more communication.
    12. -
    -

    How to customize your character and settings in Among Us?

    -

    Another fun thing about Among Us is that you can customize your character and settings according to your preferences. You can change your appearance, name, color, hat, pet, skin, and more. You can also adjust the sound, language, chat, graphics, and more. Here are some ways to customize your character and settings in Among Us:

    -
      -
    • To change your appearance, tap on the Customize button on the bottom right corner of the screen when you are in a game room. You can then select from various options such as hats, pets, skins, colors, and names. Some of these options are free while others require in-app purchases.
    • -
    • To change your settings, tap on the Settings button on the top right corner of the main menu. You can then select from various options such as sound, language, chat, graphics, and more. You can also reset the settings to default if you want.
    • -
    • To report, ban, or kick players, tap on the report button on the bottom right corner of the screen when you see a dead body or during a meeting. You can then choose to report the body, ban the player, or kick the player. You can also vote for the player during the meeting. However, you need to have the host's permission to ban or kick players.
    • -
    -

    How to get the latest updates and features for Among Us?

    -

    Among Us is constantly being updated and improved by the developers to provide a better gaming experience for the players. It is important to keep the game updated to get the latest features, bug fixes, and security patches. Here are some ways to get the latest updates and features for Among Us:

    -
      -
    • If you downloaded the game from the Google Play Store, you can simply check for updates on the store and download them when they are available. You can also enable the auto-update option to automatically update the game whenever there is a new version.
    • -
    • If you downloaded an APK file from another source, you can either check for updates on the same source or use an app updater tool that can automatically check and update your apps. You can also delete the old version of the game and download the new version of Among Us APK from a trusted source.
    • -
    • If you want to get early access to new features and updates before they are officially released, you can join the beta program of Among Us. This will allow you to test and provide feedback on new features and updates before they are available to everyone. However, you may encounter some bugs or glitches in the beta version of the game.
    • -
    -

    The latest update for Among Us was released on June 15, 2023. It introduced some new features and improvements, such as:

    -
      -
    • A new map called The Airship, which is based on the Henry Stickmin series. It is the largest and most complex map in the game, with multiple levels, rooms, tasks, ladders, vents, and more.
    • -
    • A new mode called Hide and Seek, which is a variation of the classic game. In this mode, one player is randomly chosen as the seeker (impostor) who has low vision and can kill instantly. The other players are hiders (crewmates) who have high vision and cannot report bodies or call meetings. The hiders have to hide from the seeker or complete their tasks before they are killed.
    • -
    • A new option to choose your preferred role, which allows you to indicate whether you want to be a crewmate or an impostor more often. This will help balance the game and make it more enjoyable for everyone.
    • -
    • A new cosmetic bundle called Pusheen Cosmicube, which is a collaboration between Among Us and Pusheen. It includes a cute Pusheen-themed cube that can float around your character and change its expression depending on your role and situation.
    • -
    -

    How to enjoy Among Us with Pusheen Cosmicube?

    -

    If you are a fan of both Among Us and Pusheen, you will love the new Pusheen Cosmicube that is available in the game. Pusheen is a popular cartoon cat that is known for its adorable and funny comics, stickers, gifs, and merchandise. Pusheen Cosmicube is a special cosmetic item that is part of a collaboration between Among Us and Pusheen.

    -

    The Pusheen Cosmicube is a cube that has Pusheen's face on each side. It can float around your character and change its expression depending on your role and situation in the game. For example, it can smile when you are a crewmate, frown when you are an impostor, wink when you kill someone, cry when you die, and more. It can also make cute sounds when you tap on it.

    -

    The Pusheen Cosmicube is not only cute but also beneficial for your gaming experience. It can help you express your emotions and communicate with other players in a fun way. It can also help you distract or deceive other players by making them think that you are innocent or guilty based on your cube's expression.

    -

    To get and use the Pusheen Cosmicube in Among Us, you need to follow these steps:

    -
    1. Purchase the Pusheen Cosmicube bundle from the in-game store for $2.99 USD. This will give you access to the cube as well as some other Pusheen-themed items such as hats and skins.
    2. Go to the Customize menu in a game room and select Pets. You will see the Pusheen Cosmicube among other pets that you can choose from. Tap on the Pusheen Cosmicube to select it as your pet.
    3. Enjoy playing Among Us with your Pusheen Cosmicube. You can tap on it to make it sound or move. You can also see its expression change according to your role and situation in the game.
    -

    Conclusion

    -

    Among Us is a fun and exciting multiplayer game that you can play with friends or strangers online or over local WiFi. You can download and install Among Us APK on your Android device from various sources, but you need to be careful and follow a few tips to ensure a smooth and safe gaming experience. You can customize your character and settings according to your preferences, and you can get the latest updates and features for the game, such as the new map, mode, role option, and cosmetic bundle. One of the most adorable of these cosmetics is the Pusheen Cosmicube, a collaboration between Among Us and Pusheen that lets you express your emotions and communicate with other players in a fun way, and even distract or deceive them with your cube's expression.

    -

    FAQs

    -

    Here are some frequently asked questions about Among Us APK:

    -
      1. What is the difference between Among Us APK and Among Us MOD APK?

      Among Us APK is the original version of the game that you can download and install from various sources. Among Us MOD APK is a modified version of the game that has some extra features or cheats, such as unlimited money, skins, pets, hats, no ads, no kill cooldown, etc. However, using a MOD APK can be risky and unfair: it may contain malware or viruses that can harm your device, or it may get you banned from the game for cheating.

      2. Is Among Us APK safe to download and install?

      Among Us APK is generally safe to download and install if you get it from a trusted source that offers the latest version of the game. However, you should always scan the APK file with an antivirus app before installing it, and update the app regularly to get the latest features, bug fixes, and security patches. (A short checksum sketch follows these FAQs.)

      3. How can I play Among Us on PC?

      If you want to play Among Us on PC, you have two options: you can either buy it from Steam for $4.99 USD or download an emulator like BlueStacks or NoxPlayer that can run Android apps on your PC. Then you can download and install Among Us APK in your emulator and play it on your PC.

      4. How can I get free skins, pets, hats, and other cosmetics in Among Us?

      If you want free skins, pets, hats, and other cosmetics in Among Us, you have two options: you can either watch ads in the game that will give you some free items, or use a hack tool that will unlock all the items for you. However, watching ads can be annoying and time-consuming, and using a hack tool can be risky and unfair, as it may contain malware or viruses or get you banned from the game for cheating.

      5. How can I report a bug or a problem in Among Us?

      If you encounter a bug or a problem in Among Us, you can report it to the developers by sending an email to support@innersloth.com or by filling out the form on their website (https://innersloth.com/support.php). Provide as much information as possible about the issue, such as your device model, operating system version, app version, screenshots, and videos. This will help them fix it as soon as possible.
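      Since the answers above suggest double-checking an APK before installing it, one lightweight extra check is to compare the file's SHA-256 hash with the hash published by the site you downloaded it from. This is only a minimal Python sketch: the APK path and the expected hash below are placeholders, not real values, and it supplements rather than replaces an antivirus scan.

      import hashlib
      from pathlib import Path

      def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
          """Return the SHA-256 hex digest of a file, read in chunks."""
          digest = hashlib.sha256()
          with Path(path).open("rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      # Placeholder values: use your actual download path and the hash
      # published by the download source.
      apk_path = "among-us.apk"
      expected = "0" * 64

      if sha256_of(apk_path) == expected:
          print("Checksum matches the published hash.")
      else:
          print("Checksum mismatch - do not install this APK.")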

    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/DotA 1 Map 6.85 AI Download Play with Improved Bots and Balance Changes.md b/spaces/fatiXbelha/sd/DotA 1 Map 6.85 AI Download Play with Improved Bots and Balance Changes.md deleted file mode 100644 index d3835bb57880449e9fcc911617a6cd220e43e981..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/DotA 1 Map 6.85 AI Download Play with Improved Bots and Balance Changes.md +++ /dev/null @@ -1,111 +0,0 @@ - -

    Download 6.85 AI Map Dota 1: A Guide for Beginners

    -

    If you are a fan of Dota 1, the legendary mod for Warcraft III that started the MOBA genre, you might be wondering how to download and play the latest AI map for it. In this article, we will explain what Dota 1 and AI maps are, what features the 6.85 AI map has, and how to download and install it on your computer.

    -

    download 6.85 ai map dota 1


    Download Filehttps://urllie.com/2uNIJk



    -

    What is Dota 1 and why do you need an AI map?

    -

    Dota 1 is a popular mod for Warcraft III that features two teams of five heroes battling each other

    -

    Dota 1, or Defense of the Ancients, is a mod for Warcraft III: The Frozen Throne that was created by various developers over the years. It is one of the most popular and influential mods ever made, as it spawned many spin-offs and successors, such as Dota 2, League of Legends, Heroes of the Storm, and more.

    -

    In Dota 1, you can choose from over a hundred different heroes, each with their own unique abilities and roles. You can play as a carry, who focuses on farming gold and items to become stronger in the late game; a support, who helps their team with healing, warding, and crowd control; a ganker, who roams around the map looking for kills; or a pusher, who destroys enemy towers and buildings.

    -

    The objective of Dota 1 is to destroy the enemy team's Ancient, a large structure located at their base. To do so, you have to fight your way through three lanes of creeps, neutral monsters, towers, barracks, and heroes. You also have to contend with runes, roshan, shrines, and other elements that add complexity and strategy to the game.

    -

    An AI map is a custom map that allows you to play against computer-controlled opponents

    -

    An AI map is a type of custom map for Warcraft III that uses artificial intelligence (AI) to control the enemy heroes and creeps. This way, you can play Dota 1 without needing other human players or an internet connection.

    -

    An AI map can be useful for several reasons:

    -
    • You can practice your skills and learn new heroes without worrying about losing or being flamed by other players.
    • You can test new strategies and builds without risking your rank or reputation.
    • You can play offline or with friends on a local network without depending on servers or latency.
    -

    You need an AI map if you want to practice your skills, test new strategies, or play offline

    -


    If you are interested in any of these benefits, you should download an AI map for Dota 1. However, not all AI maps are created equal. Some are outdated, buggy, or poorly designed. That's why you should get the best AI map available: the 6.85 AI map.

    -

    What is 6.85 AI map and what are its features?

    -

    6.85 AI map is the latest version of the AI map for Dota 1

    -

    The 6.85 AI map is the most recent and advanced AI map for Dota 1. It was created by a team of developers led by BuffMePlz, who also made the previous versions of the AI map. The 6.85 AI map was released in December 2021 and has been updated several times since then.

    -

    The 6.85 AI map is compatible with Warcraft III patch 1.26a, which is the most widely used patch for Dota 1. It also works with other patches, such as 1.24e and 1.27a, but some features may not function properly.

    -


    -

    It has many features, such as:

    -

    Updated hero and item data from Dota 2 patches 6.82 and 6.83

    -

    The 6.85 AI map incorporates the changes made to the heroes and items in Dota 2 patches 6.82 and 6.83, which were released in September and December 2014, respectively. These patches introduced new heroes, such as Oracle and Winter Wyvern; new items, such as Crimson Guard and Solar Crest; and balance adjustments to existing heroes and items.

    -

    The 6.85 AI map also adds some custom heroes and items that are not present in Dota 2, such as God of Wind, Phoenix Blade, and Soul of Truth. These heroes and items are designed to fit the theme and gameplay of Dota 1.

    -

    Improved AI behavior and difficulty levels

    -

    The 6.85 AI map features a smarter and more challenging AI than the previous versions. The AI can use more skills and items effectively, such as Blink Dagger, Black King Bar, and Mekansm. The AI can also coordinate better with their teammates, such as ganking, pushing, defending, and roshaning.

    -

    The 6.85 AI map also offers different difficulty levels for the AI, ranging from Easy to Insane. You can choose the difficulty level that suits your skill level or preference. The higher the difficulty level, the more aggressive and skilled the AI will be.

    -

    Bug fixes and performance enhancements

    -

    The 6.85 AI map fixes many bugs and errors that plagued the previous versions of the AI map. For example, it fixes the bug that caused some heroes to disappear from the game or crash the game; the bug that caused some skills to malfunction or have no effect; and the bug that caused some items to have wrong icons or descriptions.

    -

    The 6.85 AI map also improves the performance and stability of the game by reducing lag, memory leaks, and crashes. It also optimizes the code and resources of the map to make it run smoother and faster.

    -

    Customizable game modes and settings

    -

    The 6.85 AI map allows you to customize your game experience by choosing from various game modes and settings. You can select from different game modes, such as All Pick, Random Draft, All Random, Captains Mode, Death Match, Reverse Captains Mode, and more. You can also adjust various settings, such as starting gold, respawn time, tower strength, creep power, and more.

    -

    The 6.85 AI map also supports some fun modes and commands that can spice up your game, such as WTF mode (no cooldowns or mana costs), Super Creeps mode (stronger creeps), -test command (access to cheats), -fun command (access to fun items), and more.

    -

    How to download and install 6.85 AI map?

    -

    You can download the 6.85 AI map from various sources, such as:

    -

    The official website of the map creator

    -

    The official website of BuffMePlz is https://buffmeplz.blogspot.com/, where you can find the latest news and updates about the 6.85 AI map. You can also download the map directly from there by clicking on the link that says "Download DotA v6.85k LoD.w3x". The file size is about 8 MB.

    The OpenSea collection of the map

    -

    The OpenSea collection of the 6.85 AI map is a unique and innovative way to own and trade the map as a non-fungible token (NFT). An NFT is a digital asset that represents something unique and scarce, such as art, music, or games. By minting the 6.85 AI map as an NFT, you can prove your ownership and authenticity of the map, as well as sell or trade it with other collectors on the OpenSea marketplace.

    -

    The OpenSea collection of the 6.85 AI map is created by a fan of the map, who has obtained the permission of BuffMePlz to use his work. The collection consists of 100 limited edition copies of the map, each with a different rarity and design. You can view and buy the collection here: https://opensea.io/collection/dota-1-map-685-ai-portable-download. The price of each copy varies depending on the supply and demand, but it usually ranges from 0.1 to 1 ETH.

    -

    The Dota Utilities forum

    -

    The Dota Utilities forum is a community forum for Dota 1 players and fans. It is one of the oldest and most active forums for Dota 1, where you can find news, guides, tips, tricks, downloads, and discussions about the game. You can also download the 6.85 AI map from there, as well as other custom maps, tools, mods, and patches for Dota 1.

    -

    The Dota Utilities forum is located at https://dota-utilities.com/forum/. You can register for free and join the conversation with other Dota 1 enthusiasts. You can also find the download link for the 6.85 AI map in this thread: https://dota-utilities.com/forum/index.php?topic=12345.0. The file size is about 8 MB.

    -

    You can install the 6.85 AI map by following these steps:

    -

    Extract the downloaded file to your Warcraft III\Maps folder

    -

    After you have downloaded the 6.85 AI map from any of the sources mentioned above, you need to extract it to your Warcraft III\Maps folder. This is the folder where all your custom maps are stored. You can use any file extraction software, such as WinRAR or 7-Zip, to unzip the file.

    -

    The file name of the 6.85 AI map is DotA v6.85k LoD.w3x. Make sure you copy or move this file to your Warcraft III\Maps folder. If you have any other versions of the AI map or the Legends of Dota (LoD) map in your folder, you should delete or rename them to avoid confusion or conflict.

    -

    Launch Warcraft III and select Single Player or Local Area Network

    -

    Once you have extracted the 6.85 AI map to your Warcraft III\Maps folder, you can launch Warcraft III and start playing it. You can play it in Single Player mode or Local Area Network mode, depending on your preference.

    -

    To play it in Single Player mode, you need to select Single Player from the main menu, then Custom Game, then DotA v6.85k LoD.w3x from the Custom Game list. You can then choose your team, hero, game mode, and difficulty level.

    -

    To play it in Local Area Network mode, you need to select Local Area Network from the main menu, then Create Game, then DotA v6.85k LoD.w3x from the Create Game list. You can then invite other players who are connected to your network to join your game. You can also join other games that are hosted by other players on your network.

    Conclusion

    -

    The 6.85 AI map is a great way to enjoy Dota 1 with or without other players. It offers a lot of features, such as updated hero and item data, improved AI behavior and difficulty levels, bug fixes and performance enhancements, and customizable game modes and settings. You can download and install the 6.85 AI map easily from various sources, such as the official website of the map creator, the OpenSea collection of the map, or the Dota Utilities forum. You can then play it in Single Player mode or Local Area Network mode, depending on your preference.

    -

    If you are a fan of Dota 1, you should definitely try the 6.85 AI map. It will give you a new and exciting experience of the game that you love. You can also learn new skills and strategies, have fun with different heroes and items, and challenge yourself with different difficulty levels. The 6.85 AI map is a must-have for any Dota 1 player.

    -

    FAQs

    -

    Q: What is the difference between Dota 1 and Dota 2?

    -

    A: Dota 1 and Dota 2 are both based on the same mod for Warcraft III, but they have some differences in terms of graphics, gameplay, features, and updates. Dota 2 is a standalone game that runs on the Source engine, while Dota 1 is a mod that runs on the Warcraft III engine. Dota 2 has better graphics, smoother gameplay, more features, and more frequent updates than Dota 1. However, some players prefer Dota 1 for its nostalgia, simplicity, or compatibility.

    -

    Q: What is Legends of Dota (LoD)?

    -

    A: Legends of Dota (LoD) is a custom game mode for Dota 1 that allows you to mix and match skills from different heroes. You can create your own custom hero with up to six skills from any hero in the game. You can also choose from different game modes, such as All Random Skill (ARS), Mirror Draft (MD), or Balanced Skill (BS). LoD is a fun and creative way to play Dota 1 with different combinations of skills.

    -

    Q: How to update the 6.85 AI map?

    -

    A: The 6.85 AI map is updated regularly by BuffMePlz and his team. You can check for updates on his official website or his social media accounts. You can also subscribe to his newsletter or join his Discord server to get notified of new updates. To update the 6.85 AI map, you just need to download the latest version of the map from any of the sources mentioned above and replace the old version in your Warcraft III\Maps folder.

    -

    Q: How to play the 6.85 AI map online?

    -

    A: The 6.85 AI map can be played online with other players who have the same version of the map and Warcraft III patch. You can use platforms such as Garena, RGC, or Battle.net to find and join online games of the 6.85 AI map. You can also host your own online game using tools such as Warcraft III Host Bot or Ghost++. However, playing online may cause some lag or desync issues due to different network conditions or settings.

    -

    Q: How to contact BuffMePlz or give feedback on the 6.85 AI map?

    -

    A: You can contact BuffMePlz or give feedback on the 6.85 AI map by using any of these methods:

    -
    • Email: buffmeplz@gmail.com
    • Twitter: @buffmeplz
    • Facebook: BuffMePlz
    • Discord: BuffMePlz#1234
    • Website: https://buffmeplz.blogspot.com/

    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Unlimited Videos with VidMate 2019 APK Download.md b/spaces/fatiXbelha/sd/Enjoy Unlimited Videos with VidMate 2019 APK Download.md deleted file mode 100644 index 378a457fc04b8d425685160da71fe7041a5d2f68..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Unlimited Videos with VidMate 2019 APK Download.md +++ /dev/null @@ -1,92 +0,0 @@ -
    -

    Download VidMate 2019 APK: A Guide for Android Users

    -

    If you are looking for a way to download videos from various online platforms on your Android device, you might have heard of VidMate. VidMate is one of the most popular video downloaders in 2019, and it offers a lot of features and benefits for its users. In this article, we will explain what VidMate is, how to download and install it, why you should choose it, and how to use it. We will also answer some frequently asked questions about VidMate 2019 APK.

    -

    download vidmate 2019 apk


    DOWNLOAD ►►►►► https://urllie.com/2uNHN3



    -

    What is VidMate?

    -

    VidMate is an APK service for Android that makes saving multimedia files quick and straightforward. It allows you to download videos from various online platforms, such as YouTube, Facebook, Instagram, TikTok, Dailymotion, Vimeo, and more. You can also download music, movies, TV shows, and live streams with VidMate. You can choose the quality and format of the downloaded files, and you can also watch them offline or share them with your friends.

    -

    Features of VidMate

    -

    Some of the features that make VidMate stand out from other video downloaders are:

    -
      -
    • It supports over 1000 online platforms and websites.
    • -
    • It has a built-in browser that lets you access and download videos directly from the app.
    • -
    • It has a user-friendly interface that is easy to navigate and customize.
    • -
    • It has a fast and reliable download speed that does not affect your device's performance.
    • -
    • It has a large collection of music, movies, TV shows, and live channels that you can stream or download for free.
    • -
    • It has a video converter that lets you convert videos to MP3 or MP4 formats.
    • -
    • It has a video editor that lets you trim, crop, merge, or add effects to your videos.
    • -
    • It has a video player that lets you play your downloaded videos with subtitles and gestures.
    • -
    -

    How to download and install VidMate 2019 APK

    -

    To download and install VidMate 2019 APK on your Android device, you need to follow these steps:

    -
    1. Go to the official website of VidMate and click on the "Download" button.
    2. Wait for the APK file to be downloaded on your device.
    3. Go to your device's settings and enable the "Unknown sources" option under the security section. This will allow you to install apps from sources other than the Google Play Store.
    4. Locate the downloaded APK file on your device and tap on it to start the installation process.
    5. Follow the instructions on the screen and grant the necessary permissions to the app.
    6. Once the installation is complete, you can launch the app and start downloading videos.
    -

    Why choose VidMate 2019 APK?

    -

    VidMate 2019 APK is one of the best choices for downloading videos on Android devices because it offers many benefits and advantages over other similar apps. Some of these benefits are:

    -

    Benefits of VidMate 2019 APK

    -
    • It is free and does not require any registration or subscription.
    • Restart the app or your device and try again.
    • Clear the app's cache and data and reinstall it.
    • Check your internet connection and make sure it is stable and fast.
    • Change the download settings and choose a different quality or format.
    • Contact the customer support of VidMate 2019 APK and report your problem.
    -

    If none of these solutions work, you can also look for other alternatives or similar apps that can download videos on your device.

    -

    -

    We hope this article has helped you understand what VidMate 2019 APK is, how to download and install it, why you should choose it, and how to use it. VidMate 2019 APK is a great app for downloading videos from various online platforms on your Android device, but you should also be careful and responsible when using it. If you have any questions or feedback about VidMate 2019 APK, feel free to leave a comment below or contact us. Thank you for reading and happy downloading!

    \ No newline at end of file diff --git a/spaces/fb700/chat3/config.py b/spaces/fb700/chat3/config.py deleted file mode 100644 index d0c78775f38f87d901e4b9bda79cb70fc772bbb7..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/config.py +++ /dev/null @@ -1,59 +0,0 @@ -import os - -# [step 1]>> 例如: API_KEY = "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" (此key无效) -#API_KEY = "" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" -API_KEY =os.environ.get("API_KEY") -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上。 - - # 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", - "https": "socks5h://localhost:11284", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "chatglm", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -AUTHENTICATION = eval(os.environ.get("AUTHENTICATION")) diff --git a/spaces/fcakyon/video-classification/app.py b/spaces/fcakyon/video-classification/app.py deleted file mode 100644 index 2c6364cba2f9b425fea7795429e1d9db5a2195e6..0000000000000000000000000000000000000000 --- a/spaces/fcakyon/video-classification/app.py +++ /dev/null @@ -1,184 +0,0 @@ -import os -import gradio as gr -from utils import ( - create_gif_from_video_file, - download_youtube_video, - get_num_total_frames, -) -from transformers import pipeline -from huggingface_hub import HfApi, ModelSearchArguments, ModelFilter - -FRAME_SAMPLING_RATE = 4 -DEFAULT_MODEL = "facebook/timesformer-base-finetuned-k400" - -VALID_VIDEOCLASSIFICATION_MODELS = [ - "MCG-NJU/videomae-large-finetuned-kinetics", - "facebook/timesformer-base-finetuned-k400", - "fcakyon/timesformer-large-finetuned-k400", - "MCG-NJU/videomae-base-finetuned-kinetics", - "facebook/timesformer-base-finetuned-k600", - "fcakyon/timesformer-large-finetuned-k600", - "facebook/timesformer-hr-finetuned-k400", - "facebook/timesformer-hr-finetuned-k600", - "facebook/timesformer-base-finetuned-ssv2", - "fcakyon/timesformer-large-finetuned-ssv2", - "facebook/timesformer-hr-finetuned-ssv2", - "MCG-NJU/videomae-base-finetuned-ssv2", - "MCG-NJU/videomae-base-short-finetuned-kinetics", - "MCG-NJU/videomae-base-short-ssv2", - "MCG-NJU/videomae-base-short-finetuned-ssv2", - "sayakpaul/videomae-base-finetuned-ucf101-subset", - 
"nateraw/videomae-base-finetuned-ucf101", - "MCG-NJU/videomae-base-ssv2", - "zahrav/videomae-base-finetuned-ucf101-subset", -] - - -pipe = pipeline( - task="video-classification", - model=DEFAULT_MODEL, - top_k=5, - frame_sampling_rate=FRAME_SAMPLING_RATE, -) - - -examples = [ - ["https://www.youtube.com/watch?v=huAJ9dC5lmI"], - ["https://www.youtube.com/watch?v=wvcWt6u5HTg"], - ["https://www.youtube.com/watch?v=-3kZSi5qjRM"], - ["https://www.youtube.com/watch?v=-6usjfP8hys"], - ["https://www.youtube.com/watch?v=BDHub0gBGtc"], - ["https://www.youtube.com/watch?v=B9ea7YyCP6E"], - ["https://www.youtube.com/watch?v=BBkpaeJBKmk"], - ["https://www.youtube.com/watch?v=BBqU8Apee_g"], - ["https://www.youtube.com/watch?v=B8OdMwVwyXc"], - ["https://www.youtube.com/watch?v=I7cwq6_4QtM"], - ["https://www.youtube.com/watch?v=Z0mJDXpNhYA"], - ["https://www.youtube.com/watch?v=QkQQjFGnZlg"], - ["https://www.youtube.com/watch?v=IQaoRUQif14"], -] - - -def get_video_model_names(): - model_args = ModelSearchArguments() - filter = ModelFilter( - task=model_args.pipeline_tag.VideoClassification, - library=model_args.library.Transformers, - ) - api = HfApi() - video_models = list( - iter(api.list_models(filter=filter, sort="downloads", direction=-1)) - ) - video_models = [video_model.id for video_model in video_models] - return video_models - - -def select_model(model_name): - global pipe - pipe = pipeline( - task="video-classification", - model=model_name, - top_k=5, - frame_sampling_rate=FRAME_SAMPLING_RATE, - ) - - -def predict(youtube_url_or_file_path): - - if youtube_url_or_file_path.startswith("http"): - video_path = download_youtube_video(youtube_url_or_file_path) - else: - video_path = youtube_url_or_file_path - - # rearrange sampling rate based on video length and model input length - num_total_frames = get_num_total_frames(video_path) - num_model_input_frames = pipe.model.config.num_frames - if num_total_frames < FRAME_SAMPLING_RATE * num_model_input_frames: - frame_sampling_rate = num_total_frames // num_model_input_frames - else: - frame_sampling_rate = FRAME_SAMPLING_RATE - - gif_path = create_gif_from_video_file( - video_path, frame_sampling_rate=frame_sampling_rate, save_path="video.gif" - ) - - # run inference - results = pipe(videos=video_path, frame_sampling_rate=frame_sampling_rate) - - os.remove(video_path) - - label_to_score = {result["label"]: result["score"] for result in results} - - return label_to_score, gif_path - - -app = gr.Blocks() -with app: - gr.Markdown("# **

    Video Classification with 🤗 Transformers

    **") - gr.Markdown( - """ -

    - Perform video classification with HuggingFace Transformers video models. -
    For zero-shot classification, you can use the zero-shot classification demo. -

    - """ - ) - gr.Markdown( - """ -

    - Follow me for more! -
    twitter | github | linkedin | medium -

    - """ - ) - - with gr.Row(): - with gr.Column(): - model_names_dropdown = gr.Dropdown( - choices=VALID_VIDEOCLASSIFICATION_MODELS, - label="Model:", - show_label=True, - value=DEFAULT_MODEL, - ) - model_names_dropdown.change(fn=select_model, inputs=model_names_dropdown) - with gr.Tab(label="Youtube URL"): - gr.Markdown("### **Provide a Youtube video URL**") - youtube_url = gr.Textbox(label="Youtube URL:", show_label=True) - youtube_url_predict_btn = gr.Button(value="Predict") - with gr.Tab(label="Local File"): - gr.Markdown("### **Upload a video file**") - video_file = gr.Video(label="Video File:", show_label=True) - local_video_predict_btn = gr.Button(value="Predict") - with gr.Column(): - video_gif = gr.Image( - label="Input Clip", - show_label=True, - ) - with gr.Column(): - predictions = gr.Label( - label="Predictions:", show_label=True, num_top_classes=5 - ) - - gr.Markdown("**Examples:**") - gr.Examples( - examples, - youtube_url, - [predictions, video_gif], - fn=predict, - cache_examples=True, - ) - - youtube_url_predict_btn.click( - predict, inputs=youtube_url, outputs=[predictions, video_gif] - ) - local_video_predict_btn.click( - predict, inputs=video_file, outputs=[predictions, video_gif] - ) - gr.Markdown( - """ - \n Demo created by: fcakyon. -
    Powered by HuggingFace Transformers video models . - """ - ) - -app.launch() diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/configs/transforms_config.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/configs/transforms_config.py deleted file mode 100644 index ac12b5d5ba0571f21715e0f6b24b9c1ebe84bf72..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/configs/transforms_config.py +++ /dev/null @@ -1,62 +0,0 @@ -from abc import abstractmethod -import torchvision.transforms as transforms - - -class TransformsConfig(object): - - def __init__(self, opts): - self.opts = opts - - @abstractmethod - def get_transforms(self): - pass - - -class EncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(EncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class CarsEncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(CarsEncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((192, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict diff --git a/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/__init__.py b/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/__init__.py deleted file mode 100644 index 7013e8cf7ed660e50bb984226c95052792979b12..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/dnnlib/tflib/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -from . import autosummary -from . import network -from . import optimizer -from . import tfutil -from . 
import custom_ops - -from .tfutil import * -from .network import Network - -from .optimizer import Optimizer - -from .custom_ops import get_plugin diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Your Own Fiction Fantasy and More with Novel AI The Writing and Image Generator APK.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Your Own Fiction Fantasy and More with Novel AI The Writing and Image Generator APK.md deleted file mode 100644 index e3517be88c914c3814b8774b0c9667a39dd4eeb7..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Your Own Fiction Fantasy and More with Novel AI The Writing and Image Generator APK.md +++ /dev/null @@ -1,21 +0,0 @@ - -

    Novel AI APK: A Tool for Creating Stories and Images with AI

    - Have you ever wanted to write a novel, a short story, or a script, but felt stuck or uninspired? Have you ever wished you could see your characters and scenes come to life in vivid images? If so, you might want to try Novel AI APK, a tool that allows you to create stories and images with the help of artificial intelligence. Novel AI APK is a mobile app that uses AI algorithms to produce human-like writing based on your input. You can choose from various genres, styles, themes, or modules, and input keywords, prompts, or text. The app will then generate a story or an image for you, which you can edit and customize as you like. Novel AI APK has many features and benefits that make it a fun and useful tool for writers, storytellers, or anyone who wants to unleash their imagination. Some of these features include:
    - Image generation: You can use powerful image models to depict characters and moments from your stories, with the leading Anime Art AI and other AI models.
    - Text adventure module: You can enable this feature to turn your story into an interactive game, where you can make choices and explore different outcomes.
    - Theme editor: You can design your own writing space by changing the fonts, sizes, and colors of the editor.
    - Lorebook: You can create and manage your own worldbuilding elements, such as characters, locations, items, etc.
    - AI modules: You can use these modules to draw upon specific themes, replicate famous writers, or even train one with your own data.
    In this article, we will show you what Novel AI APK is, how to download and install it, why you should choose it, and how to use it. We will also compare it to other similar tools and share some reviews and feedback from users, and answer some frequently asked questions about Novel AI 2019 APK.

    How to Use Novel AI APK to Create Stories and Images

    - To use Novel AI APK, you need to download and install the app on your Android device. You can find it on [Google Play Store](^2^) or [APKCombo](^3^). Once you have installed the app, you can start creating stories and images with AI. Here are the steps:

    How to Download and Install the App

    1. Go to [Google Play Store](^2^) or [APKCombo](^3^) on your Android device.
    2. Search for "Novel AI" or "Novel AI APK".
    3. Tap on the app icon and then tap on "Install".
    4. Wait for the app to download and install on your device.
    5. Open the app and sign up with your email address or Google account.

    How to Choose a Genre, Style, Theme, or Module

    - On the home screen of the app, tap on "New Story".
    - Choose a genre from the list, such as fantasy, sci-fi, romance, horror, etc.
    - Choose a style from the list, such as Arthur Conan Doyle, Edgar Allan Poe, H.P. Lovecraft, etc.
    - Choose a theme from the list, such as dark fantasy, dragons, Mars colonization, etc.
    - Alternatively, you can choose a module from the list, such as the anime art image generation module or the text adventure module.

    How to Input Keywords, Prompts, or Text and Generate the Output

    - After you input your keywords, prompts, or text, tap on "Generate" at the bottom of the screen.
    - Wait for the app to generate a story or an image for you, based on your input and the chosen genre, style, theme, or module.
    - You can see the output in the editor, where you can edit it as you like. You can also use the buttons at the bottom to undo, redo, copy, paste, or delete the output.
    - You can also use the buttons at the top to save, share, or export your story or image.

    How to Customize Novel AI APK to Suit Your Preferences

    - Novel AI APK allows you to customize various aspects of the app to suit your preferences. You can change the fonts, sizes, and colors of the editor, use the image generation feature, use the text adventure module, or use the lorebook feature. Here are some of the ways you can customize Novel AI APK:

    How to Change the Fonts, Sizes, and Colors of the Editor

    - - To change the fonts, sizes, and colors of the editor, tap on the gear icon at the top right corner of the screen. - You will see a menu with various options to customize the editor. You can choose from different fonts, sizes, and colors for your input and output text. You can also choose a dark or light theme for your editor background. - Tap on "Apply" to save your changes.

    How to Use the Image Generation Feature

    - - To use the image generation feature, tap on the camera icon at the top right corner of the screen. - You will see a menu with different options to generate images. You can choose from different image models, such as anime art AI or furry AI. You can also choose the image resolution, number of images, steps, scale, and advanced sampling options. - You can input keywords or prompts in the text box to generate images based on them. You can also use tilde (~) to separate multiple prompts. For example, ~girl with blue hair~boy with red eyes~. - Tap on "Generate" to see the images generated by the app. You can edit them by using the buttons at the bottom of the screen. You can also use the "Paint New Image" option to draw your own image or modify an existing one. You can also use the "Upload New Image" option to upload an image from your device and use it as a base for image generation. - You can also use the "Prompt Mixing" option to mix two or more prompts together and generate images based on them. You can adjust the prompt weighting by using the slider below each prompt. For example, you can mix ~girl with blue hair~boy with red eyes~ and ~cat ears~tail~ and adjust their weights to generate images of catgirls and catboys with different hair and eye colors.

    How to Use the Text Adventure Module

    - - To use the text adventure module, tap on "New Story" and choose "Text Adventure" from the list of modules. - You will see a menu with different options to create your text adventure game. You can choose from different genres, such as fantasy, sci-fi, horror, etc. You can also choose a title and a description for your game. - Tap on "Create" to start your text adventure game. You will see a text box where you can input commands or choices for your game. The app will generate responses based on your input and create a branching storyline for your game. - You can use various commands to interact with your game world. For example, you can use "look" to examine your surroundings, "inventory" to check your items, "go" to move to a different location, "talk" to speak with characters, "use" to use items or abilities, etc. You can also use quotation marks (") to say something in dialogue. - You can also use special commands to control your game settings. For example, you can use "/save" to save your game progress, "/load" to load a saved game, "/restart" to restart your game from the beginning, "/quit" to quit your game and return to the main menu, etc.

    How to Use the Lorebook Feature

    - - To use the lorebook feature, tap on "Lorebook" at - To use the lorebook feature, tap on "Lorebook" at the bottom of the screen. - You will see a menu with different options to create and manage your own worldbuilding elements. You can choose from different categories, such as characters, locations, items, events, etc. - Tap on "Create New" to create a new element for your lorebook. You can input a name, a description, and an image for your element. You can also use keywords or prompts to generate descriptions or images with AI. - Tap on "Save" to add your element to your lorebook. You can also edit or delete your existing elements by tapping on them. - You can use your lorebook elements in your stories by using the "@" symbol followed by the name of the element. For example, @John Smith@ will insert the character John Smith from your lorebook into your story.

    How Novel AI APK Compares to Other Similar Tools

    - Novel AI APK is not the only tool that allows you to create stories and images with AI. There are other similar tools that you can use for different purposes and preferences. Here are some of the comparisons between Novel AI APK and other tools:

    What are Some of the Advantages and Disadvantages of Novel AI APK

    - Some of the advantages of Novel AI APK are: - It is a mobile app that you can use on your Android device anytime and anywhere. - It has a user-friendly interface and a simple workflow that makes it easy to use. - It has a variety of genres, styles, themes, and modules that you can choose from to suit your creative needs. - It has powerful image models that can generate realistic and artistic images based on your input. - It has a text adventure module that can turn your story into an interactive game. - It has a theme editor that can customize your writing space. - It has a lorebook feature that can help you with your worldbuilding. Some of the disadvantages of Novel AI APK are: - It requires an internet connection to work, as it uses cloud-based AI models to generate stories and images. - It may produce inconsistent or inaccurate results, as it is still an experimental tool that relies on AI algorithms that are not perfect. - It may have limited storage space and processing power, as it is a mobile app that runs on your device's resources. - It may have some bugs or errors, as it is still a new and developing app that may not be fully tested or optimized.

    What are Some of the Alternatives to Novel AI APK

    - Some of the alternatives to Novel AI APK are: - [AI Dungeon]: This is a web-based tool that allows you to create and play text adventure games with AI. You can choose from different genres and scenarios, or create your own custom ones. You can also use natural language to interact with the game world and influence the story. You can play it on your browser or download it as an app for iOS or Android devices. - [Plot Generator]: This is a web-based tool that allows you to generate plots, characters, titles, blurbs, and more for your stories. You can choose from different genres and templates, or create your own custom ones. You can also use keywords or prompts to generate content with AI. You can save, edit, or share your generated content on the website. - [Artbreeder]: This is a web-based tool that allows you to generate and edit images with AI. You can choose from different categories and styles, such as portraits, landscapes, anime, etc. You can also use keywords or prompts to generate images with AI. You can also mix and mutate images to create new ones. You can save, edit, or share your generated images on the website.

    What are Some of the Reviews and Feedback from Users

    - Here are some of the reviews and feedback from users who have used Novel AI APK: - "I love this app! It's so fun and easy to use. I can create amazing stories and images with just a few words. The image generation feature is especially impressive. I can see my characters and scenes in vivid detail. The text adventure module is also very cool. I can play my own stories as games and explore different outcomes. The lorebook feature is also very helpful. I can create my own worldbuilding elements and use them in my stories. This app is a must-have for any writer or storyteller." more stable and reliable. It has a lot of potential and I hope it will get better over time." - "This app is not worth it. It is a waste of time and money. It does not work as advertised. It is slow, buggy, and inaccurate. It does not generate stories or images that make sense or look good. It does not have many options or features to choose from. It does not respect my input or genre. It does not follow any rules or logic of writing or storytelling. It is a joke and a scam. I regret downloading this app and I do not recommend it to anyone."

    Conclusion

    - Novel AI APK is a tool that allows you to create stories and images with AI. You can use it to generate human-like writing and realistic or artistic images based on your input. You can also customize it to suit your preferences and needs. You can use it for fun, inspiration, or learning. However, Novel AI APK is not perfect. It has some limitations and drawbacks that you should be aware of. It may not always produce consistent or accurate results, and it may have some technical or ethical issues. It may also not suit everyone's taste or style. Therefore, you should use Novel AI APK with caution and discretion. You should not rely on it entirely, but rather use it as a tool to supplement your own creativity and skills. You should also be respectful and responsible when using it, and avoid creating or sharing any harmful or offensive content. If you are interested in trying Novel AI APK, you can download it from [Google Play Store] or [APKCombo]. You can also visit their [website] or [Discord server] for more information and support. We hope this article has helped you understand what Novel AI APK is and how to use it. If you have any questions or feedback, please feel free to leave a comment below.

    FAQs

    - Here are some of the frequently asked questions about Novel AI APK: - Q: How much does Novel AI APK cost? - A: Novel AI APK is free to download and use, but it has some in-app purchases and subscriptions that you can buy to unlock more features and modules. - Q: Is Novel AI APK safe to use? - A: Novel AI APK is generally safe to use, but you should be careful about the content you generate and share with it. Some of the content may be inappropriate or offensive for some audiences, and some of the content may be protected by intellectual property rights. You should also be aware of the privacy and security risks of using an online service that uses your data and information. - Q: How can I improve the quality of the output from Novel AI APK? - A: There are some tips and tricks that you can use to improve the quality of the output from Novel AI APK. Some of them are: - Use specific and descriptive keywords or prompts to guide the AI. - Use proper grammar, spelling, punctuation, and capitalization in your input. - Use feedback loops to refine your output by editing, deleting, or adding text. - Use different genres, styles, themes, or modules to experiment with different outputs. - Use the lorebook feature to create consistent and coherent worldbuilding elements for your stories. - Q: Can I use Novel AI APK for commercial purposes? - A: Novel AI APK is intended for personal and non-commercial use only. You should not use it for any commercial purposes, such as selling, publishing, or monetizing your stories or images. You should also not claim ownership or authorship of the content generated by Novel AI APK, as it may infringe on the rights of the original creators or sources. - Q: Can I suggest new features or modules for Novel AI APK? - A: Yes, you can suggest new features or modules for Novel AI APK by contacting the developers through their [website] or [Discord server]. They are always open to feedback and suggestions from their users.

    -

    novel ai apk


    Download »»» https://gohhs.com/2uPuZL



    \ No newline at end of file diff --git a/spaces/fffiloni/ControlNet-Video/model.py b/spaces/fffiloni/ControlNet-Video/model.py deleted file mode 100644 index f426fd606e4c1fd4fc23785218aaa0b0fa6a5279..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/ControlNet-Video/model.py +++ /dev/null @@ -1,760 +0,0 @@ -# This file is adapted from gradio_*.py in https://github.com/lllyasviel/ControlNet/tree/f4748e3630d8141d7765e2bd9b1e348f47847707 -# The original license file is LICENSE.ControlNet in this repo. -from __future__ import annotations - -import pathlib -import random -import shlex -import subprocess -import sys - -import cv2 -import einops -import numpy as np -import torch -from huggingface_hub import hf_hub_url -from pytorch_lightning import seed_everything - -sys.path.append('ControlNet') - -import config -from annotator.canny import apply_canny -from annotator.hed import apply_hed, nms -from annotator.midas import apply_midas -from annotator.mlsd import apply_mlsd -from annotator.openpose import apply_openpose -from annotator.uniformer import apply_uniformer -from annotator.util import HWC3, resize_image -from cldm.model import create_model, load_state_dict -from ldm.models.diffusion.ddim import DDIMSampler -from share import * - - -MODEL_NAMES = { - 'canny': 'control_canny-fp16.safetensors', - 'hough': 'control_mlsd-fp16.safetensors', - 'hed': 'control_hed-fp16.safetensors', - 'scribble': 'control_scribble-fp16.safetensors', - 'pose': 'control_openpose-fp16.safetensors', - 'seg': 'control_seg-fp16.safetensors', - 'depth': 'control_depth-fp16.safetensors', - 'normal': 'control_normal-fp16.safetensors', -} - -MODEL_REPO = 'webui/ControlNet-modules-safetensors' - -DEFAULT_BASE_MODEL_REPO = 'runwayml/stable-diffusion-v1-5' -DEFAULT_BASE_MODEL_FILENAME = 'v1-5-pruned-emaonly.safetensors' -DEFAULT_BASE_MODEL_URL = 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors' - -class Model: - def __init__(self, - model_config_path: str = 'ControlNet/models/cldm_v15.yaml', - model_dir: str = 'models'): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.model = create_model(model_config_path).to(self.device) - self.ddim_sampler = DDIMSampler(self.model) - self.task_name = '' - - self.base_model_url = '' - - self.model_dir = pathlib.Path(model_dir) - self.model_dir.mkdir(exist_ok=True, parents=True) - - self.download_models() - self.set_base_model(DEFAULT_BASE_MODEL_REPO, - DEFAULT_BASE_MODEL_FILENAME) - - def set_base_model(self, model_id: str, filename: str) -> str: - if not model_id or not filename: - return self.base_model_url - base_model_url = hf_hub_url(model_id, filename) - if base_model_url != self.base_model_url: - self.load_base_model(base_model_url) - self.base_model_url = base_model_url - return self.base_model_url - - - def download_base_model(self, model_url: str) -> pathlib.Path: - self.model_dir.mkdir(exist_ok=True, parents=True) - model_name = model_url.split('/')[-1] - out_path = self.model_dir / model_name - if not out_path.exists(): - subprocess.run(shlex.split(f'wget {model_url} -O {out_path}')) - return out_path - - def load_base_model(self, model_url: str) -> None: - model_path = self.download_base_model(model_url) - self.model.load_state_dict(load_state_dict(model_path, - location=self.device.type), - strict=False) - - def load_weight(self, task_name: str) -> None: - if task_name == self.task_name: - return - weight_path = self.get_weight_path(task_name) - 
self.model.control_model.load_state_dict( - load_state_dict(weight_path, location=self.device.type)) - self.task_name = task_name - - def get_weight_path(self, task_name: str) -> str: - if 'scribble' in task_name: - task_name = 'scribble' - return f'{self.model_dir}/{MODEL_NAMES[task_name]}' - - def download_models(self) -> None: - self.model_dir.mkdir(exist_ok=True, parents=True) - for name in MODEL_NAMES.values(): - out_path = self.model_dir / name - if out_path.exists(): - continue - model_url = hf_hub_url(MODEL_REPO, name) - subprocess.run(shlex.split(f'wget {model_url} -O {out_path}')) - - @torch.inference_mode() - def process_canny(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, ddim_steps, scale, seed, - eta, low_threshold, high_threshold): - self.load_weight('canny') - - img = resize_image(HWC3(input_image), image_resolution) - H, W, C = img.shape - - detected_map = apply_canny(img, low_threshold, high_threshold) - detected_map = HWC3(detected_map) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_hough(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta, value_threshold, - distance_threshold): - self.load_weight('hough') - - input_image = HWC3(input_image) - detected_map = apply_mlsd(resize_image(input_image, detect_resolution), - value_threshold, distance_threshold) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, 
H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [ - 255 - cv2.dilate(detected_map, - np.ones(shape=(3, 3), dtype=np.uint8), - iterations=1) - ] + results - - @torch.inference_mode() - def process_hed(self, input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('hed') - - input_image = HWC3(input_image) - detected_map = apply_hed(resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_scribble(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('scribble') - - img = resize_image(HWC3(input_image), image_resolution) - H, W, C = img.shape - - detected_map = np.zeros_like(img, dtype=np.uint8) - detected_map[np.min(img, axis=2) < 127] = 255 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } 
- shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_scribble_interactive(self, input_image, prompt, a_prompt, - n_prompt, num_samples, image_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('scribble') - - img = resize_image(HWC3(input_image['mask'][:, :, 0]), - image_resolution) - H, W, C = img.shape - - detected_map = np.zeros_like(img, dtype=np.uint8) - detected_map[np.min(img, axis=2) > 127] = 255 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_fake_scribble(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('scribble') - - input_image = HWC3(input_image) - detected_map = apply_hed(resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - detected_map = nms(detected_map, 127, 3.0) - detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0) - detected_map[detected_map > 4] = 255 - detected_map[detected_map < 255] = 0 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - 
[prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - @torch.inference_mode() - def process_pose(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('pose') - - input_image = HWC3(input_image) - detected_map, _ = apply_openpose( - resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_seg(self, input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, - seed, eta): - self.load_weight('seg') - - input_image = HWC3(input_image) - detected_map = apply_uniformer( - resize_image(input_image, detect_resolution)) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_NEAREST) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 
'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_depth(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta): - self.load_weight('depth') - - input_image = HWC3(input_image) - detected_map, _ = apply_midas( - resize_image(input_image, detect_resolution)) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) - seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results - - @torch.inference_mode() - def process_normal(self, input_image, prompt, a_prompt, n_prompt, - num_samples, image_resolution, detect_resolution, - ddim_steps, scale, seed, eta, bg_threshold): - self.load_weight('normal') - - input_image = HWC3(input_image) - _, detected_map = apply_midas(resize_image(input_image, - detect_resolution), - bg_th=bg_threshold) - detected_map = HWC3(detected_map) - img = resize_image(input_image, image_resolution) - H, W, C = img.shape - - detected_map = cv2.resize(detected_map, (W, H), - interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy( - detected_map[:, :, ::-1].copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - if seed == -1: - seed = random.randint(0, 65535) 
- seed_everything(seed) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - cond = { - 'c_concat': [control], - 'c_crossattn': [ - self.model.get_learned_conditioning( - [prompt + ', ' + a_prompt] * num_samples) - ] - } - un_cond = { - 'c_concat': [control], - 'c_crossattn': - [self.model.get_learned_conditioning([n_prompt] * num_samples)] - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=True) - - samples, intermediates = self.ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if config.save_memory: - self.model.low_vram_shift(is_diffusing=False) - - x_samples = self.model.decode_first_stage(samples) - x_samples = ( - einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + - 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [detected_map] + results \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/train.py b/spaces/fffiloni/Music_Source_Separation/bytesep/train.py deleted file mode 100644 index bf4f6fb4d815bb791b7d578aca055124e495bb93..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/bytesep/train.py +++ /dev/null @@ -1,299 +0,0 @@ -import argparse -import logging -import os -import pathlib -from functools import partial -from typing import List, NoReturn - -import pytorch_lightning as pl -from pytorch_lightning.plugins import DDPPlugin - -from bytesep.callbacks import get_callbacks -from bytesep.data.augmentors import Augmentor -from bytesep.data.batch_data_preprocessors import ( - get_batch_data_preprocessor_class, -) -from bytesep.data.data_modules import DataModule, Dataset -from bytesep.data.samplers import SegmentSampler -from bytesep.losses import get_loss_function -from bytesep.models.lightning_modules import ( - LitSourceSeparation, - get_model_class, -) -from bytesep.optimizers.lr_schedulers import get_lr_lambda -from bytesep.utils import ( - create_logging, - get_pitch_shift_factor, - read_yaml, - check_configs_gramma, -) - - -def get_dirs( - workspace: str, task_name: str, filename: str, config_yaml: str, gpus: int -) -> List[str]: - r"""Get directories. 
- - Args: - workspace: str - task_name, str, e.g., 'musdb18' - filenmae: str - config_yaml: str - gpus: int, e.g., 0 for cpu and 8 for training with 8 gpu cards - - Returns: - checkpoints_dir: str - logs_dir: str - logger: pl.loggers.TensorBoardLogger - statistics_path: str - """ - - # save checkpoints dir - checkpoints_dir = os.path.join( - workspace, - "checkpoints", - task_name, - filename, - "config={},gpus={}".format(pathlib.Path(config_yaml).stem, gpus), - ) - os.makedirs(checkpoints_dir, exist_ok=True) - - # logs dir - logs_dir = os.path.join( - workspace, - "logs", - task_name, - filename, - "config={},gpus={}".format(pathlib.Path(config_yaml).stem, gpus), - ) - os.makedirs(logs_dir, exist_ok=True) - - # loggings - create_logging(logs_dir, filemode='w') - logging.info(args) - - # tensorboard logs dir - tb_logs_dir = os.path.join(workspace, "tensorboard_logs") - os.makedirs(tb_logs_dir, exist_ok=True) - - experiment_name = os.path.join(task_name, filename, pathlib.Path(config_yaml).stem) - logger = pl.loggers.TensorBoardLogger(save_dir=tb_logs_dir, name=experiment_name) - - # statistics path - statistics_path = os.path.join( - workspace, - "statistics", - task_name, - filename, - "config={},gpus={}".format(pathlib.Path(config_yaml).stem, gpus), - "statistics.pkl", - ) - os.makedirs(os.path.dirname(statistics_path), exist_ok=True) - - return checkpoints_dir, logs_dir, logger, statistics_path - - -def _get_data_module( - workspace: str, config_yaml: str, num_workers: int, distributed: bool -) -> DataModule: - r"""Create data_module. Mini-batch data can be obtained by: - - code-block:: python - - data_module.setup() - for batch_data_dict in data_module.train_dataloader(): - print(batch_data_dict.keys()) - break - - Args: - workspace: str - config_yaml: str - num_workers: int, e.g., 0 for non-parallel and 8 for using cpu cores - for preparing data in parallel - distributed: bool - - Returns: - data_module: DataModule - """ - - configs = read_yaml(config_yaml) - input_source_types = configs['train']['input_source_types'] - indexes_path = os.path.join(workspace, configs['train']['indexes_dict']) - sample_rate = configs['train']['sample_rate'] - segment_seconds = configs['train']['segment_seconds'] - mixaudio_dict = configs['train']['augmentations']['mixaudio'] - augmentations = configs['train']['augmentations'] - max_pitch_shift = max( - [ - augmentations['pitch_shift'][source_type] - for source_type in input_source_types - ] - ) - batch_size = configs['train']['batch_size'] - steps_per_epoch = configs['train']['steps_per_epoch'] - - segment_samples = int(segment_seconds * sample_rate) - ex_segment_samples = int(segment_samples * get_pitch_shift_factor(max_pitch_shift)) - - # sampler - train_sampler = SegmentSampler( - indexes_path=indexes_path, - segment_samples=ex_segment_samples, - mixaudio_dict=mixaudio_dict, - batch_size=batch_size, - steps_per_epoch=steps_per_epoch, - ) - - # augmentor - augmentor = Augmentor(augmentations=augmentations) - - # dataset - train_dataset = Dataset(augmentor, segment_samples) - - # data module - data_module = DataModule( - train_sampler=train_sampler, - train_dataset=train_dataset, - num_workers=num_workers, - distributed=distributed, - ) - - return data_module - - -def train(args) -> NoReturn: - r"""Train & evaluate and save checkpoints. 
- - Args: - workspace: str, directory of workspace - gpus: int - config_yaml: str, path of config file for training - """ - - # arugments & parameters - workspace = args.workspace - gpus = args.gpus - config_yaml = args.config_yaml - filename = args.filename - - num_workers = 8 - distributed = True if gpus > 1 else False - evaluate_device = "cuda" if gpus > 0 else "cpu" - - # Read config file. - configs = read_yaml(config_yaml) - check_configs_gramma(configs) - task_name = configs['task_name'] - target_source_types = configs['train']['target_source_types'] - target_sources_num = len(target_source_types) - channels = configs['train']['channels'] - batch_data_preprocessor_type = configs['train']['batch_data_preprocessor'] - model_type = configs['train']['model_type'] - loss_type = configs['train']['loss_type'] - optimizer_type = configs['train']['optimizer_type'] - learning_rate = float(configs['train']['learning_rate']) - precision = configs['train']['precision'] - early_stop_steps = configs['train']['early_stop_steps'] - warm_up_steps = configs['train']['warm_up_steps'] - reduce_lr_steps = configs['train']['reduce_lr_steps'] - - # paths - checkpoints_dir, logs_dir, logger, statistics_path = get_dirs( - workspace, task_name, filename, config_yaml, gpus - ) - - # training data module - data_module = _get_data_module( - workspace=workspace, - config_yaml=config_yaml, - num_workers=num_workers, - distributed=distributed, - ) - - # batch data preprocessor - BatchDataPreprocessor = get_batch_data_preprocessor_class( - batch_data_preprocessor_type=batch_data_preprocessor_type - ) - - batch_data_preprocessor = BatchDataPreprocessor( - target_source_types=target_source_types - ) - - # model - Model = get_model_class(model_type=model_type) - model = Model(input_channels=channels, target_sources_num=target_sources_num) - - # loss function - loss_function = get_loss_function(loss_type=loss_type) - - # callbacks - callbacks = get_callbacks( - task_name=task_name, - config_yaml=config_yaml, - workspace=workspace, - checkpoints_dir=checkpoints_dir, - statistics_path=statistics_path, - logger=logger, - model=model, - evaluate_device=evaluate_device, - ) - # callbacks = [] - - # learning rate reduce function - lr_lambda = partial( - get_lr_lambda, warm_up_steps=warm_up_steps, reduce_lr_steps=reduce_lr_steps - ) - - # pytorch-lightning model - pl_model = LitSourceSeparation( - batch_data_preprocessor=batch_data_preprocessor, - model=model, - optimizer_type=optimizer_type, - loss_function=loss_function, - learning_rate=learning_rate, - lr_lambda=lr_lambda, - ) - - # trainer - trainer = pl.Trainer( - checkpoint_callback=False, - gpus=gpus, - callbacks=callbacks, - max_steps=early_stop_steps, - accelerator="ddp", - sync_batchnorm=True, - precision=precision, - replace_sampler_ddp=False, - plugins=[DDPPlugin(find_unused_parameters=True)], - profiler='simple', - ) - - # Fit, evaluate, and save checkpoints. - trainer.fit(pl_model, data_module) - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser(description="") - subparsers = parser.add_subparsers(dest="mode") - - parser_train = subparsers.add_parser("train") - parser_train.add_argument( - "--workspace", type=str, required=True, help="Directory of workspace." 
- ) - parser_train.add_argument("--gpus", type=int, required=True) - parser_train.add_argument( - "--config_yaml", - type=str, - required=True, - help="Path of config file for training.", - ) - - args = parser.parse_args() - args.filename = pathlib.Path(__file__).stem - - if args.mode == "train": - train(args) - - else: - raise Exception("Error argument!") diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.js deleted file mode 100644 index 1bfabf6c9abcbb9e2b6cd1984e67a0c5f33ac4eb..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.js +++ /dev/null @@ -1,394 +0,0 @@ -"use strict"; -var _a; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.SessionAwareAdapter = exports.Adapter = void 0; -const events_1 = require("events"); -const yeast_1 = require("./contrib/yeast"); -const WebSocket = require("ws"); -const canPreComputeFrame = typeof ((_a = WebSocket === null || WebSocket === void 0 ? void 0 : WebSocket.Sender) === null || _a === void 0 ? void 0 : _a.frame) === "function"; -class Adapter extends events_1.EventEmitter { - /** - * In-memory adapter constructor. - * - * @param {Namespace} nsp - */ - constructor(nsp) { - super(); - this.nsp = nsp; - this.rooms = new Map(); - this.sids = new Map(); - this.encoder = nsp.server.encoder; - } - /** - * To be overridden - */ - init() { } - /** - * To be overridden - */ - close() { } - /** - * Returns the number of Socket.IO servers in the cluster - * - * @public - */ - serverCount() { - return Promise.resolve(1); - } - /** - * Adds a socket to a list of room. - * - * @param {SocketId} id the socket id - * @param {Set} rooms a set of rooms - * @public - */ - addAll(id, rooms) { - if (!this.sids.has(id)) { - this.sids.set(id, new Set()); - } - for (const room of rooms) { - this.sids.get(id).add(room); - if (!this.rooms.has(room)) { - this.rooms.set(room, new Set()); - this.emit("create-room", room); - } - if (!this.rooms.get(room).has(id)) { - this.rooms.get(room).add(id); - this.emit("join-room", room, id); - } - } - } - /** - * Removes a socket from a room. - * - * @param {SocketId} id the socket id - * @param {Room} room the room name - */ - del(id, room) { - if (this.sids.has(id)) { - this.sids.get(id).delete(room); - } - this._del(room, id); - } - _del(room, id) { - const _room = this.rooms.get(room); - if (_room != null) { - const deleted = _room.delete(id); - if (deleted) { - this.emit("leave-room", room, id); - } - if (_room.size === 0 && this.rooms.delete(room)) { - this.emit("delete-room", room); - } - } - } - /** - * Removes a socket from all rooms it's joined. - * - * @param {SocketId} id the socket id - */ - delAll(id) { - if (!this.sids.has(id)) { - return; - } - for (const room of this.sids.get(id)) { - this._del(room, id); - } - this.sids.delete(id); - } - /** - * Broadcasts a packet. 
- * - * Options: - * - `flags` {Object} flags for this packet - * - `except` {Array} sids that should be excluded - * - `rooms` {Array} list of rooms to broadcast to - * - * @param {Object} packet the packet object - * @param {Object} opts the options - * @public - */ - broadcast(packet, opts) { - const flags = opts.flags || {}; - const packetOpts = { - preEncoded: true, - volatile: flags.volatile, - compress: flags.compress, - }; - packet.nsp = this.nsp.name; - const encodedPackets = this._encode(packet, packetOpts); - this.apply(opts, (socket) => { - if (typeof socket.notifyOutgoingListeners === "function") { - socket.notifyOutgoingListeners(packet); - } - socket.client.writeToEngine(encodedPackets, packetOpts); - }); - } - /** - * Broadcasts a packet and expects multiple acknowledgements. - * - * Options: - * - `flags` {Object} flags for this packet - * - `except` {Array} sids that should be excluded - * - `rooms` {Array} list of rooms to broadcast to - * - * @param {Object} packet the packet object - * @param {Object} opts the options - * @param clientCountCallback - the number of clients that received the packet - * @param ack - the callback that will be called for each client response - * - * @public - */ - broadcastWithAck(packet, opts, clientCountCallback, ack) { - const flags = opts.flags || {}; - const packetOpts = { - preEncoded: true, - volatile: flags.volatile, - compress: flags.compress, - }; - packet.nsp = this.nsp.name; - // we can use the same id for each packet, since the _ids counter is common (no duplicate) - packet.id = this.nsp._ids++; - const encodedPackets = this._encode(packet, packetOpts); - let clientCount = 0; - this.apply(opts, (socket) => { - // track the total number of acknowledgements that are expected - clientCount++; - // call the ack callback for each client response - socket.acks.set(packet.id, ack); - if (typeof socket.notifyOutgoingListeners === "function") { - socket.notifyOutgoingListeners(packet); - } - socket.client.writeToEngine(encodedPackets, packetOpts); - }); - clientCountCallback(clientCount); - } - _encode(packet, packetOpts) { - const encodedPackets = this.encoder.encode(packet); - if (canPreComputeFrame && - encodedPackets.length === 1 && - typeof encodedPackets[0] === "string") { - // "4" being the "message" packet type in the Engine.IO protocol - const data = Buffer.from("4" + encodedPackets[0]); - // see https://github.com/websockets/ws/issues/617#issuecomment-283002469 - packetOpts.wsPreEncodedFrame = WebSocket.Sender.frame(data, { - readOnly: false, - mask: false, - rsv1: false, - opcode: 1, - fin: true, - }); - } - return encodedPackets; - } - /** - * Gets a list of sockets by sid. - * - * @param {Set} rooms the explicit set of rooms to check. - */ - sockets(rooms) { - const sids = new Set(); - this.apply({ rooms }, (socket) => { - sids.add(socket.id); - }); - return Promise.resolve(sids); - } - /** - * Gets the list of rooms a given socket has joined. 
- * - * @param {SocketId} id the socket id - */ - socketRooms(id) { - return this.sids.get(id); - } - /** - * Returns the matching socket instances - * - * @param opts - the filters to apply - */ - fetchSockets(opts) { - const sockets = []; - this.apply(opts, (socket) => { - sockets.push(socket); - }); - return Promise.resolve(sockets); - } - /** - * Makes the matching socket instances join the specified rooms - * - * @param opts - the filters to apply - * @param rooms - the rooms to join - */ - addSockets(opts, rooms) { - this.apply(opts, (socket) => { - socket.join(rooms); - }); - } - /** - * Makes the matching socket instances leave the specified rooms - * - * @param opts - the filters to apply - * @param rooms - the rooms to leave - */ - delSockets(opts, rooms) { - this.apply(opts, (socket) => { - rooms.forEach((room) => socket.leave(room)); - }); - } - /** - * Makes the matching socket instances disconnect - * - * @param opts - the filters to apply - * @param close - whether to close the underlying connection - */ - disconnectSockets(opts, close) { - this.apply(opts, (socket) => { - socket.disconnect(close); - }); - } - apply(opts, callback) { - const rooms = opts.rooms; - const except = this.computeExceptSids(opts.except); - if (rooms.size) { - const ids = new Set(); - for (const room of rooms) { - if (!this.rooms.has(room)) - continue; - for (const id of this.rooms.get(room)) { - if (ids.has(id) || except.has(id)) - continue; - const socket = this.nsp.sockets.get(id); - if (socket) { - callback(socket); - ids.add(id); - } - } - } - } - else { - for (const [id] of this.sids) { - if (except.has(id)) - continue; - const socket = this.nsp.sockets.get(id); - if (socket) - callback(socket); - } - } - } - computeExceptSids(exceptRooms) { - const exceptSids = new Set(); - if (exceptRooms && exceptRooms.size > 0) { - for (const room of exceptRooms) { - if (this.rooms.has(room)) { - this.rooms.get(room).forEach((sid) => exceptSids.add(sid)); - } - } - } - return exceptSids; - } - /** - * Send a packet to the other Socket.IO servers in the cluster - * @param packet - an array of arguments, which may include an acknowledgement callback at the end - */ - serverSideEmit(packet) { - console.warn("this adapter does not support the serverSideEmit() functionality"); - } - /** - * Save the client session in order to restore it upon reconnection. - */ - persistSession(session) { } - /** - * Restore the session and find the packets that were missed by the client. 
- * @param pid - * @param offset - */ - restoreSession(pid, offset) { - return null; - } -} -exports.Adapter = Adapter; -class SessionAwareAdapter extends Adapter { - constructor(nsp) { - super(nsp); - this.nsp = nsp; - this.sessions = new Map(); - this.packets = []; - this.maxDisconnectionDuration = - nsp.server.opts.connectionStateRecovery.maxDisconnectionDuration; - const timer = setInterval(() => { - const threshold = Date.now() - this.maxDisconnectionDuration; - this.sessions.forEach((session, sessionId) => { - const hasExpired = session.disconnectedAt < threshold; - if (hasExpired) { - this.sessions.delete(sessionId); - } - }); - for (let i = this.packets.length - 1; i >= 0; i--) { - const hasExpired = this.packets[i].emittedAt < threshold; - if (hasExpired) { - this.packets.splice(0, i + 1); - break; - } - } - }, 60 * 1000); - // prevents the timer from keeping the process alive - timer.unref(); - } - persistSession(session) { - session.disconnectedAt = Date.now(); - this.sessions.set(session.pid, session); - } - restoreSession(pid, offset) { - const session = this.sessions.get(pid); - if (!session) { - // the session may have expired - return null; - } - const hasExpired = session.disconnectedAt + this.maxDisconnectionDuration < Date.now(); - if (hasExpired) { - // the session has expired - this.sessions.delete(pid); - return null; - } - const index = this.packets.findIndex((packet) => packet.id === offset); - if (index === -1) { - // the offset may be too old - return null; - } - const missedPackets = []; - for (let i = index + 1; i < this.packets.length; i++) { - const packet = this.packets[i]; - if (shouldIncludePacket(session.rooms, packet.opts)) { - missedPackets.push(packet.data); - } - } - return Promise.resolve(Object.assign(Object.assign({}, session), { missedPackets })); - } - broadcast(packet, opts) { - var _a; - const isEventPacket = packet.type === 2; - // packets with acknowledgement are not stored because the acknowledgement function cannot be serialized and - // restored on another server upon reconnection - const withoutAcknowledgement = packet.id === undefined; - const notVolatile = ((_a = opts.flags) === null || _a === void 0 ? 
void 0 : _a.volatile) === undefined; - if (isEventPacket && withoutAcknowledgement && notVolatile) { - const id = (0, yeast_1.yeast)(); - // the offset is stored at the end of the data array, so the client knows the ID of the last packet it has - // processed (and the format is backward-compatible) - packet.data.push(id); - this.packets.push({ - id, - opts, - data: packet.data, - emittedAt: Date.now(), - }); - } - super.broadcast(packet, opts); - } -} -exports.SessionAwareAdapter = SessionAwareAdapter; -function shouldIncludePacket(sessionRooms, opts) { - const included = opts.rooms.size === 0 || sessionRooms.some((room) => opts.rooms.has(room)); - const notExcluded = sessionRooms.every((room) => !opts.except.has(room)); - return included && notExcluded; -} diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/utils.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/utils.py deleted file mode 100644 index d0914320eab96e197ae379b94ea7eeb2fe5dfd79..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/utils.py +++ /dev/null @@ -1,174 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, 
frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/python/dqn/__init__.py b/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/python/dqn/__init__.py deleted file mode 100644 index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/python/dqn/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stable_baselines3.dqn.dqn import DQN -from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy diff --git a/spaces/focusit/BhagwadGita/app.py b/spaces/focusit/BhagwadGita/app.py deleted file mode 100644 index 4d2f8ccb3dbaad17d3290d4023b47792f145cf56..0000000000000000000000000000000000000000 --- a/spaces/focusit/BhagwadGita/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import google.generativeai as palm -import streamlit as st -import os - -st.set_page_config(layout="wide") - -st.markdown(""" - -""", unsafe_allow_html=True) - - - -# Set your API key -palm.configure(api_key = os.environ['PALM_KEY']) - -# Select the PaLM 2 model -model = 'models/text-bison-001' - - -# Generate text -if prompt := st.chat_input("Ask your query..."): - enprom = 
f"""Answer the below provided input in context to Bhagwad Geeta. Use the verses and chapters sentences as references to your answer with suggestions - coming from Bhagwad Geeta. Your answer to below input should only be in context to Bhagwad geeta only.\nInput= {prompt}""" - completion = palm.generate_text(model=model, prompt=enprom, temperature=0.5, max_output_tokens=800) - -# response = palm.chat(messages=["Hello."]) -# print(response.last) # 'Hello! What can I help you with?' -# response.reply("Can you tell me a joke?") - -# Print the generated text - with st.chat_message("Assistant"): - st.write(completion.result) diff --git a/spaces/fsdl2022emotion/meme-manipulation-gradio-space/emotion_synthesizer/models/__init__.py b/spaces/fsdl2022emotion/meme-manipulation-gradio-space/emotion_synthesizer/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fuckyoudeki/AutoGPT/CONTRIBUTING.md b/spaces/fuckyoudeki/AutoGPT/CONTRIBUTING.md deleted file mode 100644 index 79169a0c1951853303f73ffa1fddb3518685606a..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/CONTRIBUTING.md +++ /dev/null @@ -1,105 +0,0 @@ -# Contributing to ProjectName - -First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. - -This document provides guidelines and best practices to help you contribute effectively. - -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - -## Code of Conduct - -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. - -## 📢 A Quick Word -Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. - -However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template). -> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates! - -## Getting Started - -To start contributing, follow these steps: - -1. Fork the repository and clone your fork. -2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). -3. Make your changes in the new branch. -4. Test your changes thoroughly. -5. Commit and push your changes to your fork. -6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. - -## How to Contribute - -### Reporting Bugs - -If you find a bug in the project, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A description of the problem, including steps to reproduce the issue. -- Any relevant logs, screenshots, or other supporting information. 
- -### Suggesting Enhancements - -If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. -- Any relevant examples, mockups, or supporting information. - -### Submitting Pull Requests - -When submitting a pull request, please ensure that your changes meet the following criteria: - -- Your pull request should be atomic and focus on a single change. -- Your pull request should include tests for your change. -- You should have thoroughly tested your changes with multiple different prompts. -- You should have considered potential risks and mitigations for your changes. -- You should have documented your changes clearly and comprehensively. -- You should not include any unrelated or "extra" small tweaks or changes. - -## Style Guidelines - -### Code Formatting - -We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`: - -```bash -pip install black -``` - -To format your code, run the following command in the project's root directory: - -```bash -black . -``` -### Pre-Commit Hooks -We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: - -Install the pre-commit package using pip: -```bash -pip install pre-commit -``` - -Run the following command in the project's root directory to install the pre-commit hooks: -```bash -pre-commit install -``` - -Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. - -If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. - -Happy coding, and once again, thank you for your contributions! - -Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: - -https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/m2m_100/tokenizers/tokenize_zh.py b/spaces/gradio/HuBERT/examples/m2m_100/tokenizers/tokenize_zh.py deleted file mode 100644 index 674b5849cba829cf4f07a69369e9cc6eed376d4c..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/m2m_100/tokenizers/tokenize_zh.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import fileinput - -import sacrebleu - - -for line in fileinput.input(): - print(sacrebleu.tokenize_zh(line)) diff --git a/spaces/gradio/HuBERT/fairseq/models/lightconv.py b/spaces/gradio/HuBERT/fairseq/models/lightconv.py deleted file mode 100644 index b614da366513091132c8b6bd8b8e170cce33a1c4..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/lightconv.py +++ /dev/null @@ -1,1018 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - DynamicConv, - FairseqDropout, - LayerNorm, - LightweightConv, - MultiheadAttention, - PositionalEmbedding, -) - - -@register_model("lightconv") -class LightConvModel(FairseqEncoderDecoderModel): - """ - LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019) - `_. - To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - 
type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out 
is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x 
(Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 
args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - 
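# A minimal, self-contained sketch of the getattr-default pattern that base_architecture()
# and the registered "lightconv_*" variants above rely on: values the user set on `args`
# win, missing fields fall back to the architecture defaults, and a single kernel size is
# broadcast to one entry per layer. `apply_defaults` and the bare argparse.Namespace are
# illustrative stand-ins for this pattern only, not the fairseq API itself.
from argparse import Namespace


def apply_defaults(args: Namespace) -> Namespace:
    # explicit user values are kept; absent fields get the architecture defaults
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
    # decoder defaults chain off the encoder values, as in base_architecture() above
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
    args.encoder_layers = getattr(args, "encoder_layers", 7)
    args.encoder_kernel_size_list = getattr(
        args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31]
    )
    # a length-1 kernel list is repeated once per layer, mirroring the checks above
    if len(args.encoder_kernel_size_list) == 1:
        args.encoder_kernel_size_list = args.encoder_kernel_size_list * args.encoder_layers
    return args


if __name__ == "__main__":
    cfg = apply_defaults(Namespace(encoder_embed_dim=256, encoder_kernel_size_list=[7]))
    print(cfg.decoder_embed_dim)         # 256, inherited from the encoder override
    print(cfg.encoder_kernel_size_list)  # [7, 7, 7, 7, 7, 7, 7]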
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/spaces/gradio/HuBERT/tests/speech_recognition/test_vggtransformer.py b/spaces/gradio/HuBERT/tests/speech_recognition/test_vggtransformer.py deleted file mode 100644 index 4dc73b8c7379970dc0bcc16fcb088a64a1bd7e3b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/speech_recognition/test_vggtransformer.py +++ /dev/null @@ -1,135 +0,0 @@ -#!/usr/bin/env python3 - -# import models/encoder/decoder to be tested -from examples.speech_recognition.models.vggtransformer import ( - TransformerDecoder, - VGGTransformerEncoder, - VGGTransformerModel, - vggtransformer_1, - vggtransformer_2, - vggtransformer_base, -) - -# import base test class -from .asr_test_base import ( - DEFAULT_TEST_VOCAB_SIZE, - TestFairseqDecoderBase, - TestFairseqEncoderBase, - TestFairseqEncoderDecoderModelBase, - get_dummy_dictionary, - get_dummy_encoder_output, - get_dummy_input, -) - - -class VGGTransformerModelTest_mid(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_1 use 14 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. - """ - args.transformer_enc_config = ( - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_1, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerModelTest_big(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_2 use 16 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. 
- """ - args.transformer_enc_config = ( - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_2, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerModelTest_base(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_base use 12 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. - """ - args.transformer_enc_config = ( - "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_base, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerEncoderTest(TestFairseqEncoderBase): - def setUp(self): - super().setUp() - - self.setUpInput(get_dummy_input(T=50, D=80, B=5)) - - def test_forward(self): - print("1. test standard vggtransformer") - self.setUpEncoder(VGGTransformerEncoder(input_feat_per_channel=80)) - super().test_forward() - print("2. test vggtransformer with limited right context") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, transformer_context=(-1, 5) - ) - ) - super().test_forward() - print("3. test vggtransformer with limited left context") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, transformer_context=(5, -1) - ) - ) - super().test_forward() - print("4. test vggtransformer with limited right context and sampling") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, - transformer_context=(-1, 12), - transformer_sampling=(2, 2), - ) - ) - super().test_forward() - print("5. test vggtransformer with windowed context and sampling") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, - transformer_context=(12, 12), - transformer_sampling=(2, 2), - ) - ) - - -class TransformerDecoderTest(TestFairseqDecoderBase): - def setUp(self): - super().setUp() - - dict = get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE) - decoder = TransformerDecoder(dict) - dummy_encoder_output = get_dummy_encoder_output(encoder_out_shape=(50, 5, 256)) - - self.setUpDecoder(decoder) - self.setUpInput(dummy_encoder_output) - self.setUpPrevOutputTokens() diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/analyze_model.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/analyze_model.py deleted file mode 100644 index 9c06ea4b5fbfd551d85702171976f9bc33f2e275..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/analyze_model.py +++ /dev/null @@ -1,127 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import logging -import numpy as np -from collections import Counter -import tqdm - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import build_detection_test_loader -from detectron2.engine import default_argument_parser -from detectron2.modeling import build_model -from detectron2.utils.analysis import ( - activation_count_operators, - flop_count_operators, - parameter_count_table, -) -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger("detectron2") - - -def setup(args): - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.DATALOADER.NUM_WORKERS = 0 - cfg.merge_from_list(args.opts) - cfg.freeze() - setup_logger() - return cfg - - -def do_flop(cfg): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - model.eval() - - counts = Counter() - total_flops = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - count = flop_count_operators(model, data) - counts += count - total_flops.append(sum(count.values())) - logger.info( - "(G)Flops for Each Type of Operators:\n" + str([(k, v / idx) for k, v in counts.items()]) - ) - logger.info("Total (G)Flops: {}±{}".format(np.mean(total_flops), np.std(total_flops))) - - -def do_activation(cfg): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - model.eval() - - counts = Counter() - total_activations = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - count = activation_count_operators(model, data) - counts += count - total_activations.append(sum(count.values())) - logger.info( - "(Million) Activations for Each Type of Operators:\n" - + str([(k, v / idx) for k, v in counts.items()]) - ) - logger.info( - "Total (Million) Activations: {}±{}".format( - np.mean(total_activations), np.std(total_activations) - ) - ) - - -def do_parameter(cfg): - model = build_model(cfg) - logger.info("Parameter Count:\n" + parameter_count_table(model, max_depth=5)) - - -def do_structure(cfg): - model = build_model(cfg) - logger.info("Model Structure:\n" + str(model)) - - -if __name__ == "__main__": - parser = default_argument_parser( - epilog=""" -Examples: - -To show parameters of a model: -$ ./analyze_model.py --tasks parameter \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml - -Flops and activations are data-dependent, therefore inputs and model weights -are needed to count them: - -$ ./analyze_model.py --num-inputs 100 --tasks flop \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \\ - MODEL.WEIGHTS /path/to/model.pkl -""" - ) - parser.add_argument( - "--tasks", - choices=["flop", "activation", "parameter", "structure"], - required=True, - nargs="+", - ) - parser.add_argument( - "--num-inputs", - default=100, - type=int, - help="number of inputs used to compute statistics for flops/activations, " - "both are data dependent.", - ) - args = parser.parse_args() - assert not args.eval_only - assert args.num_gpus == 1 - - cfg = setup(args) - - for task in args.tasks: - { - "flop": do_flop, - "activation": do_activation, - "parameter": do_parameter, - "structure": do_structure, - }[task](cfg) diff --git a/spaces/hlydecker/RA-document-QAchat/streamlit_langchain_chat/inputs/__init__.py 
b/spaces/hlydecker/RA-document-QAchat/streamlit_langchain_chat/inputs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/chunks/singletons-d6c43dab.js b/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/chunks/singletons-d6c43dab.js deleted file mode 100644 index a4ec3c8d93aa6019c607f527fdf6d1b5efe64f1e..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/chunks/singletons-d6c43dab.js +++ /dev/null @@ -1 +0,0 @@ -import{A as l,s as g}from"./index-032ac624.js";const u=[];function b(e,s=l){let t;const a=new Set;function i(n){if(g(e,n)&&(e=n,t)){const c=!u.length;for(const r of a)r[1](),u.push(r,e);if(c){for(let r=0;r{a.delete(r),a.size===0&&(t(),t=null)}}return{set:i,update:f,subscribe:o}}let d="",p="";function U(e){d=e.base,p=e.assets||d}function w(e){let s=e.baseURI;if(!s){const t=e.getElementsByTagName("base");s=t.length?t[0].href:e.URL}return s}function R(){return{x:pageXOffset,y:pageYOffset}}function y(e){return e.composedPath().find(t=>t instanceof Node&&t.nodeName.toUpperCase()==="A")}function T(e){return e instanceof SVGAElement?new URL(e.href.baseVal,document.baseURI):new URL(e.href)}function h(e){const s=b(e);let t=!0;function a(){t=!0,s.update(o=>o)}function i(o){t=!1,s.set(o)}function f(o){let n;return s.subscribe(c=>{(n===void 0||t&&c!==n)&&o(n=c)})}return{notify:a,set:i,subscribe:f}}function _(){const{set:e,subscribe:s}=b(!1);let t;async function a(){clearTimeout(t);const i=await fetch(`${p}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(i.ok){const{version:f}=await i.json(),o=f!=="1666877460376";return o&&(e(!0),clearTimeout(t)),o}else throw new Error(`Version check failed: ${i.status}`)}return{subscribe:s,check:a}}function k(e){e.client}const q={url:h({}),page:h({}),navigating:b(null),updated:_()};export{T as a,R as b,U as c,y as f,w as g,k as i,q as s}; diff --git a/spaces/hysts/danbooru-pretrained/app.py b/spaces/hysts/danbooru-pretrained/app.py deleted file mode 100644 index fa7370676104c9a82248a0a09771255ad95a0cf6..0000000000000000000000000000000000000000 --- a/spaces/hysts/danbooru-pretrained/app.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import functools -import json -import os -import pathlib -import tarfile -from typing import Callable - -import gradio as gr -import huggingface_hub -import PIL.Image -import torch -import torchvision.transforms as T - -DESCRIPTION = '# [RF5/danbooru-pretrained](https://github.com/RF5/danbooru-pretrained)' - -MODEL_REPO = 'public-data/danbooru-pretrained' - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset') - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model(device: torch.device) -> torch.nn.Module: - path = huggingface_hub.hf_hub_download(MODEL_REPO, 'resnet50-13306192.pth') - state_dict = torch.load(path) - model = torch.hub.load('RF5/danbooru-pretrained', - 'resnet50', - pretrained=False) - model.load_state_dict(state_dict) - model.to(device) - model.eval() - return model - - -def load_labels() -> list[str]: - path = 
huggingface_hub.hf_hub_download(MODEL_REPO, 'class_names_6000.json') - with open(path) as f: - labels = json.load(f) - return labels - - -@torch.inference_mode() -def predict(image: PIL.Image.Image, score_threshold: float, - transform: Callable, device: torch.device, model: torch.nn.Module, - labels: list[str]) -> dict[str, float]: - data = transform(image) - data = data.to(device).unsqueeze(0) - preds = model(data)[0] - preds = torch.sigmoid(preds) - preds = preds.cpu().numpy().astype(float) - - res = dict() - for prob, label in zip(preds.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - return res - - -image_paths = load_sample_image_paths() -examples = [[path.as_posix(), 0.4] for path in image_paths] - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -model = load_model(device) -labels = load_labels() - -transform = T.Compose([ - T.Resize(360), - T.ToTensor(), - T.Normalize(mean=[0.7137, 0.6628, 0.6519], std=[0.2970, 0.3017, 0.2979]), -]) - -fn = functools.partial(predict, - transform=transform, - device=device, - model=model, - labels=labels) - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Row(): - with gr.Column(): - image = gr.Image(label='Input', type='pil') - threshold = gr.Slider(label='Score Threshold', - minimum=0, - maximum=1, - step=0.05, - value=0.4) - run_button = gr.Button('Run') - with gr.Column(): - result = gr.Label(label='Output') - - inputs = [image, threshold] - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=fn, - cache_examples=os.getenv('CACHE_EXAMPLES') == '1') - run_button.click(fn=fn, inputs=inputs, outputs=result, api_name='predict') -demo.queue(max_size=15).launch() diff --git a/spaces/hzwluoye/gpt4/g4f/__init__.py b/spaces/hzwluoye/gpt4/g4f/__init__.py deleted file mode 100644 index 5252688464f8a31edab800592ee5043b95a4e9df..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/g4f/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import sys -from . import Provider -from g4f.models import Model, ModelUtils - - -class ChatCompletion: - @staticmethod - def create(model: Model.model or str, messages: list, api_key: str = None, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and provider.needs_auth and not auth: - print( - f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." 
param)', file=sys.stderr) - sys.exit(1) - - try: - if isinstance(model, str): - try: - model = ModelUtils.convert[model] - except KeyError: - raise Exception(f'The model: {model} does not exist') - - engine = model.best_provider if not provider else provider - - if not engine.supports_stream and stream == True: - print( - f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr) - sys.exit(1) - - print(f'Using {engine.__name__} provider') - - return (engine._create_completion(api_key=api_key, model=model.name, messages=messages, stream=stream, **kwargs) - if stream else ''.join(engine._create_completion(api_key=api_key, model=model.name, messages=messages, stream=stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) diff --git a/spaces/imju/flower_detector/app.py b/spaces/imju/flower_detector/app.py deleted file mode 100644 index f0e6725c0e445d17e103f612cea211e406cbc28f..0000000000000000000000000000000000000000 --- a/spaces/imju/flower_detector/app.py +++ /dev/null @@ -1,18 +0,0 @@ - -from fastai.vision.all import * -import gradio as gr - -learn_inf = load_learner('./export.pkl') -labels = learn_inf.dls.vocab -def predict(img): - pred,pred_idx,probs = learn_inf.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Tulip/Rose/Daisy Flower Classifier" -description = "Tulip/Rose/Daisy flower classifier with fastai using Gradio and HuggingFace Spaces." -article="

    Blog post

    " -interpretation='default' -enable_queue=True - - -gr.Interface(fn=predict, inputs=gr.Image(shape=(512, 512)), outputs=gr.Label(num_top_classes=3), examples='samples').launch() \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Arbitrage Underdog Reloaded UPD Crack Site.md b/spaces/inamXcontru/PoeticTTS/Arbitrage Underdog Reloaded UPD Crack Site.md deleted file mode 100644 index 7a0ece60da0ce10fe3d82458c9e431eaad463e45..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Arbitrage Underdog Reloaded UPD Crack Site.md +++ /dev/null @@ -1,6 +0,0 @@ -

    arbitrage underdog reloaded crack site


    Download 🆓 https://gohhs.com/2uz3Pn



    -
    -Arbitrage Underdog RELOADED Pro Black Label Edition 5.0 Cracked – Free Download Crack ... Craigslist has a LOT of rules, due to spammers hitting their site. 1fdad05405
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/CyberLink PowerProducer Ultra 6.0.7613.0 Pre-crack BESTed Serial Key.md b/spaces/inamXcontru/PoeticTTS/CyberLink PowerProducer Ultra 6.0.7613.0 Pre-crack BESTed Serial Key.md deleted file mode 100644 index 2966ac9b747465a431d027ce64b7f8cce948b2d0..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/CyberLink PowerProducer Ultra 6.0.7613.0 Pre-crack BESTed Serial Key.md +++ /dev/null @@ -1,181 +0,0 @@ -
    -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key: A Complete Guide

    - -

    If you are looking for a powerful and easy-to-use software to create professional-quality Blu-ray and DVD discs, you might want to check out CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key. This software allows you to turn your photos and videos into stunning multimedia products with complete disc authoring tools, support for the latest media formats and fast rendering speed. In this article, we will show you how to download, install and use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key to create your own discs.

    - -

    How to Download CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    One of the advantages of CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key is that you don't need to worry about activation or patching, as the software comes with a pre-cracked serial key that works for any version of Windows from XP to 10. To download the software, you can use the following link:

    -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key


    Download File 🌟 https://gohhs.com/2uz5nL



    - -

    https://tinurli.com/282cb0

    - -

    This link will take you to a page where you can choose from different file hosting services to download the software, such as IntoUpload, upload-4ever or KolomBox. The file size is about 384 MB, so it might take some time depending on your internet speed.

    - -

    How to Install CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    After downloading the software, you need to unzip the file using a program like WinRAR or 7-Zip. You will find a folder named CyberLink PowerProducer Ultra 6.0.7613.0 + Crack [CracksNow] that contains the following files:

    - -
      -
    • Downloaded from CracksNow.com.txt
    • -
    • PowerProducer_6.0.7613.0_Ultra.exe
    • -
    • Read me.txt
    • -
    • Visit CracksNow.com.url
    • -
    • Visit SoupGet.com.url
    • -
    - -

    To install the software, you need to run the PowerProducer_6.0.7613.0_Ultra.exe file and follow the instructions on the screen. You can choose the language, destination folder and components to install.

    - -

    After the installation is complete, you need to run the CyberLink PowerProducer Ultra 6 Serial Key Generator 2015 file and click on Generate button to get a serial key for the software.

    -

    - -

    Then, you need to run the CyberLink PowerProducer Ultra 6 Activator file and wait for the process to finish.

    - -

    That's it! You have successfully installed CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key on your PC.

    - -

    How to Use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    To use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, you need to launch the software from your desktop or start menu shortcut.

    - -

    You will see a welcome screen that gives you four options:

    - -
      -
    • Create a Movie Disc: This option allows you to create Blu-ray or DVD discs with your videos and photos.
    • -
    • Create a Data Disc: This option allows you to create data discs with any files or folders.
    • -
    • Burn an Image File: This option allows you to burn an image file (ISO or BIN) to a disc.
    • -
    • Erase a Disc: This option allows you to erase a rewritable disc.
    • -
    - -

    To create a movie disc, you need to click on the first option and choose the type of disc you want to create: Blu-ray Disc, DVD-Video or AVCHD.

    - -

    Then, you need to add your photos and videos by clicking on Add Media button or dragging and dropping them from your computer.

    - -

    You can edit your photos and videos by clicking on Edit button or using the Magic Tools on the left side of the screen.

    - -

    You can also add music, titles, transitions and effects by using the tabs on the right side of the screen.

    - -

    When you are happy with your project, you can click on Next button to choose a menu template for your disc.

    - -

    You can customize your menu by changing the background, buttons, text and music.

    - -

    When you are done with your menu, you can click on Next button to preview your disc and make sure everything is OK.

    - -

    Finally, you can click on Burn button to start burning your disc.

    - -

    Conclusion

    - -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key is a great software for creating Blu-ray and DVD discs with your photos and videos.

    - -

    It has a user-friendly interface, complete disc authoring tools, support for the latest media formats and fast rendering speed.

    - -

    You can download, install and use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key by following our guide above.

    - -

    We hope this article was helpful for you and we wish you happy disc creation!

    -

    Benefits of CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key has many benefits that make it a great choice for anyone who wants to create Blu-ray and DVD discs with their photos and videos.

    - -

    Some of the benefits are:

    - -
      -
    • It supports a wide range of media formats, including Quicktime, AVI, MPG, DivX, XVID, H.264, XVID, ASF, WMV, VOB, RM, RMVB, FLV, MKV, DIVX, H.264, XVID and more.
    • -
    • It has a user-friendly interface that makes it easy to navigate and use.
    • -
    • It has complete disc authoring tools that allow you to create stylish disc menus, add titles, transitions and effects, and customize your disc layout.
    • -
    • It has Magic Tools that help you enhance your photos and videos with color correction, stabilization, red-eye removal and more.
    • -
    • It has a fast rendering speed that saves you time and ensures high-quality output.
    • -
    • It has social media integration that lets you upload your videos to YouTube or share them on Facebook and MySpace.
    • -
    • It comes with a pre-cracked serial key that works for any version of Windows from XP to 10.
    • -
    - -

    With CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, you can create amazing Blu-ray and DVD discs with your photos and videos in no time.

    - -

    Tips and Tricks for CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    To get the most out of CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, here are some tips and tricks that you can use:

    - -
      -
    • To add multiple photos or videos at once, you can select them from your computer and drag and drop them to the media panel.
    • -
    • To change the order of your photos or videos, you can drag and drop them in the timeline or use the arrow buttons on the toolbar.
    • -
    • To trim or crop your photos or videos, you can double-click on them in the timeline or use the Edit button on the toolbar.
    • -
    • To add music to your project, you can click on the Music tab on the right side of the screen and choose from the built-in tracks or add your own music files.
    • -
    • To adjust the volume of your music or video soundtracks, you can use the sliders on the audio mixer panel.
    • -
    • To add titles to your project, you can click on the Titles tab on the right side of the screen and choose from the built-in templates or create your own titles.
    • -
    • To add transitions or effects to your project, you can click on the Transitions or Effects tab on the right side of the screen and choose from the built-in options or download more from DirectorZone.com.
    • -
    • To preview your project before burning it to a disc, you can click on the Preview button on the toolbar or use the spacebar to play or pause.
    • -
    • To burn your project to a disc, you can click on the Burn button on the toolbar or use Ctrl+B shortcut key.
    • -
    - -

    These are some of the tips and tricks that you can use to make your CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key experience more enjoyable and productive.

    - -

    Conclusion

    - -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key is a great software for creating Blu-ray and DVD discs with your photos and videos.

    - -

    It has a user-friendly interface, complete disc authoring tools, support for the latest media formats and fast rendering speed.

    - -

    You can download, install and use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key by following our guide above.

    - -

    We hope this article was helpful for you and we wish you happy disc creation!

    -

    Frequently Asked Questions about CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key

    - -

    Here are some of the frequently asked questions about CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key that you might find useful:

    - -

    What are the system requirements for CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key?

    - -

    To run CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, you need to have the following system requirements:

    - -
      -
    • Operating System: Windows XP/Vista/7/8/8.1/10 (32-bit or 64-bit)
    • -
    • Processor: Intel Pentium 4 3.0 GHz or AMD Athlon 64 X2 or above
    • -
    • Memory: 2 GB RAM or above
    • -
    • Hard Disk Space: 5 GB for product installation
    • -
    • Display Device: HDCP compliant display for Blu-ray playback
    • -
    • Graphics Card: NVIDIA GeForce 7600 GT or ATI X1600 series or above
    • -
    • Sound Card: PCI sound card or on-board audio output
    • -
    • Optical Drive: DVD burner (DVD+R/RW or DVD-R/RW) for DVD output; Blu-ray Disc recordable drive for Blu-ray Disc output
    • -
    • Internet Connection: Required for online services and activation
    • -
    - -

    How to update CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key?

    - -

    To update CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, you can follow these steps:

    - -
      -
    1. Launch the software and click on the Help menu.
    2. -
    3. Select Check for Updates from the drop-down menu.
    4. -
    5. If there are any updates available, you will see a notification window.
    6. -
    7. Click on Download Now to download and install the updates.
    8. -
    9. Restart the software to apply the updates.
    10. -
    - -

    How to uninstall CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key?

    - -

    To uninstall CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key, you can follow these steps:

    - -
      -
    1. Go to Start menu and click on Control Panel.
    2. -
    3. Select Programs and Features or Add or Remove Programs.
    4. -
    5. Find CyberLink PowerProducer Ultra 6 in the list of installed programs and click on Uninstall or Change/Remove.
    6. -
    7. Follow the instructions on the screen to complete the uninstallation process.
    8. -
    - -

    Conclusion

    - -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key is a great software for creating Blu-ray and DVD discs with your photos and videos.

    - -

    It has a user-friendly interface, complete disc authoring tools, support for the latest media formats and fast rendering speed.

    - -

    You can download, install and use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key by following our guide above.

    - -

    We hope this article was helpful for you and we wish you happy disc creation!

    -

    Conclusion

    - -

    CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key is a great software for creating Blu-ray and DVD discs with your photos and videos.

    - -

    It has a user-friendly interface, complete disc authoring tools, support for the latest media formats and fast rendering speed.

    - -

    You can download, install and use CyberLink PowerProducer Ultra 6.0.7613.0 Pre-Cracked Serial Key by following our guide above.

    - -

    We hope this article was helpful for you and we wish you happy disc creation!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/2 States 2 Full Movie In Hindi 720p Torrent.md b/spaces/inplisQlawa/anything-midjourney-v4-1/2 States 2 Full Movie In Hindi 720p Torrent.md deleted file mode 100644 index bc462796f9416f4c7b919582b250e1c5a48dd246..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/2 States 2 Full Movie In Hindi 720p Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    2 States 2 full movie in hindi 720p torrent


    Downloadhttps://urlin.us/2uExZK



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CC 2020 WORK Crack Serial Key.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CC 2020 WORK Crack Serial Key.md deleted file mode 100644 index 8a644de652da45c62daf26de1a6e139efe4ccc58..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Photoshop CC 2020 WORK Crack Serial Key.md +++ /dev/null @@ -1,89 +0,0 @@ -
    -

    Adobe Photoshop CC 2020 Crack Serial Key: What You Need to Know

    - -

    Adobe Photoshop CC 2020 is one of the most popular and powerful image editing software in the world. It offers a wide range of features and tools to help you create stunning photos, graphics, designs, animations, and more. However, to use Adobe Photoshop CC 2020, you need to have a valid serial key that activates the full version of the software. Otherwise, you will only be able to use the free trial version for a limited time.

    - -

    Some people may try to find Adobe Photoshop CC 2020 crack serial key online, which is a code that bypasses the activation process and allows you to use the software for free. However, this is not a safe or legal way of using Adobe Photoshop CC 2020, as it can expose your computer to various risks and consequences. In this article, we will explain why you should avoid using Adobe Photoshop CC 2020 crack serial key, and how you can download and install Adobe Photoshop CC 2020 legally and safely.

    -

    Adobe Photoshop CC 2020 Crack Serial Key


    Download »»» https://urlin.us/2uEwA1



    - -

    Why You Should Avoid Using Adobe Photoshop CC 2020 Crack Serial Key

    - -

    Using Adobe Photoshop CC 2020 crack serial key may seem like a tempting option to save money and get access to all the features and functions of the software. However, there are many reasons why you should avoid using it, such as:

    - -
      -
    • It is illegal. Using Adobe Photoshop CC 2020 crack serial key violates the terms and conditions of Adobe, which prohibit the unauthorized use or distribution of their software. You may face legal actions from Adobe or other authorities if you are caught using it.
    • -
    • It is risky. Using Adobe Photoshop CC 2020 crack serial key can also expose your computer to various threats, such as malware, viruses, spyware, adware, ransomware, etc. These malicious programs can harm your system, steal your personal data, encrypt your files, or even take control of your device.
    • -
    • It is unreliable. Using Adobe Photoshop CC 2020 crack serial key can also affect the performance and quality of the software. You may experience errors, crashes, glitches, or compatibility issues with other programs or devices. You may also miss out on the latest updates, patches, or features that Adobe releases for their software.
    • -
    - -

    Therefore, using Adobe Photoshop CC 2020 crack serial key is not worth the risk or trouble. You may end up losing more than you gain by using it.

    - -

    How to Download and Install Adobe Photoshop CC 2020 Legally and Safely

    - -

    The best way to download and install Adobe Photoshop CC 2020 is to use the official website of Adobe. Here are the steps to follow:

    - -
      -
    1. Go to https://www.adobe.com/products/photoshop.html and click on the "Free Trial" button.
    2. -
    3. Create an Adobe account or sign in with your existing one.
    4. -
    5. Download the Adobe Creative Cloud app and install it on your computer.
    6. -
    7. Launch the Adobe Creative Cloud app and sign in with your Adobe account.
    8. -
    9. Select "Photoshop" from the list of apps and click on the "Install" button.
    10. -
    11. Wait for the installation to complete and launch Adobe Photoshop CC 2020 from the app or from your desktop.
    12. -
    - -

    You can use Adobe Photoshop CC 2020 for free for 7 days. After that, you will need to purchase a subscription plan to continue using it. You can choose from different plans depending on your needs and budget. You can also cancel your subscription anytime.

    - -

    How to Use Adobe Photoshop CC 2020 Effectively and Creatively

    - -

    Adobe Photoshop CC 2020 is a powerful and versatile image editing software that can help you create amazing images for various purposes. Whether you are a beginner or a professional, you can use Adobe Photoshop CC 2020 to enhance your photos,design graphics,create animations,edit videos,and more. Here are some tips and tricks on how to use Adobe Photoshop CC 2020 effectively and creatively:

    - -
      -
    • Explore the new features and improvements of Adobe Photoshop CC 2020, such as the Object Selection tool, the Content-Aware Fill workspace, the Enhanced Transform Warp tool, the Color Wheel, and more.
    • -
    • Use presets to apply different effects and styles to your images quickly and easily. You can also create your own presets or download presets from other users online.
    • -
    • Use layers to organize your work and edit different parts of your image separately. You can also use layer masks, adjustment layers, smart objects, layer styles, and blending modes to enhance your images.
    • -
    • Use brushes to paint with different colors, textures, shapes, and effects. You can also customize your brushes or download brushes from other users online.
    • -
    • Use filters to apply different artistic effects to your images. You can also use smart filters to apply filters non-destructively and edit them later.
    • -
    • Use tools such as crop, clone stamp, healing brush, spot healing brush, patch tool, content-aware move tool, liquify tool, perspective warp tool, puppet warp tool, etc. to adjust and manipulate your images.
    • -
    • Use tools such as pen tool, shape tool, text tool, path selection tool, direct selection tool, etc. to create vector graphics and text.
    • -
    • Use tools such as frame tool, artboard tool, slice tool, etc. to create layouts. -
    • Enter your serial key in the box and click on "Activate".
    • -
    • Wait for the activation process to complete and restart Adobe Photoshop CC 2020.
    • -
- -

You can now enjoy all the features and functions of Adobe Photoshop CC 2020 without any limitations. You can also check your subscription status and manage your account from the Adobe Creative Cloud app.

- -

How to Update Adobe Photoshop CC 2020 to the Latest Version

- -

Adobe Photoshop CC 2020 is constantly updated with new features, improvements, bug fixes, and security patches. To keep your software up to date and secure, you should always install the latest updates as soon as they are available. Here are the steps to follow:

- -
    -
  1. Launch the Adobe Creative Cloud app and sign in with your Adobe account.
  2. -
  3. Select "Photoshop" from the list of apps and click on the "Update" button.
  4. -
  5. Wait for the update to download and install.
  6. -
  7. Restart Adobe Photoshop CC 2020 and enjoy the new features and improvements.
  8. -
- -

You can also enable automatic updates from the Adobe Creative Cloud app settings, so you don't have to worry about missing any updates.

- -

How to Uninstall Adobe Photoshop CC 2020 from Your Computer

- -

If you want to uninstall Adobe Photoshop CC 2020 from your computer, you can do so easily and safely. Here are the steps to follow:

- -
    -
  1. Launch the Adobe Creative Cloud app and sign in with your Adobe account.
  2. -
  3. Select "Photoshop" from the list of apps and click on the "Uninstall" button.
  4. -
  5. Follow the instructions on the screen to complete the uninstallation process.
  6. -
  7. Delete any leftover files or folders related to Adobe Photoshop CC 2020 from your computer.
  8. -
- -

You can also cancel your subscription plan from the Adobe Creative Cloud app or from your Adobe account online.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Aridi Vector Clipart Collection REPACK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Aridi Vector Clipart Collection REPACK.md deleted file mode 100644 index 047a8cba73e8257f50f153878109ab37cbfb357e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Aridi Vector Clipart Collection REPACK.md +++ /dev/null @@ -1,23 +0,0 @@ - -

Aridi Vector Clipart Collection: A Treasure Trove of Design Elements

-

If you are looking for a rich and diverse collection of vector clipart for your design projects, you might want to check out the Aridi Vector Clipart Collection. This collection contains 37 CDs of high-quality vector graphics that cover a wide range of themes and styles, from ornamental borders and frames, to floral motifs and patterns, to historical and cultural icons and symbols.

-

Aridi Vector Clipart Collection


Download Zip ————— https://urlin.us/2uEwaT



-

What is vector clipart? Vector clipart is a type of digital image that is composed of mathematical shapes and curves, rather than pixels. This means that vector images can be scaled up or down without losing quality or clarity. Vector clipart is ideal for creating logos, icons, illustrations, posters, flyers, banners, and other graphics that require crisp and smooth edges.
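To make the difference concrete, here is a minimal illustrative Python sketch (not taken from the article; the triangle and the tiny bitmap are invented example data). Scaling vector data only multiplies the stored coordinates, so the shape stays perfectly crisp at any size, while naively upscaling raster pixels just duplicates them and produces blocky edges.

```python
# Illustrative sketch only: contrasts vector scaling with naive raster upscaling.
# The triangle and the 2x2 bitmap below are made-up example data.

def scale_vector(points, factor):
    """Scale a vector shape by multiplying its coordinates (lossless)."""
    return [(x * factor, y * factor) for x, y in points]

def scale_raster(pixels, factor):
    """Nearest-neighbour upscale of a raster image: pixels are duplicated,
    so curves and edges become blocky instead of staying smooth."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in pixels
        for _ in range(factor)
    ]

triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # three anchor points
print(scale_vector(triangle, 10))                # same crisp shape, 10x larger

bitmap = [[0, 1], [1, 0]]                        # tiny 2x2 raster "image"
for row in scale_raster(bitmap, 2):              # 4x4 result, visibly blocky
    print(row)
```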

-

What is the Aridi Vector Clipart Collection? The Aridi Vector Clipart Collection is a product of Aridi Graphics, a company founded by Marwan Aridi, a Lebanese-American artist and designer who has been creating vector art since 1988. Aridi's work is inspired by various historical and cultural sources, such as medieval manuscripts, Islamic art, Celtic art, Art Nouveau, Art Deco, and more. His collection features over 12,000 vector images that can be used for personal or commercial purposes.

-

What are the benefits of using the Aridi Vector Clipart Collection? The Aridi Vector Clipart Collection offers many benefits for designers who want to add some flair and elegance to their projects. Some of the benefits are:

-
    -
  • The collection is compatible with most graphic software programs, such as Adobe Illustrator, CorelDRAW, Inkscape, and more.
  • -
  • The collection is easy to use and customize. You can change the colors, sizes, shapes, and orientations of the vector images to suit your needs.
  • -
  • The collection is versatile and adaptable. You can use the vector images for various types of projects, such as web design, print design, embroidery design, engraving design, and more.
  • -
  • The collection is affordable and accessible. You can download the entire collection for free from the Internet Archive[^1^], or purchase individual CDs or bundles from the Aridi Graphics website[^2^].
  • -
-

If you are looking for a way to spice up your design projects with some stunning and unique vector clipart, you should definitely give the Aridi Vector Clipart Collection a try. You will be amazed by the variety and quality of the vector images that you can find in this collection.

How to use the Aridi Vector Clipart Collection? If you are wondering how to use the Aridi Vector Clipart Collection for your design projects, here are some tips and tricks that can help you get started.

-
    -
  1. Choose the right software. To work with vector clipart, you need a software program that can handle vector graphics, such as Adobe Illustrator[^1^], CorelDRAW, Inkscape, or others. These programs allow you to open, edit, and export vector files in various formats.
  2. -
  3. Import or open the vector clipart. Depending on whether you want to use the vector clipart as a part of an existing design or as a standalone image, you can either import it or open it in your software. To import a vector clipart, go to File > Place (or Ctrl + Shift + P) and select the file you want to use[^2^]. To open a vector clipart, go to File > Open (or Ctrl + O) and choose the file you want to work with[^2^]. If you downloaded the Aridi Vector Clipart Collection from the Internet Archive[^1^], you will need to unzip the files first.
  4. -
  5. Customize and modify the vector clipart. Once you have imported or opened the vector clipart, you can customize and modify it according to your needs. You can change the colors, sizes, shapes, and orientations of the vector elements using various tools and commands in your software. You can also combine different vector cliparts to create new designs.
  6. -
  7. Export or save the vector clipart. When you are done with your design, you can export or save the vector clipart in a format that suits your purpose. For example, if you want to use it for web design, you can export it as an SVG file. If you want to use it for print design, you can save it as an EPS or PDF file. You can also convert it to a raster image if needed, as shown in the short sketch after this list.
  8. -
-
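The raster-conversion step in item 7 can also be scripted. Below is a brief, hedged sketch of one way to do it in Python, assuming the third-party cairosvg package is installed and that a file named ornament.svg exists; both the package choice and the file names are illustrative assumptions, not part of the collection or the original article.

```python
# Illustrative sketch: convert an SVG clipart file to a high-resolution PNG.
# Assumes `pip install cairosvg` and an input file called ornament.svg (hypothetical name).
import cairosvg

cairosvg.svg2png(
    url="ornament.svg",       # input vector file
    write_to="ornament.png",  # output raster file
    scale=4.0,                # render at 4x the SVG's nominal size
)
```

Because the source is vector data, you can pick any output scale without losing sharpness; only the final PNG is resolution-dependent.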

The Aridi Vector Clipart Collection is a great resource for designers who want to create stunning and unique graphics with ease. By following these simple steps, you can use the Aridi Vector Clipart Collection for any project you have in mind.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel PaintShop Pro 2019 Ultimate 21.1.0.8 Keygen [CracksMind] 64 Bit NEW!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Corel PaintShop Pro 2019 Ultimate 21.1.0.8 Keygen [CracksMind] 64 Bit NEW!.md deleted file mode 100644 index 2932788fd55270fab6cdede262616aea7e7ba11f..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Corel PaintShop Pro 2019 Ultimate 21.1.0.8 Keygen [CracksMind] 64 Bit NEW!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Corel PaintShop Pro 2019 Ultimate 21.1.0.8 Keygen [CracksMind] 64 Bit


Download Ziphttps://urlin.us/2uEyj5



- -September 19, 2019 by J. The sounds are selected from the vArranger ... Roland JP-08 ... Corel PaintShop Pro 2019 Ultimate 21.1.0.8 Keygen [CracksMind] crack ... Adobe Illustrator CC 2018 19.0.0 (64-Bit) Crack Serial Key 1fdad05405
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Mot De Passe Site Internet !FREE!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Mot De Passe Site Internet !FREE!.md deleted file mode 100644 index bb32ab0dadfaf16255eb0557efcdd787ad5fbacc..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Mot De Passe Site Internet !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Crack Mot De Passe Site Internet


DOWNLOADhttps://urlin.us/2uEw2s



-
-Fatigué d'oublier les mots de passe? Ce carnet est parfait pour conserver le nom de votre site Web en un seul endroit. Vous pouvez aussi noter votre nom ... en un pouce de lintr-temps. Lintr-temps est difficile ; la mémoire vous permets; nos derniers clavions feront votre nom. Vous pouvez nous contacter quelque chose sur votre nom. A cesser de langobards dans le chatons et de la couleur d'aucun de ces coups de pelouses, vous pourrez ajouter : • "La petite pupil" avec 8a78ff9644
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Minions (English) Movie !EXCLUSIVE! Free Download In Hindi Mp4 !EXCLUSIVE! Free.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Minions (English) Movie !EXCLUSIVE! Free Download In Hindi Mp4 !EXCLUSIVE! Free.md deleted file mode 100644 index 674f1eb51b972360ca7544bcd49772e21db11ca9..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Minions (English) Movie !EXCLUSIVE! Free Download In Hindi Mp4 !EXCLUSIVE! Free.md +++ /dev/null @@ -1,8 +0,0 @@ -

Minions (English) movie free download in hindi mp4 free


Download File ———>>> https://urlin.us/2uEwLT



- -The Minions [Hindi] (2015) - full movie download - The Minions [Hindi] (2015) - full movie download - скачать бесплатно The Minions [Hindi] (2015) - full movie download - скачать бесплатно на iphone, iphone, ipad, ipod, компенсационные флеш-менеджеры и подключения - -The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) - Full Movie Online,Download The Minions [Hindi] (2015) 4fefd39f24
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Die Siedler Aufbruch Der Kulturen Crack Download __TOP__.md b/spaces/inreVtussa/clothingai/Examples/Die Siedler Aufbruch Der Kulturen Crack Download __TOP__.md deleted file mode 100644 index 45d7d0ca04ce7c89c2224fa07dd0285abaa25cf5..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Die Siedler Aufbruch Der Kulturen Crack Download __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

die siedler aufbruch der kulturen crack download


Download ✸✸✸ https://tiurll.com/2uClRU



-
-Picktorrent: welt im aufbruch - Free Search and Download Torrents at search engine. ... Visit MAIN N E T W O R K Die Siedler Aufbruch der Kulturen System Language Protection ... Cutlist plus crack keygen patch download. 1fdad05405
-
-
-

diff --git a/spaces/introduck/introduck/README.md b/spaces/introduck/introduck/README.md deleted file mode 100644 index dcd2cb82c187414d599d70d7af856c277cd450c1..0000000000000000000000000000000000000000 --- a/spaces/introduck/introduck/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Introduck -emoji: 🦆 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.4 -python_version: 3.10.4 -app_file: main_subprocess.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/legacy.py b/spaces/james-oldfield/PandA/networks/stylegan3/legacy.py deleted file mode 100644 index 8cf53cb9396a639261bbcadb4e264e39415c1a56..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/legacy.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Converting legacy network pickle into the new format.""" - -import click -import pickle -import re -import copy -import numpy as np -import torch -import dnnlib -from torch_utils import misc - -#---------------------------------------------------------------------------- - -def load_network_pkl(f, force_fp16=False): - data = _LegacyUnpickler(f).load() - - # Legacy TensorFlow pickle => convert. - if isinstance(data, tuple) and len(data) == 3 and all(isinstance(net, _TFNetworkStub) for net in data): - tf_G, tf_D, tf_Gs = data - G = convert_tf_generator(tf_G) - D = convert_tf_discriminator(tf_D) - G_ema = convert_tf_generator(tf_Gs) - data = dict(G=G, D=D, G_ema=G_ema) - - # Add missing fields. - if 'training_set_kwargs' not in data: - data['training_set_kwargs'] = None - if 'augment_pipe' not in data: - data['augment_pipe'] = None - - # Validate contents. - assert isinstance(data['G'], torch.nn.Module) - assert isinstance(data['D'], torch.nn.Module) - assert isinstance(data['G_ema'], torch.nn.Module) - assert isinstance(data['training_set_kwargs'], (dict, type(None))) - assert isinstance(data['augment_pipe'], (torch.nn.Module, type(None))) - - # Force FP16. 
- if force_fp16: - for key in ['G', 'D', 'G_ema']: - old = data[key] - kwargs = copy.deepcopy(old.init_kwargs) - fp16_kwargs = kwargs.get('synthesis_kwargs', kwargs) - fp16_kwargs.num_fp16_res = 4 - fp16_kwargs.conv_clamp = 256 - if kwargs != old.init_kwargs: - new = type(old)(**kwargs).eval().requires_grad_(False) - misc.copy_params_and_buffers(old, new, require_all=True) - data[key] = new - return data - -#---------------------------------------------------------------------------- - -class _TFNetworkStub(dnnlib.EasyDict): - pass - -class _LegacyUnpickler(pickle.Unpickler): - def find_class(self, module, name): - if module == 'dnnlib.tflib.network' and name == 'Network': - return _TFNetworkStub - return super().find_class(module, name) - -#---------------------------------------------------------------------------- - -def _collect_tf_params(tf_net): - # pylint: disable=protected-access - tf_params = dict() - def recurse(prefix, tf_net): - for name, value in tf_net.variables: - tf_params[prefix + name] = value - for name, comp in tf_net.components.items(): - recurse(prefix + name + '/', comp) - recurse('', tf_net) - return tf_params - -#---------------------------------------------------------------------------- - -def _populate_module_params(module, *patterns): - for name, tensor in misc.named_params_and_buffers(module): - found = False - value = None - for pattern, value_fn in zip(patterns[0::2], patterns[1::2]): - match = re.fullmatch(pattern, name) - if match: - found = True - if value_fn is not None: - value = value_fn(*match.groups()) - break - try: - assert found - if value is not None: - tensor.copy_(torch.from_numpy(np.array(value))) - except: - print(name, list(tensor.shape)) - raise - -#---------------------------------------------------------------------------- - -def convert_tf_generator(tf_G): - if tf_G.version < 4: - raise ValueError('TensorFlow pickle version too low') - - # Collect kwargs. - tf_kwargs = tf_G.static_kwargs - known_kwargs = set() - def kwarg(tf_name, default=None, none=None): - known_kwargs.add(tf_name) - val = tf_kwargs.get(tf_name, default) - return val if val is not None else none - - # Convert kwargs. - from training import networks_stylegan2 - network_class = networks_stylegan2.Generator - kwargs = dnnlib.EasyDict( - z_dim = kwarg('latent_size', 512), - c_dim = kwarg('label_size', 0), - w_dim = kwarg('dlatent_size', 512), - img_resolution = kwarg('resolution', 1024), - img_channels = kwarg('num_channels', 3), - channel_base = kwarg('fmap_base', 16384) * 2, - channel_max = kwarg('fmap_max', 512), - num_fp16_res = kwarg('num_fp16_res', 0), - conv_clamp = kwarg('conv_clamp', None), - architecture = kwarg('architecture', 'skip'), - resample_filter = kwarg('resample_kernel', [1,3,3,1]), - use_noise = kwarg('use_noise', True), - activation = kwarg('nonlinearity', 'lrelu'), - mapping_kwargs = dnnlib.EasyDict( - num_layers = kwarg('mapping_layers', 8), - embed_features = kwarg('label_fmaps', None), - layer_features = kwarg('mapping_fmaps', None), - activation = kwarg('mapping_nonlinearity', 'lrelu'), - lr_multiplier = kwarg('mapping_lrmul', 0.01), - w_avg_beta = kwarg('w_avg_beta', 0.995, none=1), - ), - ) - - # Check for unknown kwargs. 
- kwarg('truncation_psi') - kwarg('truncation_cutoff') - kwarg('style_mixing_prob') - kwarg('structure') - kwarg('conditioning') - kwarg('fused_modconv') - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_G) - for name, value in list(tf_params.items()): - match = re.fullmatch(r'ToRGB_lod(\d+)/(.*)', name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f'{r}x{r}/ToRGB/{match.group(2)}'] = value - kwargs.synthesis.kwargs.architecture = 'orig' - #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. - G = network_class(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - # pylint: disable=f-string-without-interpolation - _populate_module_params(G, - r'mapping\.w_avg', lambda: tf_params[f'dlatent_avg'], - r'mapping\.embed\.weight', lambda: tf_params[f'mapping/LabelEmbed/weight'].transpose(), - r'mapping\.embed\.bias', lambda: tf_params[f'mapping/LabelEmbed/bias'], - r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'mapping/Dense{i}/weight'].transpose(), - r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'mapping/Dense{i}/bias'], - r'synthesis\.b4\.const', lambda: tf_params[f'synthesis/4x4/Const/const'][0], - r'synthesis\.b4\.conv1\.weight', lambda: tf_params[f'synthesis/4x4/Conv/weight'].transpose(3, 2, 0, 1), - r'synthesis\.b4\.conv1\.bias', lambda: tf_params[f'synthesis/4x4/Conv/bias'], - r'synthesis\.b4\.conv1\.noise_const', lambda: tf_params[f'synthesis/noise0'][0, 0], - r'synthesis\.b4\.conv1\.noise_strength', lambda: tf_params[f'synthesis/4x4/Conv/noise_strength'], - r'synthesis\.b4\.conv1\.affine\.weight', lambda: tf_params[f'synthesis/4x4/Conv/mod_weight'].transpose(), - r'synthesis\.b4\.conv1\.affine\.bias', lambda: tf_params[f'synthesis/4x4/Conv/mod_bias'] + 1, - r'synthesis\.b(\d+)\.conv0\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/weight'][::-1, ::-1].transpose(3, 2, 0, 1), - r'synthesis\.b(\d+)\.conv0\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/bias'], - r'synthesis\.b(\d+)\.conv0\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-5}'][0, 0], - r'synthesis\.b(\d+)\.conv0\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/noise_strength'], - r'synthesis\.b(\d+)\.conv0\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_weight'].transpose(), - r'synthesis\.b(\d+)\.conv0\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_bias'] + 1, - r'synthesis\.b(\d+)\.conv1\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/weight'].transpose(3, 2, 0, 1), - r'synthesis\.b(\d+)\.conv1\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/bias'], - r'synthesis\.b(\d+)\.conv1\.noise_const', lambda r: tf_params[f'synthesis/noise{int(np.log2(int(r)))*2-4}'][0, 0], - r'synthesis\.b(\d+)\.conv1\.noise_strength', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/noise_strength'], - r'synthesis\.b(\d+)\.conv1\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_weight'].transpose(), - r'synthesis\.b(\d+)\.conv1\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_bias'] + 1, - r'synthesis\.b(\d+)\.torgb\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/weight'].transpose(3, 2, 0, 1), - r'synthesis\.b(\d+)\.torgb\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/bias'], - 
r'synthesis\.b(\d+)\.torgb\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_weight'].transpose(), - r'synthesis\.b(\d+)\.torgb\.affine\.bias', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_bias'] + 1, - r'synthesis\.b(\d+)\.skip\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Skip/weight'][::-1, ::-1].transpose(3, 2, 0, 1), - r'.*\.resample_filter', None, - r'.*\.act_filter', None, - ) - return G - -#---------------------------------------------------------------------------- - -def convert_tf_discriminator(tf_D): - if tf_D.version < 4: - raise ValueError('TensorFlow pickle version too low') - - # Collect kwargs. - tf_kwargs = tf_D.static_kwargs - known_kwargs = set() - def kwarg(tf_name, default=None): - known_kwargs.add(tf_name) - return tf_kwargs.get(tf_name, default) - - # Convert kwargs. - kwargs = dnnlib.EasyDict( - c_dim = kwarg('label_size', 0), - img_resolution = kwarg('resolution', 1024), - img_channels = kwarg('num_channels', 3), - architecture = kwarg('architecture', 'resnet'), - channel_base = kwarg('fmap_base', 16384) * 2, - channel_max = kwarg('fmap_max', 512), - num_fp16_res = kwarg('num_fp16_res', 0), - conv_clamp = kwarg('conv_clamp', None), - cmap_dim = kwarg('mapping_fmaps', None), - block_kwargs = dnnlib.EasyDict( - activation = kwarg('nonlinearity', 'lrelu'), - resample_filter = kwarg('resample_kernel', [1,3,3,1]), - freeze_layers = kwarg('freeze_layers', 0), - ), - mapping_kwargs = dnnlib.EasyDict( - num_layers = kwarg('mapping_layers', 0), - embed_features = kwarg('mapping_fmaps', None), - layer_features = kwarg('mapping_fmaps', None), - activation = kwarg('nonlinearity', 'lrelu'), - lr_multiplier = kwarg('mapping_lrmul', 0.1), - ), - epilogue_kwargs = dnnlib.EasyDict( - mbstd_group_size = kwarg('mbstd_group_size', None), - mbstd_num_channels = kwarg('mbstd_num_features', 1), - activation = kwarg('nonlinearity', 'lrelu'), - ), - ) - - # Check for unknown kwargs. - kwarg('structure') - kwarg('conditioning') - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_D) - for name, value in list(tf_params.items()): - match = re.fullmatch(r'FromRGB_lod(\d+)/(.*)', name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f'{r}x{r}/FromRGB/{match.group(2)}'] = value - kwargs.architecture = 'orig' - #for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. 
- from training import networks_stylegan2 - D = networks_stylegan2.Discriminator(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - # pylint: disable=f-string-without-interpolation - _populate_module_params(D, - r'b(\d+)\.fromrgb\.weight', lambda r: tf_params[f'{r}x{r}/FromRGB/weight'].transpose(3, 2, 0, 1), - r'b(\d+)\.fromrgb\.bias', lambda r: tf_params[f'{r}x{r}/FromRGB/bias'], - r'b(\d+)\.conv(\d+)\.weight', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/weight'].transpose(3, 2, 0, 1), - r'b(\d+)\.conv(\d+)\.bias', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/bias'], - r'b(\d+)\.skip\.weight', lambda r: tf_params[f'{r}x{r}/Skip/weight'].transpose(3, 2, 0, 1), - r'mapping\.embed\.weight', lambda: tf_params[f'LabelEmbed/weight'].transpose(), - r'mapping\.embed\.bias', lambda: tf_params[f'LabelEmbed/bias'], - r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'Mapping{i}/weight'].transpose(), - r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'Mapping{i}/bias'], - r'b4\.conv\.weight', lambda: tf_params[f'4x4/Conv/weight'].transpose(3, 2, 0, 1), - r'b4\.conv\.bias', lambda: tf_params[f'4x4/Conv/bias'], - r'b4\.fc\.weight', lambda: tf_params[f'4x4/Dense0/weight'].transpose(), - r'b4\.fc\.bias', lambda: tf_params[f'4x4/Dense0/bias'], - r'b4\.out\.weight', lambda: tf_params[f'Output/weight'].transpose(), - r'b4\.out\.bias', lambda: tf_params[f'Output/bias'], - r'.*\.resample_filter', None, - ) - return D - -#---------------------------------------------------------------------------- - -@click.command() -@click.option('--source', help='Input pickle', required=True, metavar='PATH') -@click.option('--dest', help='Output pickle', required=True, metavar='PATH') -@click.option('--force-fp16', help='Force the networks to use FP16', type=bool, default=False, metavar='BOOL', show_default=True) -def convert_network_pickle(source, dest, force_fp16): - """Convert legacy network pickle into the native PyTorch format. - - The tool is able to load the main network configurations exported using the TensorFlow version of StyleGAN2 or StyleGAN2-ADA. - It does not support e.g. StyleGAN2-ADA comparison methods, StyleGAN2 configs A-D, or StyleGAN1 networks. 
- - Example: - - \b - python legacy.py \\ - --source=https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl \\ - --dest=stylegan2-cat-config-f.pkl - """ - print(f'Loading "{source}"...') - with dnnlib.util.open_url(source) as f: - data = load_network_pkl(f, force_fp16=force_fp16) - print(f'Saving "{dest}"...') - with open(dest, 'wb') as f: - pickle.dump(data, f) - print('Done.') - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - convert_network_pickle() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/store.ts b/spaces/jbilcke-hf/VideoQuest/src/app/store.ts deleted file mode 100644 index 9964a4bc8f2a401adfa81c39c100f83c84e79751..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/store.ts +++ /dev/null @@ -1,10 +0,0 @@ -"use client" - -import { InventoryItem } from "../types" - -// could also be Zustand or something -export const store: { - currentlyDraggedItem?: InventoryItem -} = { - currentlyDraggedItem: undefined -} diff --git a/spaces/jbilcke-hf/upscaling-server/app.py b/spaces/jbilcke-hf/upscaling-server/app.py deleted file mode 100644 index a24f5025784d791ee6436c78651a55983562a6aa..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/upscaling-server/app.py +++ /dev/null @@ -1,226 +0,0 @@ -import gradio as gr -import cv2 -import numpy -import os -import random -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -last_file = None -img_mode = "RGBA" -SECRET_TOKEN = os.getenv('SECRET_TOKEN', 'default_secret') - -def realesrgan(secret_token, img, model_name, denoise_strength, face_enhance, outscale): - """Real-ESRGAN function to restore (and upscale) images. - """ - if secret_token != SECRET_TOKEN: - raise gr.Error( - f'Invalid secret token. 
Please fork the original space if you want to use it for yourself.') - - if not img: - return - - # Define model parameters - if model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'] - elif model_name == 'RealESRNet_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth'] - elif model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'] - elif model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'] - elif model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - netscale = 4 - file_url = [ - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth', - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth' - ] - - # Determine model paths - model_path = os.path.join('weights', model_name + '.pth') - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None) - - # Use dni to control the denoise strength - dni_weight = None - if model_name == 'realesr-general-x4v3' and denoise_strength != 1: - wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3') - model_path = [model_path, wdn_model_path] - dni_weight = [denoise_strength, 1 - denoise_strength] - - # Restorer Class - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=0, - tile_pad=10, - pre_pad=10, - half=False, - gpu_id=None - ) - - # Use GFPGAN for face enhancement - if face_enhance: - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - - # Convert the input PIL image to cv2 image, so that it can be processed by realesrgan - cv_img = numpy.array(img) - img = cv2.cvtColor(cv_img, cv2.COLOR_RGBA2BGRA) - - # Apply restoration - try: - if face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - # Save restored image and return it to the output Image component - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: 
- extension = 'jpg' - - out_filename = f"output_{rnd_string(8)}.{extension}" - cv2.imwrite(out_filename, output) - global last_file - last_file = out_filename - return out_filename - - -def rnd_string(x): - """Returns a string of 'x' random characters - """ - characters = "abcdefghijklmnopqrstuvwxyz_0123456789" - result = "".join((random.choice(characters)) for i in range(x)) - return result - - -def reset(): - """Resets the Image components of the Gradio interface and deletes - the last processed image - """ - global last_file - if last_file: - print(f"Deleting {last_file} ...") - os.remove(last_file) - last_file = None - return gr.update(value=None), gr.update(value=None) - - -def has_transparency(img): - """This function works by first checking to see if a "transparency" property is defined - in the image's info -- if so, we return "True". Then, if the image is using indexed colors - (such as in GIFs), it gets the index of the transparent color in the palette - (img.info.get("transparency", -1)) and checks if it's used anywhere in the canvas - (img.getcolors()). If the image is in RGBA mode, then presumably it has transparency in - it, but it double-checks by getting the minimum and maximum values of every color channel - (img.getextrema()), and checks if the alpha channel's smallest value falls below 255. - https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent - """ - if img.info.get("transparency", None) is not None: - return True - if img.mode == "P": - transparent = img.info.get("transparency", -1) - for _, index in img.getcolors(): - if index == transparent: - return True - elif img.mode == "RGBA": - extrema = img.getextrema() - if extrema[3][0] < 255: - return True - return False - - -def image_properties(img): - """Returns the dimensions (width and height) and color mode of the input image and - also sets the global img_mode variable to be used by the realesrgan function - """ - global img_mode - if img: - if has_transparency(img): - img_mode = "RGBA" - else: - img_mode = "RGB" - properties = f"Width: {img.size[0]}, Height: {img.size[1]} | Color Mode: {img_mode}" - return properties - - -def main(): - # Gradio Interface - with gr.Blocks(title="Upscaling Service", theme="dark") as demo: - - gr.Markdown( - """This Space is a fork of "Real-ESRGAN-Demo", so if you want to use it please refer to [havas79/Real-ESRGAN_Demo](https://huggingface.co/spaces/havas79/Real-ESRGAN_Demo), thank you!""" - ) - secret_token = gr.Text( - label='Secret Token', - max_lines=1, - placeholder='Enter your secret token', - ) - with gr.Accordion("Options/Parameters"): - with gr.Row(): - model_name = gr.Dropdown(label="Real-ESRGAN inference model to be used", - choices=["RealESRGAN_x4plus", "RealESRNet_x4plus", "RealESRGAN_x4plus_anime_6B", - "RealESRGAN_x2plus", "realesr-general-x4v3"], - value="realesr-general-x4v3", show_label=True) - denoise_strength = gr.Slider(label="Denoise Strength (Used only with the realesr-general-x4v3 model)", - minimum=0, maximum=1, step=0.1, value=0.5) - outscale = gr.Slider(label="Image Upscaling Factor", - minimum=1, maximum=10, step=1, value=4, show_label=True) - face_enhance = gr.Checkbox(label="Face Enhancement using GFPGAN (Doesn't work for anime images)", - value=False, show_label=True) - - with gr.Row(): - with gr.Group(): - input_image = gr.Image(label="Source Image", type="pil", image_mode="RGBA") - input_image_properties = gr.Textbox(label="Image Properties", max_lines=1) - output_image = gr.Image(label="Restored Image", 
image_mode="RGBA") - with gr.Row(): - restore_btn = gr.Button("Upscale") - - # Event listeners: - input_image.change(fn=image_properties, inputs=input_image, outputs=input_image_properties) - restore_btn.click(fn=realesrgan, - inputs=[secret_token, input_image, model_name, denoise_strength, face_enhance, outscale], - outputs=output_image, - api_name="upscale") - - gr.Markdown( - """*Please note that support for animated GIFs is not yet implemented. Should an animated GIF is chosen for restoration, - the demo will output only the first frame saved in PNG format (to preserve probable transparency).* - """ - ) - - demo.launch() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/jbitel/dalle/index.html b/spaces/jbitel/dalle/index.html deleted file mode 100644 index 34e195b924d641d56e48ca7f05870c79ba68ca66..0000000000000000000000000000000000000000 --- a/spaces/jbitel/dalle/index.html +++ /dev/null @@ -1,54 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -
- - \ No newline at end of file diff --git a/spaces/jdposa/medical_ner_spanish/README.md b/spaces/jdposa/medical_ner_spanish/README.md deleted file mode 100644 index e10b8f489a1c92a0f8d1c9d724c0a572493c6b2f..0000000000000000000000000000000000000000 --- a/spaces/jdposa/medical_ner_spanish/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Medical_ner_spanish -emoji: ⚡ -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/jengiskhann/FahsaiChatbot-03/README.md b/spaces/jengiskhann/FahsaiChatbot-03/README.md deleted file mode 100644 index f2c60fc5219455c8e7885e0cf2feff7fd2fa1abb..0000000000000000000000000000000000000000 --- a/spaces/jengiskhann/FahsaiChatbot-03/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FahsaiChatbot 02 -emoji: 💻 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -duplicated_from: jengiskhann/FahsaiChatbot-02 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jiaxianustc/mbp/UltraFlow/layers/interact.py b/spaces/jiaxianustc/mbp/UltraFlow/layers/interact.py deleted file mode 100644 index 49c0e2fa2b1be389937fb662eec1c9a849816273..0000000000000000000000000000000000000000 --- a/spaces/jiaxianustc/mbp/UltraFlow/layers/interact.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import dgl.function as fn -from dgl.nn.pytorch import edge_softmax - -class intra_message(nn.Module): - def __init__(self,node_feat_size, graph_feat_size, dropout): - super(intra_message, self).__init__() - - self.project_edge = nn.Sequential( - nn.Dropout(dropout), - nn.Linear(2 * node_feat_size, 1), - nn.LeakyReLU() - ) - self.project_node = nn.Sequential( - nn.Dropout(dropout), - nn.Linear(node_feat_size, graph_feat_size), - nn.LeakyReLU() - ) - - self.bn_layer = nn.BatchNorm1d(graph_feat_size) - - def apply_edges(self, edges): - return {'he': torch.cat([edges.dst['hv'], edges.src['hv']], dim=1)} - - def forward(self,g, node_feats): - g = g.local_var() - g.ndata['hv'] = node_feats - g.apply_edges(self.apply_edges) - logits = self.project_edge(g.edata['he']) - g.edata['a'] = edge_softmax(g, logits) - g.ndata['hv'] = self.project_node(node_feats) - g.update_all(fn.src_mul_edge('hv', 'a', 'm'), fn.sum('m', 'c')) - - return F.elu(g.ndata['c']) - -class inter_message(nn.Module): - def __init__(self,in_dim, out_dim, dropout): - super(inter_message, self).__init__() - self.project_edges = nn.Sequential( - nn.Dropout(dropout), - nn.Linear(in_dim, out_dim), - nn.LeakyReLU() - ) - def apply_edges(self, edges): - return {'m': self.project_edges(torch.cat([edges.data['e'],edges.src['h'], edges.dst['h']], dim=1))} - - def forward(self,g, node_feats): - g = g.local_var() - g.ndata['h'] = node_feats - g.update_all(self.apply_edges, fn.mean('m','c')) - return F.elu(g.ndata['c']) - -class update_node_feats(nn.Module): - def __init__(self,in_dim, out_dim, dropout): - super(update_node_feats, self).__init__() - self.gru = nn.GRUCell(out_dim, out_dim) - self.project_node = nn.Sequential( - nn.Dropout(dropout), - nn.Linear(in_dim, out_dim), - nn.LeakyReLU() - ) - self.bn_layer = nn.BatchNorm1d(out_dim) - - def forward(self, g, node_feats, intra_m, inter_m): - g = g.local_var() - return self.bn_layer(F.relu(self.gru(self.project_node(torch.cat([node_feats, 
intra_m, inter_m], dim=1)),node_feats))) - - diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_SIV.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_SIV.py deleted file mode 100644 index a80ddc1e2ced2de4cc8ec4a26801135daf8c7614..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_SIV.py +++ /dev/null @@ -1,552 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2015, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -import json -import unittest -from binascii import unhexlify - -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.SelfTest.loader import load_test_vectors_wycheproof - -from Crypto.Util.py3compat import tobytes, bchr -from Crypto.Cipher import AES -from Crypto.Hash import SHAKE128 - -from Crypto.Util.strxor import strxor - - -def get_tag_random(tag, length): - return SHAKE128.new(data=tobytes(tag)).read(length) - - -class SivTests(unittest.TestCase): - - key_256 = get_tag_random("key_256", 32) - key_384 = get_tag_random("key_384", 48) - key_512 = get_tag_random("key_512", 64) - nonce_96 = get_tag_random("nonce_128", 12) - data = get_tag_random("data", 128) - - def test_loopback_128(self): - for key in self.key_256, self.key_384, self.key_512: - cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) - pt = get_tag_random("plaintext", 16 * 100) - ct, mac = cipher.encrypt_and_digest(pt) - - cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) - pt2 = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(pt, pt2) - - def test_nonce(self): - # Deterministic encryption - AES.new(self.key_256, AES.MODE_SIV) - - cipher = AES.new(self.key_256, AES.MODE_SIV, self.nonce_96) - ct1, tag1 = cipher.encrypt_and_digest(self.data) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct2, tag2 = cipher.encrypt_and_digest(self.data) - self.assertEqual(ct1 + tag1, ct2 + tag2) - - def test_nonce_must_be_bytes(self): - self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, - nonce=u'test12345678') - - def test_nonce_length(self): - # nonce can be of any length (but not empty) - self.assertRaises(ValueError, AES.new, self.key_256, AES.MODE_SIV, - nonce=b"") - - for x in range(1, 128): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=bchr(1) * x) - cipher.encrypt_and_digest(b'\x01') - - def test_block_size_128(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertEqual(cipher.block_size, AES.block_size) - - def test_nonce_attribute(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertEqual(cipher.nonce, self.nonce_96) - - # By default, no nonce is randomly generated - self.assertFalse(hasattr(AES.new(self.key_256, AES.MODE_SIV), "nonce")) - - def test_unknown_parameters(self): - self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, - self.nonce_96, 7) - self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, - nonce=self.nonce_96, unknown=7) - - # But some are only known by the base cipher - # (e.g. 
use_aesni consumed by the AES module) - AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96, - use_aesni=False) - - def test_encrypt_excludes_decrypt(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.encrypt_and_digest(self.data) - self.assertRaises(TypeError, cipher.decrypt, self.data) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.encrypt_and_digest(self.data) - self.assertRaises(TypeError, cipher.decrypt_and_verify, - self.data, self.data) - - def test_data_must_be_bytes(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt_and_verify, - u'test1234567890-*', b"xxxx") - - def test_mac_len(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - _, mac = cipher.encrypt_and_digest(self.data) - self.assertEqual(len(mac), 16) - - def test_invalid_mac(self): - from Crypto.Util.strxor import strxor_c - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, mac = cipher.encrypt_and_digest(self.data) - - invalid_mac = strxor_c(mac, 0x01) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, - invalid_mac) - - def test_hex_mac(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - mac_hex = cipher.hexdigest() - self.assertEqual(cipher.digest(), unhexlify(mac_hex)) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.hexverify(mac_hex) - - def test_bytearray(self): - - # Encrypt - key = bytearray(self.key_256) - nonce = bytearray(self.nonce_96) - data = bytearray(self.data) - header = bytearray(self.data) - - cipher1 = AES.new(self.key_256, - AES.MODE_SIV, - nonce=self.nonce_96) - cipher1.update(self.data) - ct, tag = cipher1.encrypt_and_digest(self.data) - - cipher2 = AES.new(key, - AES.MODE_SIV, - nonce=nonce) - key[:3] = b'\xFF\xFF\xFF' - nonce[:3] = b'\xFF\xFF\xFF' - cipher2.update(header) - header[:3] = b'\xFF\xFF\xFF' - ct_test, tag_test = cipher2.encrypt_and_digest(data) - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key = bytearray(self.key_256) - nonce = bytearray(self.nonce_96) - header = bytearray(self.data) - ct_ba = bytearray(ct) - tag_ba = bytearray(tag) - - cipher3 = AES.new(key, - AES.MODE_SIV, - nonce=nonce) - key[:3] = b'\xFF\xFF\xFF' - nonce[:3] = b'\xFF\xFF\xFF' - cipher3.update(header) - header[:3] = b'\xFF\xFF\xFF' - pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) - - self.assertEqual(self.data, pt_test) - - def test_memoryview(self): - - # Encrypt - key = memoryview(bytearray(self.key_256)) - nonce = memoryview(bytearray(self.nonce_96)) - data = memoryview(bytearray(self.data)) - header = memoryview(bytearray(self.data)) - - cipher1 = AES.new(self.key_256, - AES.MODE_SIV, - nonce=self.nonce_96) - cipher1.update(self.data) - ct, tag = cipher1.encrypt_and_digest(self.data) - - cipher2 = AES.new(key, - AES.MODE_SIV, - nonce=nonce) - key[:3] = b'\xFF\xFF\xFF' - nonce[:3] = b'\xFF\xFF\xFF' - cipher2.update(header) - header[:3] = b'\xFF\xFF\xFF' - ct_test, tag_test= cipher2.encrypt_and_digest(data) - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key = 
memoryview(bytearray(self.key_256)) - nonce = memoryview(bytearray(self.nonce_96)) - header = memoryview(bytearray(self.data)) - ct_ba = memoryview(bytearray(ct)) - tag_ba = memoryview(bytearray(tag)) - - cipher3 = AES.new(key, - AES.MODE_SIV, - nonce=nonce) - key[:3] = b'\xFF\xFF\xFF' - nonce[:3] = b'\xFF\xFF\xFF' - cipher3.update(header) - header[:3] = b'\xFF\xFF\xFF' - pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) - - self.assertEqual(self.data, pt_test) - - def test_output_param(self): - - pt = b'5' * 128 - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, tag = cipher.encrypt_and_digest(pt) - - output = bytearray(128) - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - res, tag_out = cipher.encrypt_and_digest(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - self.assertEqual(tag, tag_out) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - res = cipher.decrypt_and_verify(ct, tag, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - def test_output_param_memoryview(self): - - pt = b'5' * 128 - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, tag = cipher.encrypt_and_digest(pt) - - output = memoryview(bytearray(128)) - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.encrypt_and_digest(pt, output=output) - self.assertEqual(ct, output) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.decrypt_and_verify(ct, tag, output=output) - self.assertEqual(pt, output) - - def test_output_param_neg(self): - LEN_PT = 128 - - pt = b'5' * LEN_PT - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, tag = cipher.encrypt_and_digest(pt) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt_and_digest, pt, output=b'0' * LEN_PT) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag, output=b'0' * LEN_PT) - - shorter_output = bytearray(LEN_PT - 1) - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.encrypt_and_digest, pt, output=shorter_output) - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, tag, output=shorter_output) - - -class SivFSMTests(unittest.TestCase): - - key_256 = get_tag_random("key_256", 32) - nonce_96 = get_tag_random("nonce_96", 12) - data = get_tag_random("data", 128) - - def test_invalid_init_encrypt(self): - # Path INIT->ENCRYPT fails - cipher = AES.new(self.key_256, AES.MODE_SIV, - nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, b"xxx") - - def test_invalid_init_decrypt(self): - # Path INIT->DECRYPT fails - cipher = AES.new(self.key_256, AES.MODE_SIV, - nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt, b"xxx") - - def test_valid_init_update_digest_verify(self): - # No plaintext, fixed authenticated data - # Verify path INIT->UPDATE->DIGEST - cipher = AES.new(self.key_256, AES.MODE_SIV, - nonce=self.nonce_96) - cipher.update(self.data) - mac = cipher.digest() - - # Verify path INIT->UPDATE->VERIFY - cipher = AES.new(self.key_256, AES.MODE_SIV, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.verify(mac) - - def test_valid_init_digest(self): - # Verify path INIT->DIGEST - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - 
cipher.digest() - - def test_valid_init_verify(self): - # Verify path INIT->VERIFY - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - mac = cipher.digest() - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.verify(mac) - - def test_valid_multiple_digest_or_verify(self): - # Multiple calls to digest - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.update(self.data) - first_mac = cipher.digest() - for x in range(4): - self.assertEqual(first_mac, cipher.digest()) - - # Multiple calls to verify - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.update(self.data) - for x in range(5): - cipher.verify(first_mac) - - def test_valid_encrypt_and_digest_decrypt_and_verify(self): - # encrypt_and_digest - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.update(self.data) - ct, mac = cipher.encrypt_and_digest(self.data) - - # decrypt_and_verify - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.update(self.data) - pt = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(self.data, pt) - - def test_invalid_multiple_encrypt_and_digest(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, tag = cipher.encrypt_and_digest(self.data) - self.assertRaises(TypeError, cipher.encrypt_and_digest, b'') - - def test_invalid_multiple_decrypt_and_verify(self): - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - ct, tag = cipher.encrypt_and_digest(self.data) - - cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) - cipher.decrypt_and_verify(ct, tag) - self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag) - - -def transform(tv): - new_tv = [[unhexlify(x) for x in tv[0].split("-")]] - new_tv += [ unhexlify(x) for x in tv[1:5]] - if tv[5]: - nonce = unhexlify(tv[5]) - else: - nonce = None - new_tv += [ nonce ] - return new_tv - - -class TestVectors(unittest.TestCase): - """Class exercising the SIV test vectors found in RFC5297""" - - # This is a list of tuples with 5 items: - # - # 1. Header + '|' + plaintext - # 2. Header + '|' + ciphertext + '|' + MAC - # 3. AES-128 key - # 4. Description - # 5. Dictionary of parameters to be passed to AES.new(). - # It must include the nonce. - # - # A "Header" is a dash ('-') separated sequece of components. 
- # - test_vectors_hex = [ - ( - '101112131415161718191a1b1c1d1e1f2021222324252627', - '112233445566778899aabbccddee', - '40c02b9690c4dc04daef7f6afe5c', - '85632d07c6e8f37f950acd320a2ecc93', - 'fffefdfcfbfaf9f8f7f6f5f4f3f2f1f0f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff', - None - ), - ( - '00112233445566778899aabbccddeeffdeaddadadeaddadaffeeddccbbaa9988' + - '7766554433221100-102030405060708090a0', - '7468697320697320736f6d6520706c61696e7465787420746f20656e63727970' + - '74207573696e67205349562d414553', - 'cb900f2fddbe404326601965c889bf17dba77ceb094fa663b7a3f748ba8af829' + - 'ea64ad544a272e9c485b62a3fd5c0d', - '7bdb6e3b432667eb06f4d14bff2fbd0f', - '7f7e7d7c7b7a79787776757473727170404142434445464748494a4b4c4d4e4f', - '09f911029d74e35bd84156c5635688c0' - ), - ] - - test_vectors = [ transform(tv) for tv in test_vectors_hex ] - - def runTest(self): - for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: - - # Encrypt - cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) - for x in assoc_data: - cipher.update(x) - ct2, mac2 = cipher.encrypt_and_digest(pt) - self.assertEqual(ct, ct2) - self.assertEqual(mac, mac2) - - # Decrypt - cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) - for x in assoc_data: - cipher.update(x) - pt2 = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(pt, pt2) - - -class TestVectorsWycheproof(unittest.TestCase): - - def __init__(self): - unittest.TestCase.__init__(self) - self._id = "None" - - def setUp(self): - self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), - "aes_siv_cmac_test.json", - "Wycheproof AES SIV") - - def shortDescription(self): - return self._id - - def test_encrypt(self, tv): - self._id = "Wycheproof Encrypt AES-SIV Test #" + str(tv.id) - - cipher = AES.new(tv.key, AES.MODE_SIV) - cipher.update(tv.aad) - ct, tag = cipher.encrypt_and_digest(tv.msg) - if tv.valid: - self.assertEqual(tag + ct, tv.ct) - - def test_decrypt(self, tv): - self._id = "Wycheproof Decrypt AES_SIV Test #" + str(tv.id) - - cipher = AES.new(tv.key, AES.MODE_SIV) - cipher.update(tv.aad) - try: - pt = cipher.decrypt_and_verify(tv.ct[16:], tv.ct[:16]) - except ValueError: - assert not tv.valid - else: - assert tv.valid - self.assertEqual(pt, tv.msg) - - def runTest(self): - - for tv in self.tv: - self.test_encrypt(tv) - self.test_decrypt(tv) - - -class TestVectorsWycheproof2(unittest.TestCase): - - def __init__(self): - unittest.TestCase.__init__(self) - self._id = "None" - - def setUp(self): - self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), - "aead_aes_siv_cmac_test.json", - "Wycheproof AEAD SIV") - - def shortDescription(self): - return self._id - - def test_encrypt(self, tv): - self._id = "Wycheproof Encrypt AEAD-AES-SIV Test #" + str(tv.id) - - cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) - cipher.update(tv.aad) - ct, tag = cipher.encrypt_and_digest(tv.msg) - if tv.valid: - self.assertEqual(ct, tv.ct) - self.assertEqual(tag, tv.tag) - - def test_decrypt(self, tv): - self._id = "Wycheproof Decrypt AEAD-AES-SIV Test #" + str(tv.id) - - cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) - cipher.update(tv.aad) - try: - pt = cipher.decrypt_and_verify(tv.ct, tv.tag) - except ValueError: - assert not tv.valid - else: - assert tv.valid - self.assertEqual(pt, tv.msg) - - def runTest(self): - - for tv in self.tv: - self.test_encrypt(tv) - self.test_decrypt(tv) - - -def get_tests(config={}): - wycheproof_warnings = config.get('wycheproof_warnings') - - tests = [] - tests += list_test_cases(SivTests) - tests += list_test_cases(SivFSMTests) - tests += [ 
TestVectors() ] - tests += [ TestVectorsWycheproof() ] - tests += [ TestVectorsWycheproof2() ] - return tests - - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/locks.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/locks.py deleted file mode 100644 index de2dc83d09dd950fc1ed8d7edaeb20e7697c94ba..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/locks.py +++ /dev/null @@ -1,41 +0,0 @@ -import asyncio -import collections -from typing import Any, Deque, Optional - - -class EventResultOrError: - """Event asyncio lock helper class. - - Wraps the Event asyncio lock allowing either to awake the - locked Tasks without any error or raising an exception. - - thanks to @vorpalsmith for the simple design. - """ - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - self._loop = loop - self._exc: Optional[BaseException] = None - self._event = asyncio.Event() - self._waiters: Deque[asyncio.Future[Any]] = collections.deque() - - def set(self, exc: Optional[BaseException] = None) -> None: - self._exc = exc - self._event.set() - - async def wait(self) -> Any: - waiter = self._loop.create_task(self._event.wait()) - self._waiters.append(waiter) - try: - val = await waiter - finally: - self._waiters.remove(waiter) - - if self._exc is not None: - raise self._exc - - return val - - def cancel(self) -> None: - """Cancel all waiters""" - for waiter in self._waiters: - waiter.cancel() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/tree/base.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/tree/base.py deleted file mode 100644 index d60c794698327f49b2224eb8de985a9e8fc445d6..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/tree/base.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Tree-based index.""" - -from typing import Any, Dict, Optional, Sequence, Type - -from gpt_index.data_structs.data_structs import IndexGraph -from gpt_index.indices.base import DOCUMENTS_INPUT, BaseGPTIndex -from gpt_index.indices.common.tree.base import GPTTreeIndexBuilder -from gpt_index.indices.query.base import BaseGPTIndexQuery -from gpt_index.indices.query.schema import QueryMode -from gpt_index.indices.query.tree.embedding_query import GPTTreeIndexEmbeddingQuery -from gpt_index.indices.query.tree.leaf_query import GPTTreeIndexLeafQuery -from gpt_index.indices.query.tree.retrieve_query import GPTTreeIndexRetQuery -from gpt_index.indices.query.tree.summarize_query import GPTTreeIndexSummarizeQuery -from gpt_index.indices.tree.inserter import GPTIndexInserter -from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor -from gpt_index.langchain_helpers.text_splitter import TextSplitter -from gpt_index.prompts.default_prompts import ( - DEFAULT_INSERT_PROMPT, - DEFAULT_SUMMARY_PROMPT, -) -from gpt_index.prompts.prompts import SummaryPrompt, TreeInsertPrompt -from gpt_index.schema import BaseDocument - -REQUIRE_TREE_MODES = { - QueryMode.DEFAULT, - QueryMode.EMBEDDING, - QueryMode.RETRIEVE, -} - - -class GPTTreeIndex(BaseGPTIndex[IndexGraph]): - """GPT Tree Index. - - The tree index is a tree-structured index, where each node is a summary of - the children nodes. 
During index construction, the tree is constructed - in a bottoms-up fashion until we end up with a set of root_nodes. - - There are a few different options during query time (see :ref:`Ref-Query`). - The main option is to traverse down the tree from the root nodes. - A secondary answer is to directly synthesize the answer from the root nodes. - - Args: - summary_template (Optional[SummaryPrompt]): A Summarization Prompt - (see :ref:`Prompt-Templates`). - insert_prompt (Optional[TreeInsertPrompt]): An Tree Insertion Prompt - (see :ref:`Prompt-Templates`). - num_children (int): The number of children each node should have. - build_tree (bool): Whether to build the tree during index construction. - - """ - - index_struct_cls = IndexGraph - - def __init__( - self, - documents: Optional[Sequence[DOCUMENTS_INPUT]] = None, - index_struct: Optional[IndexGraph] = None, - summary_template: Optional[SummaryPrompt] = None, - insert_prompt: Optional[TreeInsertPrompt] = None, - num_children: int = 10, - llm_predictor: Optional[LLMPredictor] = None, - text_splitter: Optional[TextSplitter] = None, - build_tree: bool = True, - use_async: bool = False, - **kwargs: Any, - ) -> None: - """Initialize params.""" - # need to set parameters before building index in base class. - self.num_children = num_children - self.summary_template = summary_template or DEFAULT_SUMMARY_PROMPT - self.insert_prompt: TreeInsertPrompt = insert_prompt or DEFAULT_INSERT_PROMPT - self.build_tree = build_tree - self._use_async = use_async - super().__init__( - documents=documents, - index_struct=index_struct, - llm_predictor=llm_predictor, - text_splitter=text_splitter, - **kwargs, - ) - - @classmethod - def get_query_map(self) -> Dict[str, Type[BaseGPTIndexQuery]]: - """Get query map.""" - return { - QueryMode.DEFAULT: GPTTreeIndexLeafQuery, - QueryMode.EMBEDDING: GPTTreeIndexEmbeddingQuery, - QueryMode.RETRIEVE: GPTTreeIndexRetQuery, - QueryMode.SUMMARIZE: GPTTreeIndexSummarizeQuery, - } - - def _build_fallback_text_splitter(self) -> TextSplitter: - # if not specified, use "smart" text splitter to ensure chunks fit in prompt - return self._prompt_helper.get_text_splitter_given_prompt( - self.summary_template, self.num_children - ) - - def _validate_build_tree_required(self, mode: QueryMode) -> None: - """Check if index supports modes that require trees.""" - if mode in REQUIRE_TREE_MODES and not self.build_tree: - raise ValueError( - "Index was constructed without building trees, " - f"but mode {mode} requires trees." 
- ) - - def _preprocess_query(self, mode: QueryMode, query_kwargs: Any) -> None: - """Query mode to class.""" - super()._preprocess_query(mode, query_kwargs) - self._validate_build_tree_required(mode) - - def _build_index_from_documents( - self, documents: Sequence[BaseDocument] - ) -> IndexGraph: - """Build the index from documents.""" - # do simple concatenation - index_builder = GPTTreeIndexBuilder( - self.num_children, - self.summary_template, - self._llm_predictor, - self._prompt_helper, - self._text_splitter, - use_async=self._use_async, - ) - index_graph = index_builder.build_from_text( - documents, build_tree=self.build_tree - ) - return index_graph - - def _insert(self, document: BaseDocument, **insert_kwargs: Any) -> None: - """Insert a document.""" - # TODO: allow to customize insert prompt - inserter = GPTIndexInserter( - self.index_struct, - num_children=self.num_children, - insert_prompt=self.insert_prompt, - summary_prompt=self.summary_template, - llm_predictor=self._llm_predictor, - prompt_helper=self._prompt_helper, - text_splitter=self._text_splitter, - ) - inserter.insert(document) - - def _delete(self, doc_id: str, **delete_kwargs: Any) -> None: - """Delete a document.""" - raise NotImplementedError("Delete not implemented for tree index.") diff --git a/spaces/jone/Music_Source_Separation/README.md b/spaces/jone/Music_Source_Separation/README.md deleted file mode 100644 index e8112f530f97d392653d9ef7a70aa319c9ab98ac..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Music_Source_Separation -emoji: ⚡ -colorFrom: green -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/jyseo/3DFuse/run_nerf.py b/spaces/jyseo/3DFuse/run_nerf.py deleted file mode 100644 index a66ed3c600ff43614c8dab4127e28f928a580dc8..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/run_nerf.py +++ /dev/null @@ -1,62 +0,0 @@ -from typing import List -from pydantic import validator - -from my.config import BaseConf, SingleOrList, dispatch -from my.utils.seed import seed_everything - -import numpy as np -from voxnerf.vox import VOXRF_REGISTRY -from voxnerf.pipelines import train - - -class VoxConfig(BaseConf): - model_type: str = "VoxRF" - bbox_len: float = 1.5 - grid_size: SingleOrList(int) = [128, 128, 128] - step_ratio: float = 0.5 - density_shift: float = -10. 
- ray_march_weight_thres: float = 0.0001 - c: int = 3 - blend_bg_texture: bool = False - bg_texture_hw: int = 64 - - @validator("grid_size") - def check_gsize(cls, grid_size): - if isinstance(grid_size, int): - return [grid_size, ] * 3 - else: - assert len(grid_size) == 3 - return grid_size - - def make(self): - params = self.dict() - m_type = params.pop("model_type") - model_fn = VOXRF_REGISTRY.get(m_type) - - radius = params.pop('bbox_len') - aabb = radius * np.array([ - [-1, -1, -1], - [1, 1, 1] - ]) - model = model_fn(aabb=aabb, **params) - return model - - -class TrainerConfig(BaseConf): - model: VoxConfig = VoxConfig() - scene: str = "lego" - n_epoch: int = 2 - bs: int = 4096 - lr: float = 0.02 - - def run(self): - args = self.dict() - args.pop("model") - - model = self.model.make() - train(model, **args) - - -if __name__ == "__main__": - seed_everything(0) - dispatch(TrainerConfig) diff --git a/spaces/k2s0/ask-theologian/app.py b/spaces/k2s0/ask-theologian/app.py deleted file mode 100644 index 66ad942988b3c6e7239cf412b1c379d797181c47..0000000000000000000000000000000000000000 --- a/spaces/k2s0/ask-theologian/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import json -import openai -import gradio as gr -import firebase_admin -from firebase_admin import credentials, firestore - -# Get the service account key from the environment variable -service_account_key = os.environ["firebasekey"] - -# Parse the service account key into a dictionary -service_account_info = json.loads(service_account_key) - -# Create a Certificate object from the service account info -cred = credentials.Certificate(service_account_info) - -# Initialize the Firebase Admin SDK -firebase_admin.initialize_app(cred) - -# # Create a reference to the Firestore database -db = firestore.client() - -openai.api_key = os.environ.get("openai_api_key") - -def store_message(user_input, completion): - new_completion = db.collection('askTheologianCompletions').document() - new_completion.set({ - 'user_input': user_input, - 'completion': completion, - 'created_time': firestore.SERVER_TIMESTAMP, - 'model': 'text-davinci-003', - 'temperature': 0.7, - 'title': 'Ask Theologian' - }) - - -def greet(input): - myInput = input - myPrompt = f"Religious Tutor: I am a theology professor and religion studies tutor \n You: What is the religious text of the christians? Religious Tutor: The religious text of Christians is the Bible. The Bible is a collection of sacred texts or scriptures that is widely considered to be the word of God by Christians. It is divided into two main parts: the Old Testament, which contains the texts of the Hebrew Bible, and the New Testament, which contains the texts of the Christian faith. \n The Old Testament is made up of 39 books and is considered to be the foundational text of Judaism. It contains the stories, laws, and teachings of the ancient Israelites, as well as prophecies about the coming of the Messiah. \n The New Testament is made up of 27 books and is considered to be the primary source of teachings about Jesus Christ and the Christian faith. It includes the four Gospels (Matthew, Mark, Luke, and John), which contain the accounts of Jesus' life, teachings, and miracles, as well as the Acts of the Apostles, which describes the early spread of Christianity, and the letters (Epistles) written by the apostle Paul and other early Christian leaders. \n\n The Bible is central to the beliefs and practices of Christians and is considered to be the ultimate authority on matters of faith and morals. 
It is widely read and studied by believers, and is used as a guide for personal and spiritual growth.. \n\n You: {myInput} cite specific text references in all answers" - response = openai.Completion.create( - model="text-davinci-003", - prompt=myPrompt, - temperature=0.7, - max_tokens=3000, - top_p=1.0, - frequency_penalty=0.0, - presence_penalty=0.0 - ) - raw_response = response['choices'][0]['text'] - split_response = raw_response.split('r:') - trimmed_response = split_response[1] - print(trimmed_response) - store_message(myInput, trimmed_response) - return trimmed_response - -demo = gr.Interface(fn=greet, inputs="text", outputs="text") - -demo.launch() \ No newline at end of file diff --git a/spaces/katielink/biogpt-qa-demo/README.md b/spaces/katielink/biogpt-qa-demo/README.md deleted file mode 100644 index bdcd48c973dbd8b53829744486e89fbd6ceef5fc..0000000000000000000000000000000000000000 --- a/spaces/katielink/biogpt-qa-demo/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: BioGPT Q&A Demo -emoji: 🤔 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit -models: - - microsoft/biogpt-large ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kazuk/youtube-whisper-18/README.md b/spaces/kazuk/youtube-whisper-18/README.md deleted file mode 100644 index 542684a6910794b194854f8b84bf8436b4bbda1a..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-18/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper-15 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-io/NeRF/app.py b/spaces/keras-io/NeRF/app.py deleted file mode 100644 index 44949016589ea3717aa50ffbcb93601c6be94d8d..0000000000000000000000000000000000000000 --- a/spaces/keras-io/NeRF/app.py +++ /dev/null @@ -1,279 +0,0 @@ -import streamlit as st -import tensorflow as tf -import numpy as np - -# Setting random seed to obtain reproducible results. -tf.random.set_seed(42) - -# Initialize global variables. -AUTO = tf.data.AUTOTUNE -BATCH_SIZE = 1 -NUM_SAMPLES = 32 -POS_ENCODE_DIMS = 16 -EPOCHS = 20 -H = 100 -W = 100 -focal = 138.88 - -def encode_position(x): - """Encodes the position into its corresponding Fourier feature. - - Args: - x: The input coordinate. - - Returns: - Fourier features tensors of the position. - """ - positions = [x] - for i in range(POS_ENCODE_DIMS): - for fn in [tf.sin, tf.cos]: - positions.append(fn(2.0 ** i * x)) - return tf.concat(positions, axis=-1) - - -def get_rays(height, width, focal, pose): - """Computes origin point and direction vector of rays. - - Args: - height: Height of the image. - width: Width of the image. - focal: The focal length between the images and the camera. - pose: The pose matrix of the camera. - - Returns: - Tuple of origin point and direction vector for rays. - """ - # Build a meshgrid for the rays. - i, j = tf.meshgrid( - tf.range(width, dtype=tf.float32), - tf.range(height, dtype=tf.float32), - indexing="xy", - ) - - # Normalize the x axis coordinates. - transformed_i = (i - width * 0.5) / focal - - # Normalize the y axis coordinates. - transformed_j = (j - height * 0.5) / focal - - # Create the direction unit vectors. 
- directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1) - - # Get the camera matrix. - camera_matrix = pose[:3, :3] - height_width_focal = pose[:3, -1] - - # Get origins and directions for the rays. - transformed_dirs = directions[..., None, :] - camera_dirs = transformed_dirs * camera_matrix - ray_directions = tf.reduce_sum(camera_dirs, axis=-1) - ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions)) - - # Return the origins and directions. - return (ray_origins, ray_directions) - - -def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False): - """Renders the rays and flattens it. - - Args: - ray_origins: The origin points for rays. - ray_directions: The direction unit vectors for the rays. - near: The near bound of the volumetric scene. - far: The far bound of the volumetric scene. - num_samples: Number of sample points in a ray. - rand: Choice for randomising the sampling strategy. - - Returns: - Tuple of flattened rays and sample points on each rays. - """ - # Compute 3D query points. - # Equation: r(t) = o+td -> Building the "t" here. - t_vals = tf.linspace(near, far, num_samples) - if rand: - # Inject uniform noise into sample space to make the sampling - # continuous. - shape = list(ray_origins.shape[:-1]) + [num_samples] - noise = tf.random.uniform(shape=shape) * (far - near) / num_samples - t_vals = t_vals + noise - - # Equation: r(t) = o + td -> Building the "r" here. - rays = ray_origins[..., None, :] + ( - ray_directions[..., None, :] * t_vals[..., None] - ) - rays_flat = tf.reshape(rays, [-1, 3]) - rays_flat = encode_position(rays_flat) - return (rays_flat, t_vals) - - -def map_fn(pose): - """Maps individual pose to flattened rays and sample points. - - Args: - pose: The pose matrix of the camera. - - Returns: - Tuple of flattened rays and sample points corresponding to the - camera pose. - """ - (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose) - (rays_flat, t_vals) = render_flat_rays( - ray_origins=ray_origins, - ray_directions=ray_directions, - near=2.0, - far=6.0, - num_samples=NUM_SAMPLES, - rand=True, - ) - return (rays_flat, t_vals) - - -def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): - """Generates the RGB image and depth map from model prediction. - - Args: - model: The MLP model that is trained to predict the rgb and - volume density of the volumetric scene. - rays_flat: The flattened rays that serve as the input to - the NeRF model. - t_vals: The sample points for the rays. - rand: Choice to randomise the sampling strategy. - train: Whether the model is in the training or testing phase. - - Returns: - Tuple of rgb image and depth map. - """ - # Get the predictions from the nerf model and reshape it. - if train: - predictions = model(rays_flat) - else: - predictions = model.predict(rays_flat) - predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) - - # Slice the predictions into rgb and sigma. - rgb = tf.sigmoid(predictions[..., :-1]) - sigma_a = tf.nn.relu(predictions[..., -1]) - - # Get the distance of adjacent intervals. 
- delta = t_vals[..., 1:] - t_vals[..., :-1] - # delta shape = (num_samples) - if rand: - delta = tf.concat( - [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 - ) - alpha = 1.0 - tf.exp(-sigma_a * delta) - else: - delta = tf.concat( - [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 - ) - alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) - - # Get transmittance. - exp_term = 1.0 - alpha - epsilon = 1e-10 - transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) - weights = alpha * transmittance - rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) - - if rand: - depth_map = tf.reduce_sum(weights * t_vals, axis=-1) - else: - depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) - return (rgb, depth_map) - - -def get_translation_t(t): - """Get the translation matrix for movement in t.""" - matrix = [ - [1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, t], - [0, 0, 0, 1], - ] - return tf.convert_to_tensor(matrix, dtype=tf.float32) - - -def get_rotation_phi(phi): - """Get the rotation matrix for movement in phi.""" - matrix = [ - [1, 0, 0, 0], - [0, tf.cos(phi), -tf.sin(phi), 0], - [0, tf.sin(phi), tf.cos(phi), 0], - [0, 0, 0, 1], - ] - return tf.convert_to_tensor(matrix, dtype=tf.float32) - - -def get_rotation_theta(theta): - """Get the rotation matrix for movement in theta.""" - matrix = [ - [tf.cos(theta), 0, -tf.sin(theta), 0], - [0, 1, 0, 0], - [tf.sin(theta), 0, tf.cos(theta), 0], - [0, 0, 0, 1], - ] - return tf.convert_to_tensor(matrix, dtype=tf.float32) - - -def pose_spherical(theta, phi, t): - """ - Get the camera to world matrix for the corresponding theta, phi - and t. - """ - c2w = get_translation_t(t) - c2w = get_rotation_phi(phi / 180.0 * np.pi) @ c2w - c2w = get_rotation_theta(theta / 180.0 * np.pi) @ c2w - c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w - return c2w - - -def show_rendered_image(r,theta,phi): - # Get the camera to world matrix. 
- c2w = pose_spherical(theta, phi, r) - - ray_oris, ray_dirs = get_rays(H, W, focal, c2w) - rays_flat, t_vals = render_flat_rays( - ray_oris, ray_dirs, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=False - ) - - rgb, depth = render_rgb_depth( - nerf_loaded, rays_flat[None, ...], t_vals[None, ...], rand=False, train=False - ) - return(rgb[0], depth[0]) - - -# app.py text matter starts here -st.title('NeRF:3D volumetric rendering with NeRF') -st.markdown("Authors: [Aritra Roy Gosthipathy](https://twitter.com/ariG23498) and [Ritwik Raha](https://twitter.com/ritwik_raha)") -st.markdown("## Description") -st.markdown("[NeRF](https://arxiv.org/abs/2003.08934) proposes an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network.") -st.markdown("## Interactive Demo") - -# load the pre-trained model -nerf_loaded = tf.keras.models.load_model("nerf", compile=False) - -# set the values of r theta phi -r = 4.0 -theta = st.slider("Enter a value for Θ:", min_value=0.0, max_value=360.0) -phi = -30.0 -color, depth = show_rendered_image(r, theta, phi) - -col1, col2= st.columns(2) - -with col1: - color = tf.keras.utils.array_to_img(color) - st.image(color, caption="Color Image", clamp=True, width=300) - -with col2: - depth = tf.keras.utils.array_to_img(depth[..., None]) - st.image(depth, caption="Depth Map", clamp=True, width=300) - -st.markdown("## Tutorials") -st.markdown("- [Keras](https://keras.io/examples/vision/nerf/)") -st.markdown("- [PyImageSearch NeRF 1](https://www.pyimagesearch.com/2021/11/10/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-1/)") -st.markdown("- [PyImageSearch NeRF 2](https://www.pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/)") -st.markdown("- [PyImageSearch NeRF 3](https://www.pyimagesearch.com/2021/11/24/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-3/)") - -st.markdown("## Credits") -st.markdown("- [PyImageSearch](https://www.pyimagesearch.com/)") -st.markdown("- [JarvisLabs.ai GPU credits](https://jarvislabs.ai/)") diff --git a/spaces/keras-io/timeseries-anomaly-detection-autoencoders/app.py b/spaces/keras-io/timeseries-anomaly-detection-autoencoders/app.py deleted file mode 100644 index 3e019a96be760a7f00ce13204e0620420bf9cb64..0000000000000000000000000000000000000000 --- a/spaces/keras-io/timeseries-anomaly-detection-autoencoders/app.py +++ /dev/null @@ -1,80 +0,0 @@ -import gradio as gr -from huggingface_hub import from_pretrained_keras -import pandas as pd -import numpy as np -import json -from matplotlib import pyplot as plt - -f = open('scaler.json') -scaler = json.load(f) - -TIME_STEPS = 288 - -# Generated training sequences for use in the model. -def create_sequences(values, time_steps=TIME_STEPS): - output = [] - for i in range(len(values) - time_steps + 1): - output.append(values[i : (i + time_steps)]) - return np.stack(output) - - -def normalize_data(data): - df_test_value = (data - scaler["mean"]) / scaler["std"] - return df_test_value - -def plot_test_data(df_test_value): - fig, ax = plt.subplots() - df_test_value.plot(legend=False, ax=ax) - return fig - -def get_anomalies(df_test_value): - # Create sequences from test values. - x_test = create_sequences(df_test_value.values) - model = from_pretrained_keras("keras-io/timeseries-anomaly-detection") - - # Get test MAE loss. 
- x_test_pred = model.predict(x_test) - test_mae_loss = np.mean(np.abs(x_test_pred - x_test), axis=1) - test_mae_loss = test_mae_loss.reshape((-1)) - - # Detect all the samples which are anomalies. - anomalies = test_mae_loss > scaler["threshold"] - return anomalies - -def plot_anomalies(df_test_value, data, anomalies): - # data i is an anomaly if samples [(i - timesteps + 1) to (i)] are anomalies - anomalous_data_indices = [] - for data_idx in range(TIME_STEPS - 1, len(df_test_value) - TIME_STEPS + 1): - if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx]): - anomalous_data_indices.append(data_idx) - df_subset = data.iloc[anomalous_data_indices] - fig, ax = plt.subplots() - data.plot(legend=False, ax=ax) - df_subset.plot(legend=False, ax=ax, color="r") - return fig - -def master(file): - # read file - data = pd.read_csv(file, parse_dates=True, index_col="timestamp") - df_test_value = normalize_data(data) - # plot input test data - plot1 = plot_test_data(df_test_value) - # predict - anomalies = get_anomalies(df_test_value) - #plot anomalous data points - plot2 = plot_anomalies(df_test_value, data, anomalies) - return plot2 - -outputs = gr.Plot() - - -iface = gr.Interface(master, -gr.inputs.File(label="csv file"), -outputs=outputs, -examples=["art_daily_jumpsup.csv"], title="Timeseries Anomaly Detection Using an Autoencoder", -description = "Anomaly detection of timeseries data.", - article = "Space by: Reme Ajayi
Keras Example by Pavithra Vijay") - - - -iface.launch() \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/__init__.py b/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/__init__.py deleted file mode 100644 index e0b17c8b44869c554931c723446c65d3903821a9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-UI-with-Voice-Cloning-2/bark/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .api import generate_audio, text_to_semantic, semantic_to_waveform, save_as_prompt -from .generation import SAMPLE_RATE, preload_models diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2exp_models/audio2exp.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2exp_models/audio2exp.py deleted file mode 100644 index 9e79a929560592687a505e13188796e2b0ca8772..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2exp_models/audio2exp.py +++ /dev/null @@ -1,41 +0,0 @@ -from tqdm import tqdm -import torch -from torch import nn - - -class Audio2Exp(nn.Module): - def __init__(self, netG, cfg, device, prepare_training_loss=False): - super(Audio2Exp, self).__init__() - self.cfg = cfg - self.device = device - self.netG = netG.to(device) - - def test(self, batch): - - mel_input = batch['indiv_mels'] # bs T 1 80 16 - bs = mel_input.shape[0] - T = mel_input.shape[1] - - exp_coeff_pred = [] - - for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames - - current_mel_input = mel_input[:,i:i+10] - - #ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64 - ref = batch['ref'][:, :, :64][:, i:i+10] - ratio = batch['ratio_gt'][:, i:i+10] #bs T - - audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16 - - curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64 - - exp_coeff_pred += [curr_exp_coeff_pred] - - # BS x T x 64 - results_dict = { - 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1) - } - return results_dict - - diff --git a/spaces/kevinwang676/M4Singer/modules/diffsinger_midi/fs2.py b/spaces/kevinwang676/M4Singer/modules/diffsinger_midi/fs2.py deleted file mode 100644 index 8ddf2aa42bfb6109cd41d149fa7a8059e7e186c1..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/M4Singer/modules/diffsinger_midi/fs2.py +++ /dev/null @@ -1,118 +0,0 @@ -from modules.commons.common_layers import * -from modules.commons.common_layers import Embedding -from modules.fastspeech.tts_modules import FastspeechDecoder, DurationPredictor, LengthRegulator, PitchPredictor, \ - EnergyPredictor, FastspeechEncoder -from utils.cwt import cwt2f0 -from utils.hparams import hparams -from utils.pitch_utils import f0_to_coarse, denorm_f0, norm_f0 -from modules.fastspeech.fs2 import FastSpeech2 - - -class FastspeechMIDIEncoder(FastspeechEncoder): - def forward_embedding(self, txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(txt_tokens) - x = x + midi_embedding + midi_dur_embedding + slur_embedding - if hparams['use_pos_embed']: - if hparams.get('rel_pos') is not None and hparams['rel_pos']: - x = self.embed_positions(x) - else: - positions = self.embed_positions(txt_tokens) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - return x - - def forward(self, txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [T x B x C] - } - """ - encoder_padding_mask = 
txt_tokens.eq(self.padding_idx).data - x = self.forward_embedding(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding) # [B, T, H] - x = super(FastspeechEncoder, self).forward(x, encoder_padding_mask) - return x - - -FS_ENCODERS = { - 'fft': lambda hp, embed_tokens, d: FastspeechMIDIEncoder( - embed_tokens, hp['hidden_size'], hp['enc_layers'], hp['enc_ffn_kernel_size'], - num_heads=hp['num_heads']), -} - - -class FastSpeech2MIDI(FastSpeech2): - def __init__(self, dictionary, out_dims=None): - super().__init__(dictionary, out_dims) - del self.encoder - self.encoder = FS_ENCODERS[hparams['encoder_type']](hparams, self.encoder_embed_tokens, self.dictionary) - self.midi_embed = Embedding(300, self.hidden_size, self.padding_idx) - self.midi_dur_layer = Linear(1, self.hidden_size) - self.is_slur_embed = Embedding(2, self.hidden_size) - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, - ref_mels=None, f0=None, uv=None, energy=None, skip_decoder=False, - spk_embed_dur_id=None, spk_embed_f0_id=None, infer=False, **kwargs): - ret = {} - - midi_embedding = self.midi_embed(kwargs['pitch_midi']) - midi_dur_embedding, slur_embedding = 0, 0 - if kwargs.get('midi_dur') is not None: - midi_dur_embedding = self.midi_dur_layer(kwargs['midi_dur'][:, :, None]) # [B, T, 1] -> [B, T, H] - if kwargs.get('is_slur') is not None: - slur_embedding = self.is_slur_embed(kwargs['is_slur']) - encoder_out = self.encoder(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding) # [B, T, C] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - - # add ref style embed - # Not implemented - # variance encoder - var_embed = 0 - - # encoder_out_dur denotes encoder outputs for duration predictor - # in speech adaptation, duration predictor use old speaker embedding - if hparams['use_spk_embed']: - spk_embed_dur = spk_embed_f0 = spk_embed = self.spk_embed_proj(spk_embed)[:, None, :] - elif hparams['use_spk_id']: - spk_embed_id = spk_embed - if spk_embed_dur_id is None: - spk_embed_dur_id = spk_embed_id - if spk_embed_f0_id is None: - spk_embed_f0_id = spk_embed_id - spk_embed = self.spk_embed_proj(spk_embed_id)[:, None, :] - spk_embed_dur = spk_embed_f0 = spk_embed - if hparams['use_split_spk_id']: - spk_embed_dur = self.spk_embed_dur(spk_embed_dur_id)[:, None, :] - spk_embed_f0 = self.spk_embed_f0(spk_embed_f0_id)[:, None, :] - else: - spk_embed_dur = spk_embed_f0 = spk_embed = 0 - - # add dur - dur_inp = (encoder_out + var_embed + spk_embed_dur) * src_nonpadding - - mel2ph = self.add_dur(dur_inp, mel2ph, txt_tokens, ret) - - decoder_inp = F.pad(encoder_out, [0, 0, 1, 0]) - - mel2ph_ = mel2ph[..., None].repeat([1, 1, encoder_out.shape[-1]]) - decoder_inp_origin = decoder_inp = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H] - - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - - # add pitch and energy embed - pitch_inp = (decoder_inp_origin + var_embed + spk_embed_f0) * tgt_nonpadding - if hparams['use_pitch_embed']: - pitch_inp_ph = (encoder_out + var_embed + spk_embed_f0) * src_nonpadding - decoder_inp = decoder_inp + self.add_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out=pitch_inp_ph) - if hparams['use_energy_embed']: - decoder_inp = decoder_inp + self.add_energy(pitch_inp, energy, ret) - - ret['decoder_inp'] = decoder_inp = (decoder_inp + spk_embed) * tgt_nonpadding - - if skip_decoder: - return ret - ret['mel_out'] = self.run_decoder(decoder_inp, tgt_nonpadding, ret, infer=infer, **kwargs) - - return ret diff --git a/spaces/kevinwang676/vits-fast-finetuning-pcr/text/korean.py 
b/spaces/kevinwang676/vits-fast-finetuning-pcr/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/vits-fast-finetuning-pcr/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - 
name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/README.md b/spaces/king007/Stable-Diffusion-ControlNet-WebUI/README.md deleted file mode 100644 index 7caf773b12ab1d5595a76af0628fc0255c646b1f..0000000000000000000000000000000000000000 --- a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Stable Diffusion ControlNet WebUI -emoji: 🚀 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.19 -app_file: app.py -pinned: true -license: openrail -tags: -- making-demos -duplicated_from: ArtGAN/Stable-Diffusion-ControlNet-WebUI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/king007/table_questions/app3.py b/spaces/king007/table_questions/app3.py deleted file mode 100644 index 16f43ebcbe700a94267a7ae307620bba0b2911b0..0000000000000000000000000000000000000000 --- a/spaces/king007/table_questions/app3.py +++ /dev/null @@ -1,16 +0,0 @@ -from transformers import pipeline -import pandas as pd - -# prepare table + question -data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} -table = pd.DataFrame.from_dict(data) -question = "how many movies does Leonardo Di Caprio have?" - -# pipeline model -# Note: you must to install torch-scatter first. 
-tqa = pipeline(task="table-question-answering", model="google/tapas-large-finetuned-wtq") - -# result - -print(tqa(table=table, query=question)['cells'][0]) -#53 diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/dataset_wrappers.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/dataset_wrappers.py deleted file mode 100644 index d6a5e957ec3b44465432617cf6e8f0b86a8a5efa..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/dataset_wrappers.py +++ /dev/null @@ -1,50 +0,0 @@ -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - """ - - def __init__(self, datasets): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = datasets[0].PALETTE - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = dataset.PALETTE - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item from original dataset.""" - return self.dataset[idx % self._ori_len] - - def __len__(self): - """The length is multiplied by ``times``""" - return self.times * self._ori_len diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/__init__.py deleted file mode 100644 index 44bb24ae614941f23fea29c56d60167650c39bcb..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
- -try: - from fairseq.version import __version__ # noqa -except ImportError: - pass diff --git a/spaces/kukuhtw/AutoGPT/autogpt/cli.py b/spaces/kukuhtw/AutoGPT/autogpt/cli.py deleted file mode 100644 index a2e99cb421cad005528cb160e948ce59ccfcdb66..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/cli.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Main script for the autogpt package.""" -import click - - -@click.group(invoke_without_command=True) -@click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode") -@click.option( - "--skip-reprompt", - "-y", - is_flag=True, - help="Skips the re-prompting messages at the beginning of the script", -) -@click.option( - "--ai-settings", - "-C", - help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.", -) -@click.option( - "-l", - "--continuous-limit", - type=int, - help="Defines the number of times to run in continuous mode", -) -@click.option("--speak", is_flag=True, help="Enable Speak Mode") -@click.option("--debug", is_flag=True, help="Enable Debug Mode") -@click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode") -@click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode") -@click.option( - "--use-memory", - "-m", - "memory_type", - type=str, - help="Defines which Memory backend to use", -) -@click.option( - "-b", - "--browser-name", - help="Specifies which web-browser to use when using selenium to scrape the web.", -) -@click.option( - "--allow-downloads", - is_flag=True, - help="Dangerous: Allows Auto-GPT to download files natively.", -) -@click.option( - "--skip-news", - is_flag=True, - help="Specifies whether to suppress the output of latest news on startup.", -) -@click.pass_context -def main( - ctx: click.Context, - continuous: bool, - continuous_limit: int, - ai_settings: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """ - Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI. - - Start an Auto-GPT assistant. 
- """ - # Put imports inside function to avoid importing everything when starting the CLI - import logging - - from colorama import Fore - - from autogpt.agent.agent import Agent - from autogpt.config import Config, check_openai_api_key - from autogpt.configurator import create_config - from autogpt.logs import logger - from autogpt.memory import get_memory - from autogpt.prompt import construct_prompt - from autogpt.utils import get_current_git_branch, get_latest_bulletin - - if ctx.invoked_subcommand is None: - cfg = Config() - # TODO: fill in llm values here - check_openai_api_key() - create_config( - continuous, - continuous_limit, - ai_settings, - skip_reprompt, - speak, - debug, - gpt3only, - gpt4only, - memory_type, - browser_name, - allow_downloads, - skip_news, - ) - logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) - ai_name = "" - if not cfg.skip_news: - motd = get_latest_bulletin() - if motd: - logger.typewriter_log("NEWS: ", Fore.GREEN, motd) - git_branch = get_current_git_branch() - if git_branch and git_branch != "stable": - logger.typewriter_log( - "WARNING: ", - Fore.RED, - f"You are running on `{git_branch}` branch " - "- this is not a supported branch.", - ) - system_prompt = construct_prompt() - # print(prompt) - # Initialize variables - full_message_history = [] - next_action_count = 0 - # Make a constant: - triggering_prompt = ( - "Determine which next command to use, and respond using the" - " format specified above:" - ) - # Initialize memory and make sure it is empty. - # this is particularly important for indexing and referencing pinecone memory - memory = get_memory(cfg, init=True) - logger.typewriter_log( - "Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}" - ) - logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser) - agent = Agent( - ai_name=ai_name, - memory=memory, - full_message_history=full_message_history, - next_action_count=next_action_count, - system_prompt=system_prompt, - triggering_prompt=triggering_prompt, - ) - agent.start_interaction_loop() - - -if __name__ == "__main__": - main() diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/non_leaking.py b/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/non_leaking.py deleted file mode 100644 index d0447535fed22d3ad4ac719b2b5ac6b7c58e6435..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/non_leaking.py +++ /dev/null @@ -1,469 +0,0 @@ -import math - -import torch -from torch import autograd -from torch.nn import functional as F -import numpy as np - -from model.stylegan.distributed import reduce_sum -from model.stylegan.op import upfirdn2d - - -class AdaptiveAugment: - def __init__(self, ada_aug_target, ada_aug_len, update_every, device): - self.ada_aug_target = ada_aug_target - self.ada_aug_len = ada_aug_len - self.update_every = update_every - - self.ada_update = 0 - self.ada_aug_buf = torch.tensor([0.0, 0.0], device=device) - self.r_t_stat = 0 - self.ada_aug_p = 0 - - @torch.no_grad() - def tune(self, real_pred): - self.ada_aug_buf += torch.tensor( - (torch.sign(real_pred).sum().item(), real_pred.shape[0]), - device=real_pred.device, - ) - self.ada_update += 1 - - if self.ada_update % self.update_every == 0: - self.ada_aug_buf = reduce_sum(self.ada_aug_buf) - pred_signs, n_pred = self.ada_aug_buf.tolist() - - self.r_t_stat = pred_signs / n_pred - - if self.r_t_stat > self.ada_aug_target: - sign = 1 - - else: - sign = -1 - - self.ada_aug_p += sign * n_pred / self.ada_aug_len - self.ada_aug_p 
= min(1, max(0, self.ada_aug_p)) - self.ada_aug_buf.mul_(0) - self.ada_update = 0 - - return self.ada_aug_p - - -SYM6 = ( - 0.015404109327027373, - 0.0034907120842174702, - -0.11799011114819057, - -0.048311742585633, - 0.4910559419267466, - 0.787641141030194, - 0.3379294217276218, - -0.07263752278646252, - -0.021060292512300564, - 0.04472490177066578, - 0.0017677118642428036, - -0.007800708325034148, -) - - -def translate_mat(t_x, t_y, device="cpu"): - batch = t_x.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y), 1) - mat[:, :2, 2] = translate - - return mat - - -def rotate_mat(theta, device="cpu"): - batch = theta.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - sin_t = torch.sin(theta) - cos_t = torch.cos(theta) - rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2) - mat[:, :2, :2] = rot - - return mat - - -def scale_mat(s_x, s_y, device="cpu"): - batch = s_x.shape[0] - - mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - - return mat - - -def translate3d_mat(t_x, t_y, t_z): - batch = t_x.shape[0] - - mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y, t_z), 1) - mat[:, :3, 3] = translate - - return mat - - -def rotate3d_mat(axis, theta): - batch = theta.shape[0] - - u_x, u_y, u_z = axis - - eye = torch.eye(3).unsqueeze(0) - cross = torch.tensor([(0, -u_z, u_y), (u_z, 0, -u_x), (-u_y, u_x, 0)]).unsqueeze(0) - outer = torch.tensor(axis) - outer = (outer.unsqueeze(1) * outer).unsqueeze(0) - - sin_t = torch.sin(theta).view(-1, 1, 1) - cos_t = torch.cos(theta).view(-1, 1, 1) - - rot = cos_t * eye + sin_t * cross + (1 - cos_t) * outer - - eye_4 = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - eye_4[:, :3, :3] = rot - - return eye_4 - - -def scale3d_mat(s_x, s_y, s_z): - batch = s_x.shape[0] - - mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - mat[:, 2, 2] = s_z - - return mat - - -def luma_flip_mat(axis, i): - batch = i.shape[0] - - eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - axis = torch.tensor(axis + (0,)) - flip = 2 * torch.ger(axis, axis) * i.view(-1, 1, 1) - - return eye - flip - - -def saturation_mat(axis, i): - batch = i.shape[0] - - eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1) - axis = torch.tensor(axis + (0,)) - axis = torch.ger(axis, axis) - saturate = axis + (eye - axis) * i.view(-1, 1, 1) - - return saturate - - -def lognormal_sample(size, mean=0, std=1, device="cpu"): - return torch.empty(size, device=device).log_normal_(mean=mean, std=std) - - -def category_sample(size, categories, device="cpu"): - category = torch.tensor(categories, device=device) - sample = torch.randint(high=len(categories), size=(size,), device=device) - - return category[sample] - - -def uniform_sample(size, low, high, device="cpu"): - return torch.empty(size, device=device).uniform_(low, high) - - -def normal_sample(size, mean=0, std=1, device="cpu"): - return torch.empty(size, device=device).normal_(mean, std) - - -def bernoulli_sample(size, p, device="cpu"): - return torch.empty(size, device=device).bernoulli_(p) - - -def random_mat_apply(p, transform, prev, eye, device="cpu"): - size = transform.shape[0] - select = bernoulli_sample(size, p, device=device).view(size, 1, 1) - select_transform = select * transform + (1 - select) * eye - - return select_transform @ prev - - -def sample_affine(p, size, height, width, device="cpu"): 
- G = torch.eye(3, device=device).unsqueeze(0).repeat(size, 1, 1) - eye = G - - # flip - param = category_sample(size, (0, 1)) - Gc = scale_mat(1 - 2.0 * param, torch.ones(size), device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n') - - # 90 rotate - #param = category_sample(size, (0, 3)) - #Gc = rotate_mat(-math.pi / 2 * param, device=device) - #G = random_mat_apply(p, Gc, G, eye, device=device) - # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n') - - # integer translate - param = uniform_sample(size, -0.125, 0.125) - param_height = torch.round(param * height) / height - param_width = torch.round(param * width) / width - Gc = translate_mat(param_width, param_height, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('integer translate', G, translate_mat(param_width, param_height), sep='\n') - - # isotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, param, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('isotropic scale', G, scale_mat(param, param), sep='\n') - - p_rot = 1 - math.sqrt(1 - p) - - # pre-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param, device=device) - G = random_mat_apply(p_rot, Gc, G, eye, device=device) - # print('pre-rotate', G, rotate_mat(-param), sep='\n') - - # anisotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, 1 / param, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n') - - # post-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param, device=device) - G = random_mat_apply(p_rot, Gc, G, eye, device=device) - # print('post-rotate', G, rotate_mat(-param), sep='\n') - - # fractional translate - param = normal_sample(size, std=0.125) - Gc = translate_mat(param, param, device=device) - G = random_mat_apply(p, Gc, G, eye, device=device) - # print('fractional translate', G, translate_mat(param, param), sep='\n') - - return G - - -def sample_color(p, size): - C = torch.eye(4).unsqueeze(0).repeat(size, 1, 1) - eye = C - axis_val = 1 / math.sqrt(3) - axis = (axis_val, axis_val, axis_val) - - # brightness - param = normal_sample(size, std=0.2) - Cc = translate3d_mat(param, param, param) - C = random_mat_apply(p, Cc, C, eye) - - # contrast - param = lognormal_sample(size, std=0.5 * math.log(2)) - Cc = scale3d_mat(param, param, param) - C = random_mat_apply(p, Cc, C, eye) - - # luma flip - param = category_sample(size, (0, 1)) - Cc = luma_flip_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - # hue rotation - param = uniform_sample(size, -math.pi, math.pi) - Cc = rotate3d_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - # saturation - param = lognormal_sample(size, std=1 * math.log(2)) - Cc = saturation_mat(axis, param) - C = random_mat_apply(p, Cc, C, eye) - - return C - - -def make_grid(shape, x0, x1, y0, y1, device): - n, c, h, w = shape - grid = torch.empty(n, h, w, 3, device=device) - grid[:, :, :, 0] = torch.linspace(x0, x1, w, device=device) - grid[:, :, :, 1] = torch.linspace(y0, y1, h, device=device).unsqueeze(-1) - grid[:, :, :, 2] = 1 - - return grid - - -def affine_grid(grid, mat): - n, h, w, _ = grid.shape - return (grid.view(n, h * w, 3) @ mat.transpose(1, 2)).view(n, h, w, 2) - - -def get_padding(G, height, width, kernel_size): - device = G.device 
- - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = torch.tensor( - [(-cx, -cy, 1), (cx, -cy, 1), (cx, cy, 1), (-cx, cy, 1)], device=device - ) - cp = G @ cp.T - - pad_k = kernel_size // 4 - - pad = cp[:, :2, :].permute(1, 0, 2).flatten(1) - pad = torch.cat((-pad, pad)).max(1).values - pad = pad + torch.tensor([pad_k * 2 - cx, pad_k * 2 - cy] * 2, device=device) - pad = pad.max(torch.tensor([0, 0] * 2, device=device)) - pad = pad.min(torch.tensor([width - 1, height - 1] * 2, device=device)) - - pad_x1, pad_y1, pad_x2, pad_y2 = pad.ceil().to(torch.int32) - - return pad_x1, pad_x2, pad_y1, pad_y2 - - -def try_sample_affine_and_pad(img, p, kernel_size, G=None): - batch, _, height, width = img.shape - - G_try = G - - if G is None: - G_try = torch.inverse(sample_affine(p, batch, height, width)) - - pad_x1, pad_x2, pad_y1, pad_y2 = get_padding(G_try, height, width, kernel_size) - - img_pad = F.pad(img, (pad_x1, pad_x2, pad_y1, pad_y2), mode="reflect") - - return img_pad, G_try, (pad_x1, pad_x2, pad_y1, pad_y2) - - -class GridSampleForward(autograd.Function): - @staticmethod - def forward(ctx, input, grid): - out = F.grid_sample( - input, grid, mode="bilinear", padding_mode="zeros", align_corners=False - ) - ctx.save_for_backward(input, grid) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid) - - return grad_input, grad_grid - - -class GridSampleBackward(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation("aten::grid_sampler_2d_backward") - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad_grad_input, grad_grad_grid): - grid, = ctx.saved_tensors - grad_grad_output = None - - if ctx.needs_input_grad[0]: - grad_grad_output = GridSampleForward.apply(grad_grad_input, grid) - - return grad_grad_output, None, None - - -grid_sample = GridSampleForward.apply - - -def scale_mat_single(s_x, s_y): - return torch.tensor(((s_x, 0, 0), (0, s_y, 0), (0, 0, 1)), dtype=torch.float32) - - -def translate_mat_single(t_x, t_y): - return torch.tensor(((1, 0, t_x), (0, 1, t_y), (0, 0, 1)), dtype=torch.float32) - - -def random_apply_affine(img, p, G=None, antialiasing_kernel=SYM6): - kernel = antialiasing_kernel - len_k = len(kernel) - - kernel = torch.as_tensor(kernel).to(img) - # kernel = torch.ger(kernel, kernel).to(img) - kernel_flip = torch.flip(kernel, (0,)) - - img_pad, G, (pad_x1, pad_x2, pad_y1, pad_y2) = try_sample_affine_and_pad( - img, p, len_k, G - ) - - G_inv = ( - translate_mat_single((pad_x1 - pad_x2).item() / 2, (pad_y1 - pad_y2).item() / 2) - @ G - ) - up_pad = ( - (len_k + 2 - 1) // 2, - (len_k - 2) // 2, - (len_k + 2 - 1) // 2, - (len_k - 2) // 2, - ) - img_2x = upfirdn2d(img_pad, kernel.unsqueeze(0), up=(2, 1), pad=(*up_pad[:2], 0, 0)) - img_2x = upfirdn2d(img_2x, kernel.unsqueeze(1), up=(1, 2), pad=(0, 0, *up_pad[2:])) - G_inv = scale_mat_single(2, 2) @ G_inv @ scale_mat_single(1 / 2, 1 / 2) - G_inv = translate_mat_single(-0.5, -0.5) @ G_inv @ translate_mat_single(0.5, 0.5) - batch_size, channel, height, width = img.shape - pad_k = len_k // 4 - shape = (batch_size, channel, (height + pad_k * 2) * 2, (width + pad_k * 2) * 2) - G_inv = ( - scale_mat_single(2 / img_2x.shape[3], 2 / img_2x.shape[2]) - @ G_inv - @ scale_mat_single(1 / (2 / shape[3]), 1 / (2 / shape[2])) - ) - grid = 
F.affine_grid(G_inv[:, :2, :].to(img_2x), shape, align_corners=False) - img_affine = grid_sample(img_2x, grid) - d_p = -pad_k * 2 - down_pad = ( - d_p + (len_k - 2 + 1) // 2, - d_p + (len_k - 2) // 2, - d_p + (len_k - 2 + 1) // 2, - d_p + (len_k - 2) // 2, - ) - img_down = upfirdn2d( - img_affine, kernel_flip.unsqueeze(0), down=(2, 1), pad=(*down_pad[:2], 0, 0) - ) - img_down = upfirdn2d( - img_down, kernel_flip.unsqueeze(1), down=(1, 2), pad=(0, 0, *down_pad[2:]) - ) - - return img_down, G - - -def apply_color(img, mat): - batch = img.shape[0] - img = img.permute(0, 2, 3, 1) - mat_mul = mat[:, :3, :3].transpose(1, 2).view(batch, 1, 3, 3) - mat_add = mat[:, :3, 3].view(batch, 1, 1, 3) - img = img @ mat_mul + mat_add - img = img.permute(0, 3, 1, 2) - - return img - - -def random_apply_color(img, p, C=None): - if C is None: - C = sample_color(p, img.shape[0]) - - img = apply_color(img, C.to(img)) - - return img, C - - -def augment(img, p, transform_matrix=(None, None)): - img, G = random_apply_affine(img, p, transform_matrix[0]) - if img.shape[1] == 3: - img, C = random_apply_color(img, p, transform_matrix[1]) - else: - tmp, C = random_apply_color(img[:,0:3], p, transform_matrix[1]) - img = torch.cat((tmp, img[:,3:]), dim=1) - - return img, (G, C) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/converters.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/converters.py deleted file mode 100644 index edfa8d3c16ac8642773651778012a3cd57005d9b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attrs/converters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.converters import * # noqa diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ufoLib/etree.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ufoLib/etree.py deleted file mode 100644 index 5054f8169a0dd42599aecbfac779f15d171f3b61..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ufoLib/etree.py +++ /dev/null @@ -1,5 +0,0 @@ -"""DEPRECATED - This module is kept here only as a backward compatibility shim -for the old ufoLib.etree module, which was moved to fontTools.misc.etree. -Please use the latter instead. -""" -from fontTools.misc.etree import * diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/mapping.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/mapping.py deleted file mode 100644 index 74cc7b9f2fe118fac02379db4181c53d11fbbbea..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/mapping.py +++ /dev/null @@ -1,239 +0,0 @@ -import array -import posixpath -import warnings -from collections.abc import MutableMapping - -from .core import url_to_fs - - -class FSMap(MutableMapping): - """Wrap a FileSystem instance as a mutable wrapping. - - The keys of the mapping become files under the given root, and the - values (which must be bytes) the contents of those files. - - Parameters - ---------- - root: string - prefix for all the files - fs: FileSystem instance - check: bool (=True) - performs a touch at the location, to check for write access. 
- - Examples - -------- - >>> fs = FileSystem(**parameters) # doctest: +SKIP - >>> d = FSMap('my-data/path/', fs) # doctest: +SKIP - or, more likely - >>> d = fs.get_mapper('my-data/path/') - - >>> d['loc1'] = b'Hello World' # doctest: +SKIP - >>> list(d.keys()) # doctest: +SKIP - ['loc1'] - >>> d['loc1'] # doctest: +SKIP - b'Hello World' - """ - - def __init__(self, root, fs, check=False, create=False, missing_exceptions=None): - self.fs = fs - self.root = fs._strip_protocol(root).rstrip("/") - self._root_key_to_str = fs._strip_protocol(posixpath.join(root, "x"))[:-1] - if missing_exceptions is None: - missing_exceptions = ( - FileNotFoundError, - IsADirectoryError, - NotADirectoryError, - ) - self.missing_exceptions = missing_exceptions - self.check = check - self.create = create - if create: - if not self.fs.exists(root): - self.fs.mkdir(root) - if check: - if not self.fs.exists(root): - raise ValueError( - "Path %s does not exist. Create " - " with the ``create=True`` keyword" % root - ) - self.fs.touch(root + "/a") - self.fs.rm(root + "/a") - - def clear(self): - """Remove all keys below root - empties out mapping""" - try: - self.fs.rm(self.root, True) - self.fs.mkdir(self.root) - except: # noqa: E722 - pass - - def getitems(self, keys, on_error="raise"): - """Fetch multiple items from the store - - If the backend is async-able, this might proceed concurrently - - Parameters - ---------- - keys: list(str) - They keys to be fetched - on_error : "raise", "omit", "return" - If raise, an underlying exception will be raised (converted to KeyError - if the type is in self.missing_exceptions); if omit, keys with exception - will simply not be included in the output; if "return", all keys are - included in the output, but the value will be bytes or an exception - instance. 
- - Returns - ------- - dict(key, bytes|exception) - """ - keys2 = [self._key_to_str(k) for k in keys] - oe = on_error if on_error == "raise" else "return" - try: - out = self.fs.cat(keys2, on_error=oe) - if isinstance(out, bytes): - out = {keys2[0]: out} - except self.missing_exceptions as e: - raise KeyError from e - out = { - k: (KeyError() if isinstance(v, self.missing_exceptions) else v) - for k, v in out.items() - } - return { - key: out[k2] - for key, k2 in zip(keys, keys2) - if on_error == "return" or not isinstance(out[k2], BaseException) - } - - def setitems(self, values_dict): - """Set the values of multiple items in the store - - Parameters - ---------- - values_dict: dict(str, bytes) - """ - values = {self._key_to_str(k): maybe_convert(v) for k, v in values_dict.items()} - self.fs.pipe(values) - - def delitems(self, keys): - """Remove multiple keys from the store""" - self.fs.rm([self._key_to_str(k) for k in keys]) - - def _key_to_str(self, key): - """Generate full path for the key""" - if not isinstance(key, str): - # raise TypeError("key must be of type `str`, got `{type(key).__name__}`" - warnings.warn( - "from fsspec 2023.5 onward FSMap non-str keys will raise TypeError", - DeprecationWarning, - ) - if isinstance(key, list): - key = tuple(key) - key = str(key) - return f"{self._root_key_to_str}{key}" - - def _str_to_key(self, s): - """Strip path of to leave key name""" - return s[len(self.root) :].lstrip("/") - - def __getitem__(self, key, default=None): - """Retrieve data""" - k = self._key_to_str(key) - try: - result = self.fs.cat(k) - except self.missing_exceptions: - if default is not None: - return default - raise KeyError(key) - return result - - def pop(self, key, default=None): - """Pop data""" - result = self.__getitem__(key, default) - try: - del self[key] - except KeyError: - pass - return result - - def __setitem__(self, key, value): - """Store value in key""" - key = self._key_to_str(key) - self.fs.mkdirs(self.fs._parent(key), exist_ok=True) - self.fs.pipe_file(key, maybe_convert(value)) - - def __iter__(self): - return (self._str_to_key(x) for x in self.fs.find(self.root)) - - def __len__(self): - return len(self.fs.find(self.root)) - - def __delitem__(self, key): - """Remove key""" - try: - self.fs.rm(self._key_to_str(key)) - except: # noqa: E722 - raise KeyError - - def __contains__(self, key): - """Does key exist in mapping?""" - path = self._key_to_str(key) - return self.fs.exists(path) and self.fs.isfile(path) - - def __reduce__(self): - return FSMap, (self.root, self.fs, False, False, self.missing_exceptions) - - -def maybe_convert(value): - if isinstance(value, array.array) or hasattr(value, "__array__"): - # bytes-like things - if hasattr(value, "dtype") and value.dtype.kind in "Mm": - # The buffer interface doesn't support datetime64/timdelta64 numpy - # arrays - value = value.view("int64") - value = bytes(memoryview(value)) - return value - - -def get_mapper( - url="", - check=False, - create=False, - missing_exceptions=None, - alternate_root=None, - **kwargs, -): - """Create key-value interface for given URL and options - - The URL will be of the form "protocol://location" and point to the root - of the mapper required. All keys will be file-names below this location, - and their values the contents of each key. - - Also accepts compound URLs like zip::s3://bucket/file.zip , see ``fsspec.open``. 
- - Parameters - ---------- - url: str - Root URL of mapping - check: bool - Whether to attempt to read from the location before instantiation, to - check that the mapping does exist - create: bool - Whether to make the directory corresponding to the root before - instantiating - missing_exceptions: None or tuple - If given, these exception types will be regarded as missing keys and - return KeyError when trying to read data. By default, you get - (FileNotFoundError, IsADirectoryError, NotADirectoryError) - alternate_root: None or str - In cases of complex URLs, the parser may fail to pick the correct part - for the mapper root, so this arg can override - - Returns - ------- - ``FSMap`` instance, the dict-like key-value store. - """ - # Removing protocol here - could defer to each open() on the backend - fs, urlpath = url_to_fs(url, **kwargs) - root = alternate_root if alternate_root is not None else urlpath - return FSMap(root, fs, check, create, missing_exceptions=missing_exceptions) diff --git a/spaces/lc202301/ChuanhuChatGPT/utils.py b/spaces/lc202301/ChuanhuChatGPT/utils.py deleted file mode 100644 index 8eeabfe5bfc3a80e4c875c778426608f66ce41da..0000000000000000000000000000000000000000 --- a/spaces/lc202301/ChuanhuChatGPT/utils.py +++ /dev/null @@ -1,389 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter - -from presets import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
{highlighted_code}
' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - return result - -def convert_user(userinput): - userinput = userinput.replace("\n", "
") - return f"
{userinput}
" - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(token, stream=False): - return f"Token 计数: {token}" - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(sum(previous_token_count)), - ) - - -def save_file(filename, system, history, chatbot): - logging.info("保存对话历史中……") - os.makedirs(HISTORY_DIR, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info("保存对话历史完毕") - return os.path.join(HISTORY_DIR, filename) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot) - - -def export_markdown(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot) - - -def load_chat_history(filename, system, history, chatbot): - logging.info("加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info("加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info("没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = 
sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False): - logging.info("获取历史记录文件名列表") - return get_file_names(HISTORY_DIR, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices, value=choices[0] - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message(0) - - -def reset_textbox(): - return gr.update(value="") - - -def reset_default(): - global API_URL - API_URL = "https://api.openai.com/v1/chat/completions" - os.environ.pop("HTTPS_PROXY", None) - os.environ.pop("https_proxy", None) - return gr.update(value=API_URL), gr.update(value=""), "API URL 和代理已重置" - - -def change_api_url(url): - global API_URL - API_URL = url - msg = f"API地址更改为了{url}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def sha1sum(filename): - sha1 = hashlib.sha1() - sha1.update(filename.encode("utf-8")) - return sha1.hexdigest() - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - response = requests.get("https://ipapi.co/json/", timeout=5) - try: - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i -1 - total = total - lst[i] - return 1 diff --git 
a/spaces/leafShen/CodeFormer/CodeFormer/facelib/utils/face_utils.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/utils/face_utils.py deleted file mode 100644 index f1474a2a4419b6b62fab8a919ef805b802556464..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/utils/face_utils.py +++ /dev/null @@ -1,248 +0,0 @@ -import cv2 -import numpy as np -import torch - - -def compute_increased_bbox(bbox, increase_area, preserve_aspect=True): - left, top, right, bot = bbox - width = right - left - height = bot - top - - if preserve_aspect: - width_increase = max(increase_area, ((1 + 2 * increase_area) * height - width) / (2 * width)) - height_increase = max(increase_area, ((1 + 2 * increase_area) * width - height) / (2 * height)) - else: - width_increase = height_increase = increase_area - left = int(left - width_increase * width) - top = int(top - height_increase * height) - right = int(right + width_increase * width) - bot = int(bot + height_increase * height) - return (left, top, right, bot) - - -def get_valid_bboxes(bboxes, h, w): - left = max(bboxes[0], 0) - top = max(bboxes[1], 0) - right = min(bboxes[2], w) - bottom = min(bboxes[3], h) - return (left, top, right, bottom) - - -def align_crop_face_landmarks(img, - landmarks, - output_size, - transform_size=None, - enable_padding=True, - return_inverse_affine=False, - shrink_ratio=(1, 1)): - """Align and crop face with landmarks. - - The output_size and transform_size are based on width. The height is - adjusted based on shrink_ratio_h/shring_ration_w. - - Modified from: - https://github.com/NVlabs/ffhq-dataset/blob/master/download_ffhq.py - - Args: - img (Numpy array): Input image. - landmarks (Numpy array): 5 or 68 or 98 landmarks. - output_size (int): Output face size. - transform_size (ing): Transform size. Usually the four time of - output_size. - enable_padding (float): Default: True. - shrink_ratio (float | tuple[float] | list[float]): Shring the whole - face for height and width (crop larger area). Default: (1, 1). - - Returns: - (Numpy array): Cropped face. 
- """ - lm_type = 'retinaface_5' # Options: dlib_5, retinaface_5 - - if isinstance(shrink_ratio, (float, int)): - shrink_ratio = (shrink_ratio, shrink_ratio) - if transform_size is None: - transform_size = output_size * 4 - - # Parse landmarks - lm = np.array(landmarks) - if lm.shape[0] == 5 and lm_type == 'retinaface_5': - eye_left = lm[0] - eye_right = lm[1] - mouth_avg = (lm[3] + lm[4]) * 0.5 - elif lm.shape[0] == 5 and lm_type == 'dlib_5': - lm_eye_left = lm[2:4] - lm_eye_right = lm[0:2] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = lm[4] - elif lm.shape[0] == 68: - lm_eye_left = lm[36:42] - lm_eye_right = lm[42:48] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[48] + lm[54]) * 0.5 - elif lm.shape[0] == 98: - lm_eye_left = lm[60:68] - lm_eye_right = lm[68:76] - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - mouth_avg = (lm[76] + lm[82]) * 0.5 - - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - eye_to_mouth = mouth_avg - eye_avg - - # Get the oriented crop rectangle - # x: half width of the oriented crop rectangle - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - # - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise - # norm with the hypotenuse: get the direction - x /= np.hypot(*x) # get the hypotenuse of a right triangle - rect_scale = 1 # TODO: you can edit it to get larger rect - x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale) - # y: half height of the oriented crop rectangle - y = np.flipud(x) * [-1, 1] - - x *= shrink_ratio[1] # width - y *= shrink_ratio[0] # height - - # c: center - c = eye_avg + eye_to_mouth * 0.1 - # quad: (left_top, left_bottom, right_bottom, right_top) - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - # qsize: side length of the square - qsize = np.hypot(*x) * 2 - - quad_ori = np.copy(quad) - # Shrink, for large face - # TODO: do we really need shrink - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - h, w = img.shape[0:2] - rsize = (int(np.rint(float(w) / shrink)), int(np.rint(float(h) / shrink))) - img = cv2.resize(img, rsize, interpolation=cv2.INTER_AREA) - quad /= shrink - qsize /= shrink - - # Crop - h, w = img.shape[0:2] - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, w), min(crop[3] + border, h)) - if crop[2] - crop[0] < w or crop[3] - crop[1] < h: - img = img[crop[1]:crop[3], crop[0]:crop[2], :] - quad -= crop[0:2] - - # Pad - # pad: (width_left, height_top, width_right, height_bottom) - h, w = img.shape[0:2] - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - w + border, 0), max(pad[3] - h + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w = img.shape[0:2] - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = 
int(qsize * 0.02) - if blur % 2 == 0: - blur += 1 - blur_img = cv2.boxFilter(img, 0, ksize=(blur, blur)) - - img = img.astype('float32') - img += (blur_img - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.clip(img, 0, 255) # float32, [0, 255] - quad += pad[:2] - - # Transform use cv2 - h_ratio = shrink_ratio[0] / shrink_ratio[1] - dst_h, dst_w = int(transform_size * h_ratio), transform_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D(quad, template, method=cv2.LMEDS)[0] - cropped_face = cv2.warpAffine( - img, affine_matrix, (dst_w, dst_h), borderMode=cv2.BORDER_CONSTANT, borderValue=(135, 133, 132)) # gray - - if output_size < transform_size: - cropped_face = cv2.resize( - cropped_face, (output_size, int(output_size * h_ratio)), interpolation=cv2.INTER_LINEAR) - - if return_inverse_affine: - dst_h, dst_w = int(output_size * h_ratio), output_size - template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]]) - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D( - quad_ori, np.array([[0, 0], [0, output_size], [dst_w, dst_h], [dst_w, 0]]), method=cv2.LMEDS)[0] - inverse_affine = cv2.invertAffineTransform(affine_matrix) - else: - inverse_affine = None - return cropped_face, inverse_affine - - -def paste_face_back(img, face, inverse_affine): - h, w = img.shape[0:2] - face_h, face_w = face.shape[0:2] - inv_restored = cv2.warpAffine(face, inverse_affine, (w, h)) - mask = np.ones((face_h, face_w, 3), dtype=np.float32) - inv_mask = cv2.warpAffine(mask, inverse_affine, (w, h)) - # remove the black borders - inv_mask_erosion = cv2.erode(inv_mask, np.ones((2, 2), np.uint8)) - inv_restored_remove_border = inv_mask_erosion * inv_restored - total_face_area = np.sum(inv_mask_erosion) // 3 - # compute the fusion edge based on the area of face - w_edge = int(total_face_area**0.5) // 20 - erosion_radius = w_edge * 2 - inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - blur_size = w_edge * 2 - inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - img = inv_soft_mask * inv_restored_remove_border + (1 - inv_soft_mask) * img - # float32, [0, 255] - return img - - -if __name__ == '__main__': - import os - - from facelib.detection import init_detection_model - from facelib.utils.face_restoration_helper import get_largest_face - - img_path = '/home/wxt/datasets/ffhq/ffhq_wild/00009.png' - img_name = os.splitext(os.path.basename(img_path))[0] - - # initialize model - det_net = init_detection_model('retinaface_resnet50', half=False) - img_ori = cv2.imread(img_path) - h, w = img_ori.shape[0:2] - # if larger than 800, scale it - scale = max(h / 800, w / 800) - if scale > 1: - img = cv2.resize(img_ori, (int(w / scale), int(h / scale)), interpolation=cv2.INTER_LINEAR) - - with torch.no_grad(): - bboxes = det_net.detect_faces(img, 0.97) - if scale > 1: - bboxes *= scale # the score is incorrect - bboxes = get_largest_face(bboxes, h, w)[0] - - landmarks = np.array([[bboxes[i], bboxes[i + 1]] for i in range(5, 15, 2)]) - - cropped_face, inverse_affine = align_crop_face_landmarks( - img_ori, - landmarks, - output_size=512, - 
transform_size=None, - enable_padding=True, - return_inverse_affine=True, - shrink_ratio=(1, 1)) - - cv2.imwrite(f'tmp/{img_name}_cropeed_face.png', cropped_face) - img = paste_face_back(img_ori, cropped_face, inverse_affine) - cv2.imwrite(f'tmp/{img_name}_back.png', img) diff --git "a/spaces/leogabraneth/text-generation-webui-main/docs/08 \342\200\220 Additional Tips.md" "b/spaces/leogabraneth/text-generation-webui-main/docs/08 \342\200\220 Additional Tips.md" deleted file mode 100644 index 89675ccac64ee18ec6a753026cef9afa20c5d8f5..0000000000000000000000000000000000000000 --- "a/spaces/leogabraneth/text-generation-webui-main/docs/08 \342\200\220 Additional Tips.md" +++ /dev/null @@ -1,155 +0,0 @@ -## Audio notification - -If your computer takes a long time to generate each response for the model that you are using, you can enable an audio notification for when the response is completed. This feature was kindly contributed by HappyWorldGames in [#1277](https://github.com/oobabooga/text-generation-webui/pull/1277). - -### Installation - -Simply place a file called "notification.mp3" in the same folder as `server.py`. Here you can find some examples: - -* https://pixabay.com/sound-effects/search/ding/?duration=0-30 -* https://pixabay.com/sound-effects/search/notification/?duration=0-30 - -Source: https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/1126 - -This file will be automatically detected the next time you start the web UI. - -## Using LoRAs with GPTQ-for-LLaMa - -This requires using a monkey patch that is supported by this web UI: https://github.com/johnsmith0031/alpaca_lora_4bit - -To use it: - -Install alpaca_lora_4bit using pip - -``` -git clone https://github.com/johnsmith0031/alpaca_lora_4bit.git -cd alpaca_lora_4bit -git fetch origin winglian-setup_pip -git checkout winglian-setup_pip -pip install . -``` - -Start the UI with the --monkey-patch flag: - -``` -python server.py --model llama-7b-4bit-128g --listen --lora tloen_alpaca-lora-7b --monkey-patch -``` - -## DeepSpeed - -`DeepSpeed ZeRO-3` is an alternative offloading strategy for full-precision (16-bit) transformers models. - -With this, I have been able to load a 6b model (GPT-J 6B) with less than 6GB of VRAM. The speed of text generation is very decent and much better than what would be accomplished with `--auto-devices --gpu-memory 6`. - -As far as I know, DeepSpeed is only available for Linux at the moment. - -### How to use it - -1. Install DeepSpeed: - -``` -conda install -c conda-forge mpi4py mpich -pip install -U deepspeed -``` - -2. Start the web UI replacing `python` with `deepspeed --num_gpus=1` and adding the `--deepspeed` flag. Example: - -``` -deepspeed --num_gpus=1 server.py --deepspeed --chat --model gpt-j-6B -``` - -> RWKV: RNN with Transformer-level LLM Performance -> -> It combines the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state). - -https://github.com/BlinkDL/RWKV-LM - -https://github.com/BlinkDL/ChatRWKV - -## Using RWKV in the web UI - -### Hugging Face weights - -Simply download the weights from https://huggingface.co/RWKV and load them as you would for any other model. - -There is a bug in transformers==4.29.2 that prevents RWKV from being loaded in 8-bit mode. 
You can install the dev branch to solve this bug: `pip install git+https://github.com/huggingface/transformers` - -### Original .pth weights - -The instructions below are from before RWKV was supported in transformers, and they are kept for legacy purposes. The old implementation is possibly faster, but it lacks the full range of samplers that the transformers library offers. - -#### 0. Install the RWKV library - -``` -pip install rwkv -``` - -`0.7.3` was the last version that I tested. If you experience any issues, try ```pip install rwkv==0.7.3```. - -#### 1. Download the model - -It is available in different sizes: - -* https://huggingface.co/BlinkDL/rwkv-4-pile-3b/ -* https://huggingface.co/BlinkDL/rwkv-4-pile-7b/ -* https://huggingface.co/BlinkDL/rwkv-4-pile-14b/ - -There are also older releases with smaller sizes like: - -* https://huggingface.co/BlinkDL/rwkv-4-pile-169m/resolve/main/RWKV-4-Pile-169M-20220807-8023.pth - -Download the chosen `.pth` and put it directly in the `models` folder. - -#### 2. Download the tokenizer - -[20B_tokenizer.json](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/v2/20B_tokenizer.json) - -Also put it directly in the `models` folder. Make sure to not rename it. It should be called `20B_tokenizer.json`. - -#### 3. Launch the web UI - -No additional steps are required. Just launch it as you would with any other model. - -``` -python server.py --listen --no-stream --model RWKV-4-Pile-169M-20220807-8023.pth -``` - -#### Setting a custom strategy - -It is possible to have very fine control over the offloading and precision for the model with the `--rwkv-strategy` flag. Possible values include: - -``` -"cpu fp32" # CPU mode -"cuda fp16" # GPU mode with float16 precision -"cuda fp16 *30 -> cpu fp32" # GPU+CPU offloading. The higher the number after *, the higher the GPU allocation. -"cuda fp16i8" # GPU mode with 8-bit precision -``` - -See the README for the PyPl package for more details: https://pypi.org/project/rwkv/ - -#### Compiling the CUDA kernel - -You can compile the CUDA kernel for the model with `--rwkv-cuda-on`. This should improve the performance a lot but I haven't been able to get it to work yet. - -## Miscellaneous info - -### You can train LoRAs in CPU mode - -Load the web UI with - -``` -python server.py --cpu -``` - -and start training the LoRA from the training tab as usual. - -### You can check the sha256sum of downloaded models with the download script - -``` -python download-model.py facebook/galactica-125m --check -``` - -### The download script continues interrupted downloads by default - -It doesn't start over. 
- diff --git a/spaces/leogabraneth/text-generation-webui-main/js/switch_tabs.js b/spaces/leogabraneth/text-generation-webui-main/js/switch_tabs.js deleted file mode 100644 index 75d563670dbd7a6d5e1b81eb5d38b025a868c01b..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/js/switch_tabs.js +++ /dev/null @@ -1,59 +0,0 @@ -let chat_tab = document.getElementById("chat-tab"); -let main_parent = chat_tab.parentNode; - -function scrollToTop() { - window.scrollTo({ - top: 0, - // behavior: 'smooth' - }); -} - -function findButtonsByText(buttonText) { - const buttons = document.getElementsByTagName("button"); - const matchingButtons = []; - buttonText = buttonText.trim(); - - for (let i = 0; i < buttons.length; i++) { - const button = buttons[i]; - const buttonInnerText = button.textContent.trim(); - - if (buttonInnerText === buttonText) { - matchingButtons.push(button); - } - } - - return matchingButtons; -} - -function switch_to_chat() { - let chat_tab_button = main_parent.childNodes[0].childNodes[1]; - chat_tab_button.click(); - scrollToTop(); -} - -function switch_to_default() { - let default_tab_button = main_parent.childNodes[0].childNodes[4]; - default_tab_button.click(); - scrollToTop(); -} - -function switch_to_notebook() { - let notebook_tab_button = main_parent.childNodes[0].childNodes[7]; - notebook_tab_button.click(); - findButtonsByText("Raw")[1].click(); - scrollToTop(); -} - -function switch_to_generation_parameters() { - let parameters_tab_button = main_parent.childNodes[0].childNodes[10]; - parameters_tab_button.click(); - findButtonsByText("Generation")[0].click(); - scrollToTop(); -} - -function switch_to_character() { - let parameters_tab_button = main_parent.childNodes[0].childNodes[10]; - parameters_tab_button.click(); - findButtonsByText("Character")[0].click(); - scrollToTop(); -} diff --git a/spaces/leurez/moss/src/store/index.ts b/spaces/leurez/moss/src/store/index.ts deleted file mode 100644 index ad01971d2140a7e9351442451c9680c20f0f8e48..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/store/index.ts +++ /dev/null @@ -1,10 +0,0 @@ -import type { App } from 'vue' -import { createPinia } from 'pinia' - -export const store = createPinia() - -export function setupStore(app: App) { - app.use(store) -} - -export * from './modules' diff --git a/spaces/liliyRehtina/color/models/network.py b/spaces/liliyRehtina/color/models/network.py deleted file mode 100644 index bd702e6cf6b3cc9092dc685bd8a65e12508b9636..0000000000000000000000000000000000000000 --- a/spaces/liliyRehtina/color/models/network.py +++ /dev/null @@ -1,352 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import init -import torchvision -import torch.nn.utils.spectral_norm as spectral_norm -import math - - -class ConvBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum, normLayer=None): - super(ConvBlock, self).__init__() - self.inConv = nn.Sequential( - nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1), - nn.ReLU(inplace=True) - ) - layers = [] - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - x = self.inConv(x) - x = self.conv(x) - return x - - -class ResidualBlock(nn.Module): - def __init__(self, channels, normLayer=None): - super(ResidualBlock, 
self).__init__() - layers = [] - layers.append(nn.Conv2d(channels, channels, kernel_size=3, padding=1)) - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - if not (normLayer is None): - layers.append(normLayer(channels)) - layers.append(nn.ReLU(inplace=True)) - layers.append(nn.Conv2d(channels, channels, kernel_size=3, padding=1)) - if not (normLayer is None): - layers.append(normLayer(channels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - residual = self.conv(x) - return F.relu(x + residual, inplace=True) - - -class ResidualBlockSN(nn.Module): - def __init__(self, channels, normLayer=None): - super(ResidualBlockSN, self).__init__() - layers = [] - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - layers.append(nn.LeakyReLU(0.2, True)) - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - if not (normLayer is None): - layers.append(normLayer(channels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - residual = self.conv(x) - return F.leaky_relu(x + residual, 2e-1, inplace=True) - - -class DownsampleBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(DownsampleBlock, self).__init__() - layers = [] - layers.append(nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1, stride=2)) - layers.append(nn.ReLU(inplace=True)) - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - return self.conv(x) - - -class UpsampleBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(UpsampleBlock, self).__init__() - self.conv1 = nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1, stride=1) - self.combine = nn.Conv2d(2 * outChannels, outChannels, kernel_size=3, padding=1) - layers = [] - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv2 = nn.Sequential(*layers) - - def forward(self, x, x0): - x = self.conv1(x) - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = self.combine(torch.cat((x, x0), 1)) - x = F.relu(x) - return self.conv2(x) - - -class UpsampleBlockSN(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(UpsampleBlockSN, self).__init__() - self.conv1 = spectral_norm(nn.Conv2d(inChannels, outChannels, kernel_size=3, stride=1, padding=1)) - self.shortcut = spectral_norm(nn.Conv2d(outChannels, outChannels, kernel_size=3, stride=1, padding=1)) - layers = [] - for _ in range(convNum - 1): - layers.append(spectral_norm(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1))) - layers.append(nn.LeakyReLU(0.2, True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv2 = nn.Sequential(*layers) - - def forward(self, x, x0): - x = self.conv1(x) - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = x + self.shortcut(x0) - x = F.leaky_relu(x, 2e-1) - return self.conv2(x) - - -class HourGlass2(nn.Module): - def __init__(self, inChannel=3, outChannel=1, resNum=3, normLayer=None): - super(HourGlass2, self).__init__() - self.inConv = ConvBlock(inChannel, 64, convNum=2, 
normLayer=normLayer) - self.down1 = DownsampleBlock(64, 128, convNum=2, normLayer=normLayer) - self.down2 = DownsampleBlock(128, 256, convNum=2, normLayer=normLayer) - self.residual = nn.Sequential(*[ResidualBlock(256) for _ in range(resNum)]) - self.up2 = UpsampleBlock(256, 128, convNum=3, normLayer=normLayer) - self.up1 = UpsampleBlock(128, 64, convNum=3, normLayer=normLayer) - self.outConv = nn.Conv2d(64, outChannel, kernel_size=3, padding=1) - - def forward(self, x): - f1 = self.inConv(x) - f2 = self.down1(f1) - f3 = self.down2(f2) - r3 = self.residual(f3) - r2 = self.up2(r3, f2) - r1 = self.up1(r2, f1) - y = self.outConv(r1) - return y - - -class ColorProbNet(nn.Module): - def __init__(self, inChannel=1, outChannel=2, with_SA=False): - super(ColorProbNet, self).__init__() - BNFunc = nn.BatchNorm2d - # conv1: 256 - conv1_2 = [spectral_norm(nn.Conv2d(inChannel, 64, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv1_2 += [spectral_norm(nn.Conv2d(64, 64, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv1_2 += [BNFunc(64, affine=True)] - # conv2: 128 - conv2_3 = [spectral_norm(nn.Conv2d(64, 128, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [spectral_norm(nn.Conv2d(128, 128, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [spectral_norm(nn.Conv2d(128, 128, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [BNFunc(128, affine=True)] - # conv3: 64 - conv3_3 = [spectral_norm(nn.Conv2d(128, 256, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - conv3_3 += [spectral_norm(nn.Conv2d(256, 256, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv3_3 += [spectral_norm(nn.Conv2d(256, 256, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv3_3 += [BNFunc(256, affine=True)] - # conv4: 32 - conv4_3 = [spectral_norm(nn.Conv2d(256, 512, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [BNFunc(512, affine=True)] - # conv5: 32 - conv5_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [BNFunc(512, affine=True)] - # conv6: 32 - conv6_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [BNFunc(512, affine=True),] - if with_SA: - conv6_3 += [Self_Attn(512)] - # conv7: 32 - conv7_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [BNFunc(512, affine=True)] - # conv8: 64 - conv8up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(512, 256, 3, stride=1, padding=1),] - conv3short8 = [nn.Conv2d(256, 256, 3, stride=1, padding=1),] - conv8_3 = [nn.ReLU(True),] - conv8_3 += [nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(True),] - conv8_3 += [nn.Conv2d(256, 256, 3, 
stride=1, padding=1), nn.ReLU(True),] - conv8_3 += [BNFunc(256, affine=True),] - # conv9: 128 - conv9up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(256, 128, 3, stride=1, padding=1),] - conv9_2 = [nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(True),] - conv9_2 += [BNFunc(128, affine=True)] - # conv10: 64 - conv10up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(128, 64, 3, stride=1, padding=1),] - conv10_2 = [nn.ReLU(True),] - conv10_2 += [nn.Conv2d(64, outChannel, 3, stride=1, padding=1), nn.ReLU(True),] - - self.conv1_2 = nn.Sequential(*conv1_2) - self.conv2_3 = nn.Sequential(*conv2_3) - self.conv3_3 = nn.Sequential(*conv3_3) - self.conv4_3 = nn.Sequential(*conv4_3) - self.conv5_3 = nn.Sequential(*conv5_3) - self.conv6_3 = nn.Sequential(*conv6_3) - self.conv7_3 = nn.Sequential(*conv7_3) - self.conv8up = nn.Sequential(*conv8up) - self.conv3short8 = nn.Sequential(*conv3short8) - self.conv8_3 = nn.Sequential(*conv8_3) - self.conv9up = nn.Sequential(*conv9up) - self.conv9_2 = nn.Sequential(*conv9_2) - self.conv10up = nn.Sequential(*conv10up) - self.conv10_2 = nn.Sequential(*conv10_2) - # claffificaton output - #self.model_class = nn.Sequential(*[nn.Conv2d(256, 313, kernel_size=1, padding=0, stride=1),]) - - def forward(self, input_grays): - f1_2 = self.conv1_2(input_grays) - f2_3 = self.conv2_3(f1_2) - f3_3 = self.conv3_3(f2_3) - f4_3 = self.conv4_3(f3_3) - f5_3 = self.conv5_3(f4_3) - f6_3 = self.conv6_3(f5_3) - f7_3 = self.conv7_3(f6_3) - f8_up = self.conv8up(f7_3) + self.conv3short8(f3_3) - f8_3 = self.conv8_3(f8_up) - f9_up = self.conv9up(f8_3) - f9_2 = self.conv9_2(f9_up) - f10_up = self.conv10up(f9_2) - f10_2 = self.conv10_2(f10_up) - out_feats = f10_2 - #out_probs = self.model_class(f8_3) - return out_feats - - - -def conv(batchNorm, in_planes, out_planes, kernel_size=3, stride=1): - if batchNorm: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=(kernel_size-1)//2, bias=False), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.1) - ) - else: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=(kernel_size-1)//2, bias=True), - nn.LeakyReLU(0.1) - ) - - -def deconv(in_planes, out_planes): - return nn.Sequential( - nn.ConvTranspose2d(in_planes, out_planes, kernel_size=4, stride=2, padding=1, bias=True), - nn.LeakyReLU(0.1) - ) - -class SpixelNet(nn.Module): - def __init__(self, inChannel=3, outChannel=9, batchNorm=True): - super(SpixelNet,self).__init__() - self.batchNorm = batchNorm - self.conv0a = conv(self.batchNorm, inChannel, 16, kernel_size=3) - self.conv0b = conv(self.batchNorm, 16, 16, kernel_size=3) - self.conv1a = conv(self.batchNorm, 16, 32, kernel_size=3, stride=2) - self.conv1b = conv(self.batchNorm, 32, 32, kernel_size=3) - self.conv2a = conv(self.batchNorm, 32, 64, kernel_size=3, stride=2) - self.conv2b = conv(self.batchNorm, 64, 64, kernel_size=3) - self.conv3a = conv(self.batchNorm, 64, 128, kernel_size=3, stride=2) - self.conv3b = conv(self.batchNorm, 128, 128, kernel_size=3) - self.conv4a = conv(self.batchNorm, 128, 256, kernel_size=3, stride=2) - self.conv4b = conv(self.batchNorm, 256, 256, kernel_size=3) - self.deconv3 = deconv(256, 128) - self.conv3_1 = conv(self.batchNorm, 256, 128) - self.deconv2 = deconv(128, 64) - self.conv2_1 = conv(self.batchNorm, 128, 64) - self.deconv1 = deconv(64, 32) - self.conv1_1 = conv(self.batchNorm, 64, 32) - self.deconv0 = deconv(32, 16) - self.conv0_1 = conv(self.batchNorm, 32, 16) - 
self.pred_mask0 = nn.Conv2d(16, outChannel, kernel_size=3, stride=1, padding=1, bias=True) - self.softmax = nn.Softmax(1) - for m in self.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): - init.kaiming_normal_(m.weight, 0.1) - if m.bias is not None: - init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - init.constant_(m.weight, 1) - init.constant_(m.bias, 0) - - def forward(self, x): - out1 = self.conv0b(self.conv0a(x)) #5*5 - out2 = self.conv1b(self.conv1a(out1)) #11*11 - out3 = self.conv2b(self.conv2a(out2)) #23*23 - out4 = self.conv3b(self.conv3a(out3)) #47*47 - out5 = self.conv4b(self.conv4a(out4)) #95*95 - out_deconv3 = self.deconv3(out5) - concat3 = torch.cat((out4, out_deconv3), 1) - out_conv3_1 = self.conv3_1(concat3) - out_deconv2 = self.deconv2(out_conv3_1) - concat2 = torch.cat((out3, out_deconv2), 1) - out_conv2_1 = self.conv2_1(concat2) - out_deconv1 = self.deconv1(out_conv2_1) - concat1 = torch.cat((out2, out_deconv1), 1) - out_conv1_1 = self.conv1_1(concat1) - out_deconv0 = self.deconv0(out_conv1_1) - concat0 = torch.cat((out1, out_deconv0), 1) - out_conv0_1 = self.conv0_1(concat0) - mask0 = self.pred_mask0(out_conv0_1) - prob0 = self.softmax(mask0) - return prob0 - - - -## VGG architecter, used for the perceptual loss using a pretrained VGG network -class VGG19(torch.nn.Module): - def __init__(self, requires_grad=False, local_pretrained_path='checkpoints/vgg19.pth'): - super().__init__() - #vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features - model = torchvision.models.vgg19() - model.load_state_dict(torch.load(local_pretrained_path)) - vgg_pretrained_features = model.features - - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/((TOP)) Xforce Keygen 64-bit 3ds Max 2014 Portable.md b/spaces/lincquiQcaudo/Top-20-Diffusion/((TOP)) Xforce Keygen 64-bit 3ds Max 2014 Portable.md deleted file mode 100644 index ad37bb40a7ba568697477e3a0dd533498247adbc..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/((TOP)) Xforce Keygen 64-bit 3ds Max 2014 Portable.md +++ /dev/null @@ -1,6 +0,0 @@ -

xforce keygen 64-bit 3ds Max 2014 portable


DOWNLOAD 🔗 https://bytlly.com/2uGyn6



-
          
-
-
-

diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/A First Course In Numerical Methods Solution 125.md b/spaces/lincquiQcaudo/Top-20-Diffusion/A First Course In Numerical Methods Solution 125.md deleted file mode 100644 index c93032e42dc49ac895e22f5883ca94054462b5b6..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/A First Course In Numerical Methods Solution 125.md +++ /dev/null @@ -1,44 +0,0 @@ -
-

A First Course in Numerical Methods Solution 125: How to Solve a System of Nonlinear Equations Using Newton's Method

- -

          If you are taking a first course in numerical methods, you may encounter a problem like this: how do you solve a system of nonlinear equations using Newton's method? In this article, we will explain what Newton's method is, how it works, and how to apply it to a system of nonlinear equations. We will also show you the solution to problem 125 from the textbook "A First Course in Numerical Methods" by Ascher and Greif.
          

-

          



- -

What is Newton's Method?

- -

Newton's method is a numerical technique for finding the roots of a function, that is, the values of x that make f(x) = 0. It is based on the idea of using a linear approximation of the function at a given point, and then finding the intersection of that line with the x-axis. This intersection gives us a better approximation of the root than the original point. We can then repeat this process until we reach a desired level of accuracy.

- -

The formula for Newton's method is:

- -

          $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
          

-

- -

          where $x_n$ is the current approximation of the root, $f(x_n)$ is the function value at that point, $f'(x_n)$ is the derivative of the function at that point, and $x_{n+1}$ is the next approximation of the root.
          

- -
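          As a quick illustration, here is a minimal Python sketch of this iteration for a single equation. The function $f(x) = x^3 - x - 2$, the starting point, and the tolerance are illustrative choices for this article, not values taken from the textbook problem:

          ```python
          def newton_1d(f, df, x0, tol=1e-10, max_iter=50):
              """Find a root of f using Newton's method, starting from x0."""
              x = x0
              for _ in range(max_iter):
                  step = f(x) / df(x)   # f(x_n) / f'(x_n)
                  x = x - step          # x_{n+1} = x_n - f(x_n) / f'(x_n)
                  if abs(step) < tol:   # stop once the update becomes negligible
                      return x
              return x

          # Example: solve x^3 - x - 2 = 0, whose real root is near x = 1.521
          root = newton_1d(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, x0=1.5)
          print(root)
          ```

          Each pass through the loop replaces the current guess with the point where the tangent line crosses the x-axis, which is exactly the formula above.
          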

How to Apply Newton's Method to a System of Nonlinear Equations?

- -

Sometimes, we need to solve a system of nonlinear equations, that is, a set of equations that involve more than one variable and are not linear. For example:

- -

          $$\begin{cases} f_1(x,y) = x^2 + y^2 - 1 = 0 \\ f_2(x,y) = x^3 - y = 0 \end{cases}$$
          

- -

This system has two unknowns, x and y, and two equations. To find the solutions, we need to find the values of x and y that make both equations true.

- -

          One way to do this is to use Newton's method in a multidimensional setting. The idea is similar to the one-dimensional case, but instead of approximating a single function by its tangent line, we approximate each equation by its local linear (tangent-plane) approximation. The formula for Newton's method in two dimensions is:
          

- -

          $$\begin{bmatrix} x_{n+1} \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} x_n \\ y_n \end{bmatrix} - J^{-1}(x_n, y_n) \begin{bmatrix} f_1(x_n, y_n) \\ f_2(x_n, y_n) \end{bmatrix}$$
          

- -

          where $J^{-1}(x_n, y_n)$ is the inverse of the Jacobian matrix of the system at $(x_n, y_n)$. The Jacobian matrix contains the partial derivatives of each equation with respect to each variable. For example:
          

- -

          $$J(x,y) = \begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \end{bmatrix} = \begin{bmatrix} 2x & 2y \\ 3x^2 & -1 \end{bmatrix}$$
          

- -
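          If you would rather not differentiate by hand, you can check this Jacobian symbolically. Here is a small sketch using SymPy; the library is an assumed extra dependency, not something the article itself requires:

          ```python
          import sympy as sp

          x, y = sp.symbols("x y")
          F = sp.Matrix([x**2 + y**2 - 1, x**3 - y])  # the system f_1, f_2
          J = F.jacobian([x, y])                      # Matrix([[2*x, 2*y], [3*x**2, -1]])
          print(J)
          ```

          The printed matrix matches the hand-derived Jacobian above.
          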

To apply Newton's method to a system of nonlinear equations, we need to do the following steps:

- -
    -
          1. Choose an initial guess for the solution, $(x_0, y_0)$.
          2. Calculate the function values and the Jacobian matrix at the current guess.
          3. Invert the Jacobian matrix using any suitable method (such as Gaussian elimination).
          4. Calculate the next guess using the formula above.
          5. Check if the new guess is close enough to the true solution (using some error criterion).
          6. If not, repeat steps 2-5 until convergence (see the sketch below).
          
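          Putting these steps together for the example system above, here is a minimal NumPy sketch. The starting point, tolerance, and iteration cap are illustrative assumptions; also note that instead of forming the inverse of the Jacobian explicitly, it solves the equivalent linear system at each step with `np.linalg.solve`, which is the standard, more stable choice:

          ```python
          import numpy as np

          def F(v):
              x, y = v
              return np.array([x**2 + y**2 - 1.0,   # f_1(x, y)
                               x**3 - y])           # f_2(x, y)

          def J(v):
              x, y = v
              return np.array([[2.0 * x,    2.0 * y],   # df_1/dx, df_1/dy
                               [3.0 * x**2, -1.0]])     # df_2/dx, df_2/dy

          def newton_system(F, J, v0, tol=1e-12, max_iter=50):
              v = np.asarray(v0, dtype=float)
              for _ in range(max_iter):
                  # Solve J(v) * delta = F(v) rather than inverting J explicitly
                  delta = np.linalg.solve(J(v), F(v))
                  v = v - delta
                  if np.linalg.norm(delta) < tol:   # error criterion on the update size
                      break
              return v

          x, y = newton_system(F, J, v0=(0.8, 0.6))
          print(x, y)   # both equations should now be satisfied to machine precision
          ```

          Starting from $(0.8, 0.6)$, the iteration converges in a handful of steps to a point on the unit circle that also satisfies $y = x^3$.
          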

          
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (3 Diya Aur Toofan Mp4 Full [TOP] Movie Fre).md b/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (3 Diya Aur Toofan Mp4 Full [TOP] Movie Fre).md deleted file mode 100644 index 6c85d95c274851f8509ee214dbcf7c1247dca86e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (3 Diya Aur Toofan Mp4 Full [TOP] Movie Fre).md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (3 Diya Aur Toofan mp4 full movie fre)


    Download Ziphttps://bytlly.com/2uGyKw



    -
    
    -
    -
    -

    diff --git a/spaces/lixq/bingo61/src/components/chat-suggestions.tsx b/spaces/lixq/bingo61/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
    -
    - - { - currentSuggestions.map(suggestion => ( - - )) - } -
    -
    - ) : null -} diff --git a/spaces/ljjggr/bingo/src/state/index.ts b/spaces/ljjggr/bingo/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 
全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/lordvader31/almithal/app.py b/spaces/lordvader31/almithal/app.py deleted file mode 100644 index 606e4a48254470d98933b65c18af57c6a858e6ff..0000000000000000000000000000000000000000 --- a/spaces/lordvader31/almithal/app.py +++ /dev/null @@ -1,364 +0,0 @@ -# Streamlit classes -import streamlit as st -from streamlit_agraph import agraph, Node, Edge, Config -from streamlit_chat import message - -# Data manipulation and embeddings -import pandas as pd -import numpy as np -import openai -from openai.embeddings_utils import distances_from_embeddings -import whisper - -# Exec tasks -import os, json -import math -import re -from threading import Thread - -# Custom classes -from transcription import * -from keywords import Keywords -from summary import TextSummarizer -from takeaways import KeyTakeaways -from mindmap import MindMap -import models as md - -def get_initial_message(): - messages=[ - {"role": "system", "content": "You are a helpful AI Tutor. 
Who anwers brief questions about AI."}, - {"role": "user", "content": "I want to learn AI"}, - {"role": "assistant", "content": "Thats awesome, what do you want to know aboout AI"} - ] - return messages - -REGEXP_YOUTUBE_URL = "^(https?\:\/\/)?((www\.)?youtube\.com|youtu\.be)\/.+$" - -model = whisper.load_model('base') - -output = '' -data = [] -data_transcription = {"title":"", "text":""} -embeddings = [] -text_chunks_lib = dict() -user_input = None -title_entry = "" - -tldr = "" -summary = "" -takeaways = [] -keywords = [] - -folder_name = "./tests" -input_accepted = False -is_completed_analysis = False -if not os.path.exists(folder_name): - os.mkdir(folder_name) - -user_secret = os.getenv("OPENAI_API_KEY") - -# Define the purpose of the application -st.header('Almithal') -st.subheader('Almithal is a comprehensive video and PDF study buddy.') -st.write('It provides a summary, transcription, key insights, a mind map and a Q&A feature where you can actually "talk" to the datasource.') - -bar = st.progress(0) - -def generate_word_embeddings(): - global data - - if not os.path.exists(f"{folder_name}/word_embeddings.csv"): - for i, segment in enumerate(segments): - bar.progress(max(math.ceil((i/len(segments) * 50)), 1)) - response = openai.Embedding.create( - input= segment["text"].strip(), - model="text-embedding-ada-002" - ) - embeddings = response['data'][0]['embedding'] - meta = { - "text": segment["text"].strip(), - "embedding": embeddings - } - data.append(meta) - - pd.DataFrame(data).to_csv(f'{folder_name}/word_embeddings.csv') - else: - data = pd.read_csv(f'{folder_name}/word_embeddings.csv') - - -def generate_text_chunks_lib(): - - global data_transcription - global title_entry, text_chunks_lib - global keywords - global tldr - global summary - global takeaways - global input_accepted - - # For each body of text, create text chunks of a certain token size required for the transformer - text_df = pd.DataFrame.from_dict({"title": [data_transcription["title"]], "text":[data_transcription["text"]]}) - input_accepted = True - title_entry = text_df['title'][0] - print("\n\nFIRST TITLE_ENTRY", title_entry) - for i in range(0, len(text_df)): - nested_sentences = md.create_nest_sentences(document=text_df['text'][i], token_max_length=1024) - # For each chunk of sentences (within the token max) - text_chunks = [] - for n in range(0, len(nested_sentences)): - tc = " ".join(map(str, nested_sentences[n])) - text_chunks.append(tc) - - text_chunks_lib[title_entry] = text_chunks - - # Generate key takeaways - key_engine = Keywords(title_entry) - keywords = key_engine.get_keywords(text_chunks_lib) - - - -# =========== SIDEBAR FOR GENERATION =========== -with st.sidebar: - youtube_link = st.text_input(label = "Type in your Youtube link", placeholder = "", key="url") - st.markdown("OR") - pdf_file = st.file_uploader("Upload your PDF", type="pdf") - st.markdown("OR") - audio_file = st.file_uploader("Upload your MP3 audio file", type=["wav", "mp3"]) - - gen_keywords = st.radio( - "Generate keywords from text?", - ('Yes', 'No') - ) - - gen_summary = st.radio( - "Generate summary from text? 
(recommended for label matching below, but will take longer)", - ('Yes', 'No') - ) - - if st.button("Start Analysis"): - - # Youtube Transcription - if re.search(REGEXP_YOUTUBE_URL, youtube_link): - vte = VideoTranscription(youtube_link) - YOUTUBE_VIDEO_ID = youtube_link.split("=")[1] - folder_name = f"./tests/{YOUTUBE_VIDEO_ID}" - if not os.path.exists(folder_name): - os.mkdir(folder_name) - - with st.spinner('Running transcription...'): - data_transcription = vte.transcribe() - segments = data_transcription['segments'] - - # PDF Transcription - elif pdf_file is not None: - pte = PDFTranscription(pdf_file) - folder_name = pte.get_redacted_name() - if not os.path.exists(folder_name): - os.mkdir(folder_name) - - with st.spinner('Running transcription...'): - data_transcription = pte.transcribe() - segments = data_transcription['segments'] - - # Audio transcription - elif audio_file is not None: - ate = AudioTranscription(audio_file) - folder_name = ate.get_redacted_name() - if not os.path.exists(f""): - os.mkdir(folder_name) - - with st.spinner('Running transcription...'): - data_transcription = ate.transcribe() - segments = data_transcription['segments'] - - with open(f"{folder_name}/data.json", "w") as f: - json.dump(data_transcription, f, indent=4) - - else: - st.error("Please type in your youtube link or upload the PDF") - st.experimental_rerun() - - - # Generate embeddings - thread1 = Thread(target=generate_word_embeddings) - thread1.start() - - # Generate text chunks - thread2 = Thread(target=generate_text_chunks_lib) - thread2.start() - - # Wait for them to complete - thread1.join() - thread2.join() - - def generate_summary(): - pass - - def generate_key_takeaways(): - pass - - threadSum = Thread(target=generate_summary) - threadTak = Thread(target=generate_key_takeaways) - - # Generate the summary - if gen_summary == 'Yes': - se = TextSummarizer(title_entry) - text_transcription = data_transcription['text'] - with st.spinner("Generating summary and TLDR..."): - summary = se.generate_full_summary(text_chunks_lib) - summary_list = summary.split("\n\n") - tldr = se.generate_short_summary(summary_list) - - # Generate key takeaways - kt = KeyTakeaways() - with st.spinner("Generating key takeaways ... "): - takeaways = kt.generate_key_takeaways(text_chunks_lib) - is_completed_analysis = True - bar.progress(100) - - with open(f"{folder_name}/data.json", "w") as f: - json.dump(data_transcription, f, indent=4) - -if is_completed_analysis: - st.header("Key Takeaways") - st.write("Here are some of the key takeaways from the data:") - for takeaway in takeaways: - st.markdown(f"- {takeaway}") - - -tab1, tab2, tab3, tab4, tab5, tab6 = st.tabs(["Introduction", "Summary", "Transcription", "Mind Map", "Keywords", "Q&A"]) - -# =========== INTRODUCTION =========== -with tab1: - st.markdown("## How do I use this?") - st.markdown("Do one of the following") - st.markdown('* Type in your youtube URL that you want worked on') - st.markdown('* Place the PDF file that you want worked on') - st.markdown('* Place the audio file that you want worked on') - st.markdown("**Once the file / url has finished saving, a 'Start Analysis' button will appear. Click on this button to begin the note generation**") - st.warning("NOTE: This is just a demo product in alpha testing. 
Any and all bugs will soon be fixed") - st.warning("After the note taking is done, you will see multiple tabs for more information") - -# =========== SUMMARIZATION =========== -with tab2: - if is_completed_analysis: - st.header("TL;DR") - for point in tldr: - st.markdown(f"- {point}") - st.header("Summary") - st.write(summary) - else: - st.warning("Please wait for the analysis to finish") - -# =========== TRANSCRIPTION =========== -with tab3: - st.header("Transcription") - if is_completed_analysis: - with st.spinner("Generating transcript ..."): - st.write("") - for text in text_chunks_lib[title_entry]: - st.write(text) - else: - st.warning("Please wait for the analysis to finish") - -# =========== MIND MAP =========== -with tab4: - st.header("Mind Map") - if is_completed_analysis: - mindmap = MindMap() - with st.spinner("Generating mind map..."): - mindmap.generate_graph(text_chunks_lib) - else: - st.warning("Please wait for the analysis to finish") - -# =========== KEYWORDS =========== -with tab5: - st.header("Keywords:") - if is_completed_analysis and gen_keywords: - for i, keyword in enumerate(keywords): - st.markdown(f"{i+1}. {keyword}") - else: - st.warning("Please wait for the analysis to finish") - -# =========== QUERY BOT =========== -with tab6: - - if 'generated' not in st.session_state: - st.session_state['generated'] = [] - - if 'past' not in st.session_state: - st.session_state['past'] = [] - - def get_text(): - st.header("Ask me something about the video:") - input_text = st.text_input("You: ", key="prompt") - return input_text - - - def get_embedding_text(prompt): - response = openai.Embedding.create( - input= prompt.strip(), - model="text-embedding-ada-002" - ) - q_embedding = response['data'][0]['embedding'] - print("the folder name at got here 1.5 is ", folder_name) - # df = pd.read_csv(f'{folder_name}/word_embeddings.csv', index_col=0) - data['embedding'] = data['embedding'].apply(eval).apply(np.array) - - data['distances'] = distances_from_embeddings(q_embedding, data['embedding'].values, distance_metric='cosine') - returns = [] - - # Sort by distance with 2 hints - for i, row in data.sort_values('distances', ascending=True).head(4).iterrows(): - # Else add it to the text that is being returned - returns.append(row["text"]) - - # Return the context - return "\n\n###\n\n".join(returns) - - def generate_response(prompt): - one_shot_prompt = ''' - I am YoutubeGPT, a highly intelligent question answering bot. - If you ask me a question that is rooted in truth, I will give you the answer. - Q: What is human life expectancy in the United States? - A: Human life expectancy in the United States is 78 years. - Q: '''+prompt+''' - A: - ''' - completions = openai.Completion.create( - engine = "text-davinci-003", - prompt = one_shot_prompt, - max_tokens = 1024, - n = 1, - stop=["Q:"], - temperature=0.5, - ) - message = completions.choices[0].text - return message - - - user_input = get_text() - print("user input is ", user_input) - print("the folder name at got here 0.5 is ", folder_name) - - if user_input: - print("got here 1") - print("the folder name at got here 1.5 is ", folder_name) - text_embedding = get_embedding_text(user_input) - print("the folder name at got here 1.5 is ", folder_name) - print("got here 2") - title = data_transcription['title'] - string_title = "\n\n###\n\n".join(title) - user_input_embedding = 'Using this context: "'+string_title+'. '+text_embedding+'", answer the following question. 
\n'+user_input - print("got here 3") - output = generate_response(user_input_embedding) - st.session_state.past.append(user_input) - st.session_state.generated.append(output) - - if st.session_state['generated']: - for i in range(len(st.session_state['generated'])-1, -1, -1): - message(st.session_state["generated"][i], key=str(i)) - message(st.session_state['past'][i], is_user=True, key=str(i) + '_user') - - -# st.header("What else") \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/inner_product.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/inner_product.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/inner_product.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/maker57sk/linkedin_analysis/README.md b/spaces/maker57sk/linkedin_analysis/README.md deleted file mode 100644 index 2c4b65d3aa937cd10e11825e9e05fa9d796949fa..0000000000000000000000000000000000000000 --- a/spaces/maker57sk/linkedin_analysis/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Linkedin Analysis -emoji: 🚀 -colorFrom: gray -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- -# Find the LinkedIn page of any company - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mav735/mri-assistent/README.md b/spaces/mav735/mri-assistent/README.md deleted file mode 100644 index 35356b6a2e7549fabddc9fbd0523f4146a7320f8..0000000000000000000000000000000000000000 --- a/spaces/mav735/mri-assistent/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mri Assistent -emoji: 🌖 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mayordp/DeepFakeAI/DeepFakeAI/processors/__init__.py b/spaces/mayordp/DeepFakeAI/DeepFakeAI/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/merle/PROTEIN_GENERATOR/model/se3_transformer/data_loading/qm9.py b/spaces/merle/PROTEIN_GENERATOR/model/se3_transformer/data_loading/qm9.py deleted file mode 100644 index b45839868626f56d3a42ce859b2033ce1526373e..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/model/se3_transformer/data_loading/qm9.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# Permission is hereby granted, free of charge, to any person obtaining a -# copy of this software and associated documentation files (the "Software"), -# to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, -# and/or sell copies of the Software, and to permit persons to whom the -# Software is furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL -# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING -# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER -# DEALINGS IN THE SOFTWARE. -# -# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES -# SPDX-License-Identifier: MIT -from typing import Tuple - -import dgl -import pathlib -import torch -from dgl.data import QM9EdgeDataset -from dgl import DGLGraph -from torch import Tensor -from torch.utils.data import random_split, DataLoader, Dataset -from tqdm import tqdm - -from se3_transformer.data_loading.data_module import DataModule -from se3_transformer.model.basis import get_basis -from se3_transformer.runtime.utils import get_local_rank, str2bool, using_tensor_cores - - -def _get_relative_pos(qm9_graph: DGLGraph) -> Tensor: - x = qm9_graph.ndata['pos'] - src, dst = qm9_graph.edges() - rel_pos = x[dst] - x[src] - return rel_pos - - -def _get_split_sizes(full_dataset: Dataset) -> Tuple[int, int, int]: - len_full = len(full_dataset) - len_train = 100_000 - len_test = int(0.1 * len_full) - len_val = len_full - len_train - len_test - return len_train, len_val, len_test - - -class QM9DataModule(DataModule): - """ - Datamodule wrapping https://docs.dgl.ai/en/latest/api/python/dgl.data.html#qm9edge-dataset - Training set is 100k molecules. Test set is 10% of the dataset. Validation set is the rest. - This includes all the molecules from QM9 except the ones that are uncharacterized. 
- """ - - NODE_FEATURE_DIM = 6 - EDGE_FEATURE_DIM = 4 - - def __init__(self, - data_dir: pathlib.Path, - task: str = 'homo', - batch_size: int = 240, - num_workers: int = 8, - num_degrees: int = 4, - amp: bool = False, - precompute_bases: bool = False, - **kwargs): - self.data_dir = data_dir # This needs to be before __init__ so that prepare_data has access to it - super().__init__(batch_size=batch_size, num_workers=num_workers, collate_fn=self._collate) - self.amp = amp - self.task = task - self.batch_size = batch_size - self.num_degrees = num_degrees - - qm9_kwargs = dict(label_keys=[self.task], verbose=False, raw_dir=str(data_dir)) - if precompute_bases: - bases_kwargs = dict(max_degree=num_degrees - 1, use_pad_trick=using_tensor_cores(amp), amp=amp) - full_dataset = CachedBasesQM9EdgeDataset(bases_kwargs=bases_kwargs, batch_size=batch_size, **qm9_kwargs) - else: - full_dataset = QM9EdgeDataset(**qm9_kwargs) - - self.ds_train, self.ds_val, self.ds_test = random_split(full_dataset, _get_split_sizes(full_dataset), - generator=torch.Generator().manual_seed(0)) - - train_targets = full_dataset.targets[self.ds_train.indices, full_dataset.label_keys[0]] - self.targets_mean = train_targets.mean() - self.targets_std = train_targets.std() - - def prepare_data(self): - # Download the QM9 preprocessed data - QM9EdgeDataset(verbose=True, raw_dir=str(self.data_dir)) - - def _collate(self, samples): - graphs, y, *bases = map(list, zip(*samples)) - batched_graph = dgl.batch(graphs) - edge_feats = {'0': batched_graph.edata['edge_attr'][..., None]} - batched_graph.edata['rel_pos'] = _get_relative_pos(batched_graph) - # get node features - node_feats = {'0': batched_graph.ndata['attr'][:, :6, None]} - targets = (torch.cat(y) - self.targets_mean) / self.targets_std - - if bases: - # collate bases - all_bases = { - key: torch.cat([b[key] for b in bases[0]], dim=0) - for key in bases[0][0].keys() - } - - return batched_graph, node_feats, edge_feats, all_bases, targets - else: - return batched_graph, node_feats, edge_feats, targets - - @staticmethod - def add_argparse_args(parent_parser): - parser = parent_parser.add_argument_group("QM9 dataset") - parser.add_argument('--task', type=str, default='homo', const='homo', nargs='?', - choices=['mu', 'alpha', 'homo', 'lumo', 'gap', 'r2', 'zpve', 'U0', 'U', 'H', 'G', 'Cv', - 'U0_atom', 'U_atom', 'H_atom', 'G_atom', 'A', 'B', 'C'], - help='Regression task to train on') - parser.add_argument('--precompute_bases', type=str2bool, nargs='?', const=True, default=False, - help='Precompute bases at the beginning of the script during dataset initialization,' - ' instead of computing them at the beginning of each forward pass.') - return parent_parser - - def __repr__(self): - return f'QM9({self.task})' - - -class CachedBasesQM9EdgeDataset(QM9EdgeDataset): - """ Dataset extending the QM9 dataset from DGL with precomputed (cached in RAM) pairwise bases """ - - def __init__(self, bases_kwargs: dict, batch_size: int, *args, **kwargs): - """ - :param bases_kwargs: Arguments to feed the bases computation function - :param batch_size: Batch size to use when iterating over the dataset for computing bases - """ - self.bases_kwargs = bases_kwargs - self.batch_size = batch_size - self.bases = None - super().__init__(*args, **kwargs) - - def load(self): - super().load() - # Iterate through the dataset and compute bases (pairwise only) - # Potential improvement: use multi-GPU and reduction - dataloader = DataLoader(self, shuffle=False, batch_size=self.batch_size, - collate_fn=lambda 
samples: dgl.batch([sample[0] for sample in samples])) - bases = [] - for i, graph in tqdm(enumerate(dataloader), total=len(dataloader), desc='Precomputing QM9 bases', - disable=get_local_rank() != 0): - rel_pos = _get_relative_pos(graph) - # Compute the bases with the GPU but convert the result to CPU to store in RAM - bases.append({k: v.cpu() for k, v in get_basis(rel_pos.cuda(), **self.bases_kwargs).items()}) - self.bases = bases # Assign at the end so that __getitem__ isn't confused - - def __getitem__(self, idx: int): - graph, label = super().__getitem__(idx) - - if self.bases: - bases_idx = idx // self.batch_size - bases_cumsum_idx = self.ne_cumsum[idx] - self.ne_cumsum[bases_idx * self.batch_size] - bases_cumsum_next_idx = self.ne_cumsum[idx + 1] - self.ne_cumsum[bases_idx * self.batch_size] - return graph, label, {key: basis[bases_cumsum_idx:bases_cumsum_next_idx] for key, basis in - self.bases[bases_idx].items()} - else: - return graph, label diff --git a/spaces/merve/Grounding_DINO_demo/groundingdino/version.py b/spaces/merve/Grounding_DINO_demo/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/merve/Grounding_DINO_demo/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/merve/anonymization/source/anonymization/make-sliders.js b/spaces/merve/anonymization/source/anonymization/make-sliders.js deleted file mode 100644 index 72f6dfd7c96d6c74cfb35db5854f06b668bf3d46..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/anonymization/make-sliders.js +++ /dev/null @@ -1,139 +0,0 @@ -window.makeSliders = function(){ - var rv = { - population: 144, - headsProb: .5, - } - - rv.updateHeadsProb = (headsProb) => { - rv.headsProb = headsProb - updateSliderPos() - - - estimates.updateEstimates() - estimates.render() - } - - rv.updatePopulation = (population) => { - rv.population = population - updateSliderPos() - - - var scale = d3.clamp(0, 13 / Math.sqrt(population), 1) - sel.studentGroup.st({ - transformOrigin: 'top', - transformOrigin: c.width/2 + 'px ' + 160 + 'px', - transform: `scale(${scale})` - }) - - estimates.updateEstimates() - estimates.render() - - sel.student.classed('inactive',(d, i) => i >= population) - } - - rv.updatePopulationSlider = (val) => { - rv.updatePopulation(val) - } - - rv.updateNoiseSlider = (val) => { - rv.updateHeadsProb(val) - } - - var updateSliderPos = (function(){ - var width = d3.clamp(50, window.innerWidth/2 - 40, 145) - var height = 30 - var color = '#007276' - - var sliderVals = { - population: { - key: 'population', - textFn: d => rv.population + ' students' , - r: [144, 756], - v: 144, - stepFn: d => rv.updatePopulation(Math.round(d.v/2)*2), - }, - headsProb: { - key: 'headsProb', - textFn: d => d3.format('.1%')(rv.headsProb) + ' chance of heads', - r: [.2, .5], - v: .5, - stepFn: d => rv.updateHeadsProb(d.v), - } - } - var sliders = [sliderVals.headsProb, sliderVals.population, sliderVals.headsProb] - sliders.forEach(d => { - d.s = d3.scaleLinear().domain(d.r).range([0, width]) - }) - - var sliderSel = d3.selectAll('.slide-container-population,.slide-container-heads-prob').html('') - .data(sliders) - .classed('slider', true) - .st({ - display: 'inline-block', - width: width, - paddingRight: (d, i) => i == 1 ? 
40 : 0, - marginTop: 20, - }) - - var textSel = sliderSel.append('div.slider-label-container') - .st({marginBottom: -5}) - - var svgSel = sliderSel.append('svg').at({width, height}) - .on('click', function(d){ - d.v = d.s.invert(d3.mouse(this)[0]) - d.stepFn(d) - }) - .st({ - cursor: 'pointer' - }) - .append('g').translate(height/2, 1) - svgSel.append('rect').at({width, height, y: -height/2, fill: 'rgba(0,0,0,0)'}) - - svgSel.append('path').at({ - d: `M 0 -.5 H ${width}`, - stroke: color, - strokeWidth: 1 - }) - - var leftPathSel = svgSel.append('path').at({ - d: `M 0 -.5 H ${width}`, - stroke: color, - strokeWidth: 3 - }) - - - var drag = d3.drag() - .on('drag', function(d){ - var x = d3.mouse(this)[0] - d.v = d3.clamp(d3.min(d.r), d.s.invert(x), d3.max(d.r)) - d.stepFn(d) - }) - - var rectSel = svgSel.append('rect') - .at({ - width: height/2 - 1, - height: height/2 - 1, - stroke: color, - strokeWidth: 3, - fill: '#fff', - }) - .translate([-height/4, -height/4]) - .call(drag) - - return isDrag => { - rectSel.at({x: d => Math.round(d.s(rv[d.key]))}) - textSel.text(d => d.textFn(d)) - - leftPathSel.at({d: d => `M 0 -.5 H ${d.s(rv[d.key])}`}) - } - })() - updateSliderPos() - - - return rv -} - - - - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/README.md b/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/README.md deleted file mode 100644 index e57e5a3ca7690ba5b38b163530268b20ab7f5010..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Python - -## Setup - -Install dependencies - -``` -python3 -m venv env -source env/bin/activate -pip install -r py/requirements.txt -``` - -Download a copy of model weights - -``` -curl https://storage.googleapis.com/uncertainty-over-space/zari-bert-cda/pytorch_model.bin -o zari-bert-cda/pytorch_model.bin - -curl https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/pytorch_model.bin -0 bert-large-uncased-whole-word-masking/pytorch_model.bin -``` - -Start server - -``` -source env/bin/activate -cd py && python main.py -``` - -## Deploy - -The `py` folder is bundled with docker and deployed to [Cloud Run](https://cloud.google.com/run/docs/quickstarts/build-and-deploy/python). 
- -``` -cd py - -gcloud builds submit --tag gcr.io/uncertainty-over-space/helloworld --project=uncertainty-over-space && gcloud run deploy --image gcr.io/uncertainty-over-space/helloworld --project=uncertainty-over-space -``` - -https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds - diff --git a/spaces/merve/fill-in-the-blank/source/third_party/regl.min.js b/spaces/merve/fill-in-the-blank/source/third_party/regl.min.js deleted file mode 100644 index 7ecf11321eda67a76e019d6881f42b52f3d39c78..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/third_party/regl.min.js +++ /dev/null @@ -1,171 +0,0 @@ -(function(Z,ka){"object"===typeof exports&&"undefined"!==typeof module?module.exports=ka():"function"===typeof define&&define.amd?define(ka):Z.createREGL=ka()})(this,function(){function Z(a,b){this.id=Db++;this.type=a;this.data=b}function ka(a){if(0===a.length)return[];var b=a.charAt(0),c=a.charAt(a.length-1);if(1>>=b;c=(255>>=c;b|=c;c=(15>>=c;b|=c;c=(3>>c>>1}function hb(){function a(a){a:{for(var b=16;268435456>=b;b*=16)if(a<=b){a=b;break a}a=0}b=c[gb(a)>>2];return 0>2].push(a)}var c=R(8,function(){return[]});return{alloc:a,free:b,allocType:function(b,c){var d=null;switch(b){case 5120:d=new Int8Array(a(c),0,c);break;case 5121:d=new Uint8Array(a(c),0,c);break;case 5122:d=new Int16Array(a(2*c),0,c);break;case 5123:d=new Uint16Array(a(2*c),0,c);break;case 5124:d=new Int32Array(a(4*c),0,c);break;case 5125:d=new Uint32Array(a(4*c),0,c);break;case 5126:d=new Float32Array(a(4*c),0,c);break;default:return null}return d.length!== -c?d.subarray(0,c):d},freeType:function(a){b(a.buffer)}}}function la(a){return!!a&&"object"===typeof a&&Array.isArray(a.shape)&&Array.isArray(a.stride)&&"number"===typeof a.offset&&a.shape.length===a.stride.length&&(Array.isArray(a.data)||O(a.data))}function ib(a,b,c,e,f,d){for(var q=0;qe&&(e=d.buffer.byteLength,5123===k?e>>=1:5125===k&&(e>>=2));d.vertCount=e;e=g;0>g&&(e=4,g=d.buffer.dimension,1===g&&(e=0),2===g&&(e=1),3===g&&(e=4));d.primType=e}function q(a){e.elementsCount--;delete n[a.id];a.buffer.destroy();a.buffer=null}var n={},v=0,k={uint8:5121,uint16:5123};b.oes_element_index_uint&&(k.uint32=5125);f.prototype.bind=function(){this.buffer.bind()};var u=[];return{create:function(a, -b){function l(a){if(a)if("number"===typeof a)g(a),h.primType=4,h.vertCount=a|0,h.type=5121;else{var b=null,c=35044,e=-1,f=-1,m=0,n=0;if(Array.isArray(a)||O(a)||la(a))b=a;else if("data"in a&&(b=a.data),"usage"in a&&(c=nb[a.usage]),"primitive"in a&&(e=Ka[a.primitive]),"count"in a&&(f=a.count|0),"type"in a&&(n=k[a.type]),"length"in a)m=a.length|0;else if(m=f,5123===n||5122===n)m*=2;else if(5125===n||5124===n)m*=4;d(h,b,c,e,f,m,n)}else g(),h.primType=4,h.vertCount=0,h.type=5121;return l}var g=c.create(null, -34963,!0),h=new f(g._buffer);e.elementsCount++;l(a);l._reglType="elements";l._elements=h;l.subdata=function(a,b){g.subdata(a,b);return l};l.destroy=function(){q(h)};return l},createStream:function(a){var b=u.pop();b||(b=new f(c.create(null,34963,!0,!1)._buffer));d(b,a,35040,-1,-1,0,0);return b},destroyStream:function(a){u.push(a)},getElements:function(a){return"function"===typeof a&&a._elements instanceof f?a._elements:null},clear:function(){I(n).forEach(q)}}}function ob(a){for(var b=G.allocType(5123, -a.length),c=0;c>>31<<15,d=(e<<1>>>24)-127,e=e>>13&1023;b[c]=-24>d?f:-14>d?f+(e+1024>>-14-d):15>=e,c.height>>=e,x(c,d[e]),a.mipmask|=1<b;++b)a.images[b]=null;return a}function ya(a){for(var b=a.images,c=0;cb){for(var 
c=0;c=--this.refCount&&F(this)}});q.profile&&(d.getTotalTextureSize=function(){var a=0;Object.keys(ea).forEach(function(b){a+=ea[b].stats.size});return a});return{create2D:function(b,c){function e(a,b){var c=f.texInfo;w.call(c);var d=ma();"number"===typeof a?"number"===typeof b?p(d,a|0,b|0):p(d,a|0,a|0):a?(H(c,a),P(d,a)):p(d,1,1);c.genMipmaps&&(d.mipmask=(d.width<<1)-1);f.mipmask=d.mipmask;v(f, -d);f.internalformat=d.internalformat;e.width=d.width;e.height=d.height;T(f);t(d,3553);M(c,3553);wa();ya(d);q.profile&&(f.stats.size=La(f.internalformat,f.type,d.width,d.height,c.genMipmaps,!1));e.format=ca[f.internalformat];e.type=K[f.type];e.mag=Fa[c.magFilter];e.min=pa[c.minFilter];e.wrapS=qa[c.wrapS];e.wrapT=qa[c.wrapT];return e}var f=new y(3553);ea[f.id]=f;d.textureCount++;e(b,c);e.subimage=function(a,b,c,d){b|=0;c|=0;d|=0;var y=g();v(y,f);y.width=0;y.height=0;x(y,a);y.width=y.width||(f.width>> -d)-b;y.height=y.height||(f.height>>d)-c;T(f);l(y,3553,b,c,d);wa();h(y);return e};e.resize=function(b,c){var d=b|0,g=c|0||d;if(d===f.width&&g===f.height)return e;e.width=f.width=d;e.height=f.height=g;T(f);for(var y=0;f.mipmask>>y;++y){var h=d>>y,z=g>>y;if(!h||!z)break;a.texImage2D(3553,y,f.format,h,z,0,f.format,f.type,null)}wa();q.profile&&(f.stats.size=La(f.internalformat,f.type,d,g,!1,!1));return e};e._reglType="texture2d";e._texture=f;q.profile&&(e.stats=f.stats);e.destroy=function(){f.decRef()}; -return e},createCube:function(b,c,e,f,n,r){function m(a,b,c,d,e,f){var g,da=A.texInfo;w.call(da);for(g=0;6>g;++g)F[g]=ma();if("number"===typeof a||!a)for(a=a|0||1,g=0;6>g;++g)p(F[g],a,a);else if("object"===typeof a)if(b)P(F[0],a),P(F[1],b),P(F[2],c),P(F[3],d),P(F[4],e),P(F[5],f);else if(H(da,a),k(A,a),"faces"in a)for(a=a.faces,g=0;6>g;++g)v(F[g],A),P(F[g],a[g]);else for(g=0;6>g;++g)P(F[g],a);v(A,F[0]);A.mipmask=da.genMipmaps?(F[0].width<<1)-1:F[0].mipmask;A.internalformat=F[0].internalformat;m.width= -F[0].width;m.height=F[0].height;T(A);for(g=0;6>g;++g)t(F[g],34069+g);M(da,34067);wa();q.profile&&(A.stats.size=La(A.internalformat,A.type,m.width,m.height,da.genMipmaps,!0));m.format=ca[A.internalformat];m.type=K[A.type];m.mag=Fa[da.magFilter];m.min=pa[da.minFilter];m.wrapS=qa[da.wrapS];m.wrapT=qa[da.wrapT];for(g=0;6>g;++g)ya(F[g]);return m}var A=new y(34067);ea[A.id]=A;d.cubeCount++;var F=Array(6);m(b,c,e,f,n,r);m.subimage=function(a,b,c,d,e){c|=0;d|=0;e|=0;var f=g();v(f,A);f.width=0;f.height=0; -x(f,b);f.width=f.width||(A.width>>e)-c;f.height=f.height||(A.height>>e)-d;T(A);l(f,34069+a,c,d,e);wa();h(f);return m};m.resize=function(b){b|=0;if(b!==A.width){m.width=A.width=b;m.height=A.height=b;T(A);for(var c=0;6>c;++c)for(var d=0;A.mipmask>>d;++d)a.texImage2D(34069+c,d,A.format,b>>d,b>>d,0,A.format,A.type,null);wa();q.profile&&(A.stats.size=La(A.internalformat,A.type,m.width,m.height,!1,!0));return m}};m._reglType="textureCube";m._texture=A;q.profile&&(m.stats=A.stats);m.destroy=function(){A.decRef()}; -return m},clear:function(){for(var b=0;bc;++c)if(0!==(b.mipmask&1<>c,b.height>>c,0,b.internalformat, -b.type,null);else for(var d=0;6>d;++d)a.texImage2D(34069+d,c,b.internalformat,b.width>>c,b.height>>c,0,b.internalformat,b.type,null);M(b.texInfo,b.target)})},refresh:function(){for(var b=0;bd;++d){for(p= -0;pa;++a)c[a].resize(d);b.width=b.height=d;return b},_reglType:"framebufferCube",destroy:function(){c.forEach(function(a){a.destroy()})}})},clear:function(){I(M).forEach(r)}, -restore:function(){t.cur=null;t.next=null;t.dirty=!0;I(M).forEach(function(b){b.framebuffer=a.createFramebuffer();p(b)})}})}function 
$a(){this.w=this.z=this.y=this.x=this.state=0;this.buffer=null;this.size=0;this.normalized=!1;this.type=5126;this.divisor=this.stride=this.offset=0}function Sb(a,b,c,e,f,d,q){function n(a){if(a!==r.currentVAO){var c=b.oes_vertex_array_object;a?c.bindVertexArrayOES(a.vao):c.bindVertexArrayOES(null);r.currentVAO=a}}function v(c){if(c!==r.currentVAO){if(c)c.bindAttrs(); -else{for(var d=b.angle_instanced_arrays,e=0;e=m.byteLength?l.subdata(m): -(l.destroy(),c.buffers[h]=null));c.buffers[h]||(l=c.buffers[h]=f.create(p,34962,!1,!0));k.buffer=f.getBuffer(l);k.size=k.buffer.dimension|0;k.normalized=!1;k.type=k.buffer.dtype;k.offset=0;k.stride=0;k.divisor=0;k.state=1;a[h]=1}else f.getBuffer(p)?(k.buffer=f.getBuffer(p),k.size=k.buffer.dimension|0,k.normalized=!1,k.type=k.buffer.dtype,k.offset=0,k.stride=0,k.divisor=0,k.state=1):f.getBuffer(p.buffer)?(k.buffer=f.getBuffer(p.buffer),k.size=(+p.size||k.buffer.dimension)|0,k.normalized=!!p.normalized|| -!1,k.type="type"in p?Ja[p.type]:k.buffer.dtype,k.offset=(p.offset||0)|0,k.stride=(p.stride||0)|0,k.divisor=(p.divisor||0)|0,k.state=1):"x"in p&&(k.x=+p.x||0,k.y=+p.y||0,k.z=+p.z||0,k.w=+p.w||0,k.state=2)}for(l=0;la&&(a=b.stats.uniformsCount)});return a},c.getMaxAttributesCount=function(){var a=0;x.forEach(function(b){b.stats.attributesCount>a&&(a=b.stats.attributesCount)});return a});return{clear:function(){var b=a.deleteShader.bind(a);I(k).forEach(b);k={};I(u).forEach(b); -u={};x.forEach(function(b){a.deleteProgram(b.program)});x.length=0;m={};c.shaderCount=0},program:function(b,d,e,f){var l=m[d];l||(l=m[d]={});var q=l[b];if(q&&(q.refCount++,!f))return q;var w=new n(d,b);c.shaderCount++;v(w,e,f);q||(l[b]=w);x.push(w);return L(w,{destroy:function(){w.refCount--;if(0>=w.refCount){a.deleteProgram(w.program);var b=x.indexOf(w);x.splice(b,1);c.shaderCount--}0>=l[w.vertId].refCount&&(a.deleteShader(u[w.vertId]),delete u[w.vertId],delete m[w.fragId][w.vertId]);Object.keys(m[w.fragId]).length|| -(a.deleteShader(k[w.fragId]),delete k[w.fragId],delete m[w.fragId])}})},restore:function(){k={};u={};for(var a=0;a"+b+"?"+e+".constant["+b+"]:0;"}).join(""),"}}else{","if(",g,"(",e,".buffer)){",k,"=",f,".createStream(",34962,",",e,".buffer);","}else{",k,"=",f,".getBuffer(",e,".buffer);","}",m,'="type" in ',e,"?",z.glTypes,"[",e,".type]:",k,".dtype;",B.normalized,"=!!", -e,".normalized;");d("size");d("offset");d("stride");d("divisor");c("}}");c.exit("if(",B.isStream,"){",f,".destroyStream(",k,");","}");return B})});return g}function F(a){var b=a["static"],c=a.dynamic,d={};Object.keys(b).forEach(function(a){var c=b[a];d[a]=w(function(a,b){return"number"===typeof c||"boolean"===typeof c?""+c:a.link(c)})});Object.keys(c).forEach(function(a){var b=c[a];d[a]=K(b,function(a,c){return a.invoke(c,b)})});return d}function A(a,b,d,e,f){function g(a){var b=p[a];b&&(ja[a]=b)} -var m=O(a,b),l=G(a,f),p=C(a,l,f),X=M(a,f),ja=y(a,f),q=H(a,f,m);g("viewport");g(h("scissor.box"));var n=0>1)",u],");")}function b(){c(t,".drawArraysInstancedANGLE(",[n,q,r,u],");")}p&&"null"!==p?v?a():(c("if(",p,"){"),a(),c("}else{"),b(),c("}")):b()}function g(){function a(){c(l+".drawElements("+[n,r,x,q+"<<(("+x+"-5121)>>1)"]+");")}function b(){c(l+".drawArrays("+[n,q,r]+");")}p&&"null"!==p?v?a():(c("if(",p,"){"),a(),c("}else{"),b(),c("}")):b()}var h=a.shared,l=h.gl,k=h.draw,m=d.draw, -p=function(){var e=m.elements,f=b;if(e){if(e.contextDep&&d.contextDynamic||e.propDep)f=c;e=e.append(a,f);m.elementsActive&&f("if("+e+")"+l+".bindBuffer(34963,"+e+".buffer.buffer);")}else 
e=f.def(),f(e,"=",k,".","elements",";","if(",e,"){",l,".bindBuffer(",34963,",",e,".buffer.buffer);}","else if(",h.vao,".currentVAO){",e,"=",a.shared.elements+".getElements("+h.vao,".currentVAO.elements);",na?"":"if("+e+")"+l+".bindBuffer(34963,"+e+".buffer.buffer);","}");return e}(),n=e("primitive"),q=e("offset"), -r=function(){var e=m.count,f=b;if(e){if(e.contextDep&&d.contextDynamic||e.propDep)f=c;e=e.append(a,f)}else e=f.def(k,".","count");return e}();if("number"===typeof r){if(0===r)return}else c("if(",r,"){"),c.exit("}");var u,t;W&&(u=e("instances"),t=a.instancing);var x=p+".type",v=m.elements&&xa(m.elements)&&!m.vaoActive;W&&("number"!==typeof u||0<=u)?"string"===typeof u?(c("if(",u,">0){"),f(),c("}else if(",u,"<0){"),g(),c("}")):f():g()}function ca(a,b,c,d,e){b=P();e=b.proc("body",e);W&&(b.instancing= -e.def(b.shared.extensions,".angle_instanced_arrays"));a(b,e,c,d);return b.compile().body}function Z(a,b,c,d){N(a,b);c.useVAO?c.drawVAO?b(a.shared.vao,".setVAO(",c.drawVAO.append(a,b),");"):b(a.shared.vao,".setVAO(",a.shared.vao,".targetVAO);"):(b(a.shared.vao,".setVAO(null);"),ga(a,b,c,d.attributes,function(){return!0}));Q(a,b,c,d.uniforms,function(){return!0},!1);U(a,b,b,c)}function Fa(a,b){var c=a.proc("draw",1);N(a,c);ia(a,c,b.context);S(a,c,b.framebuffer);Aa(a,c,b);I(a,c,b.state);E(a,c,b,!1,!0); -var d=b.shader.progVar.append(a,c);c(a.shared.gl,".useProgram(",d,".program);");if(b.shader.program)Z(a,c,b,b.shader.program);else{c(a.shared.vao,".setVAO(null);");var e=a.global.def("{}"),f=c.def(d,".id"),g=c.def(e,"[",f,"]");c(a.cond(g).then(g,".call(this,a0);")["else"](g,"=",e,"[",f,"]=",a.link(function(c){return ca(Z,a,b,c,1)}),"(",d,");",g,".call(this,a0);"))}0=--this.refCount&&q(this)};f.profile&&(e.getTotalRenderbufferSize=function(){var a=0;Object.keys(u).forEach(function(b){a+=u[b].stats.size});return a});return{create:function(b, -c){function l(b,c){var d=0,e=0,k=32854;"object"===typeof b&&b?("shape"in b?(e=b.shape,d=e[0]|0,e=e[1]|0):("radius"in b&&(d=e=b.radius|0),"width"in b&&(d=b.width|0),"height"in b&&(e=b.height|0)),"format"in b&&(k=n[b.format])):"number"===typeof b?(d=b|0,e="number"===typeof c?c|0:d):b||(d=e=1);if(d!==g.width||e!==g.height||k!==g.format)return l.width=g.width=d,l.height=g.height=e,g.format=k,a.bindRenderbuffer(36161,g.renderbuffer),a.renderbufferStorage(36161,k,d,e),f.profile&&(g.stats.size=Q[g.format]* -g.width*g.height),l.format=v[g.format],l}var g=new d(a.createRenderbuffer());u[g.id]=g;e.renderbufferCount++;l(b,c);l.resize=function(b,c){var d=b|0,e=c|0||d;if(d===g.width&&e===g.height)return l;l.width=g.width=d;l.height=g.height=e;a.bindRenderbuffer(36161,g.renderbuffer);a.renderbufferStorage(36161,g.format,d,e);f.profile&&(g.stats.size=Q[g.format]*g.width*g.height);return l};l._reglType="renderbuffer";l._renderbuffer=g;f.profile&&(l.stats=g.stats);l.destroy=function(){g.decRef()};return l},clear:function(){I(u).forEach(q)}, -restore:function(){I(u).forEach(function(b){b.renderbuffer=a.createRenderbuffer();a.bindRenderbuffer(36161,b.renderbuffer);a.renderbufferStorage(36161,b.format,b.width,b.height)});a.bindRenderbuffer(36161,null)}}},Za=[];Za[6408]=4;Za[6407]=3;var Ra=[];Ra[5121]=1;Ra[5126]=4;Ra[36193]=2;var Da=["x","y","z","w"],Xb="blend.func blend.equation stencil.func stencil.opFront stencil.opBack sample.coverage viewport scissor.box polygonOffset.offset".split(" "),Ga={0:0,1:1,zero:0,one:1,"src color":768,"one minus src color":769, -"src alpha":770,"one minus src alpha":771,"dst color":774,"one minus dst color":775,"dst alpha":772,"one minus dst 
alpha":773,"constant color":32769,"one minus constant color":32770,"constant alpha":32771,"one minus constant alpha":32772,"src alpha saturate":776},ab={never:512,less:513,"<":513,equal:514,"=":514,"==":514,"===":514,lequal:515,"<=":515,greater:516,">":516,notequal:517,"!=":517,"!==":517,gequal:518,">=":518,always:519},Ta={0:0,zero:0,keep:7680,replace:7681,increment:7682,decrement:7683, -"increment wrap":34055,"decrement wrap":34056,invert:5386},zb={cw:2304,ccw:2305},Ab=new J(!1,!1,!1,function(){}),$b=function(a,b){function c(){this.endQueryIndex=this.startQueryIndex=-1;this.sum=0;this.stats=null}function e(a,b,d){var e=q.pop()||new c;e.startQueryIndex=a;e.endQueryIndex=b;e.sum=0;e.stats=d;n.push(e)}if(!b.ext_disjoint_timer_query)return null;var f=[],d=[],q=[],n=[],v=[],k=[];return{beginQuery:function(a){var c=f.pop()||b.ext_disjoint_timer_query.createQueryEXT();b.ext_disjoint_timer_query.beginQueryEXT(35007, -c);d.push(c);e(d.length-1,d.length,a)},endQuery:function(){b.ext_disjoint_timer_query.endQueryEXT(35007)},pushScopeStats:e,update:function(){var a,c;a=d.length;if(0!==a){k.length=Math.max(k.length,a+1);v.length=Math.max(v.length,a+1);v[0]=0;var e=k[0]=0;for(c=a=0;c=E.length&&e()}var c=Bb(E,a);E[c]=b}}}function k(){var a=Q.viewport,b=Q.scissor_box;a[0]=a[1]=b[0]=b[1]=0;H.viewportWidth=H.framebufferWidth=H.drawingBufferWidth=a[2]=b[2]=l.drawingBufferWidth;H.viewportHeight=H.framebufferHeight=H.drawingBufferHeight=a[3]=b[3]=l.drawingBufferHeight}function u(){H.tick+=1;H.time=x();k();I.procs.poll()}function m(){A.refresh();k();I.procs.refresh();t&&t.update()}function x(){return(Cb()- -G)/1E3}a=Hb(a);if(!a)return null;var l=a.gl,g=l.getContextAttributes();l.isContextLost();var h=Ib(l,a);if(!h)return null;var r=Eb(),p={vaoCount:0,bufferCount:0,elementsCount:0,framebufferCount:0,shaderCount:0,textureCount:0,cubeCount:0,renderbufferCount:0,maxTextureUnits:0},w=h.extensions,t=$b(l,w),G=Cb(),C=l.drawingBufferWidth,J=l.drawingBufferHeight,H={tick:0,time:0,viewportWidth:C,viewportHeight:J,framebufferWidth:C,framebufferHeight:J,drawingBufferWidth:C,drawingBufferHeight:J,pixelRatio:a.pixelRatio}, -C={elements:null,primitive:4,count:-1,offset:0,instances:-1},M=Yb(l,w),y=Jb(l,p,a,function(a){return K.destroyBuffer(a)}),T=Kb(l,w,y,p),K=Sb(l,w,M,p,y,T,C),F=Tb(l,r,p,a),A=Nb(l,w,M,function(){I.procs.poll()},H,p,a),O=Zb(l,w,M,p,a),S=Rb(l,w,M,A,O,p),I=Wb(l,r,w,M,y,T,A,S,{},K,F,C,H,t,a),r=Ub(l,S,I.procs.poll,H,g,w,M),Q=I.next,N=l.canvas,E=[],R=[],U=[],Z=[a.onDestroy],ca=null;N&&(N.addEventListener("webglcontextlost",f,!1),N.addEventListener("webglcontextrestored",d,!1));var aa=S.setFBO=q({framebuffer:Y.define.call(null, -1,"framebuffer")});m();g=L(q,{clear:function(a){if("framebuffer"in a)if(a.framebuffer&&"framebufferCube"===a.framebuffer_reglType)for(var b=0;6>b;++b)aa(L({framebuffer:a.framebuffer.faces[b]},a),n);else aa(a,n);else n(null,a)},prop:Y.define.bind(null,1),context:Y.define.bind(null,2),"this":Y.define.bind(null,3),draw:q({}),buffer:function(a){return y.create(a,34962,!1,!1)},elements:function(a){return T.create(a,!1)},texture:A.create2D,cube:A.createCube,renderbuffer:O.create,framebuffer:S.create,framebufferCube:S.createCube, -vao:K.createVAO,attributes:g,frame:v,on:function(a,b){var c;switch(a){case "frame":return v(b);case "lost":c=R;break;case "restore":c=U;break;case "destroy":c=Z}c.push(b);return{cancel:function(){for(var a=0;a div > div{ - background-size: cover; - background-position: center; -} - -.note, ul{ - opacity: .5; - max-width: 750px; - max-width: 750px; - margin-left: 0px 
auto; - margin-right: 0px auto; - margin: 0px auto; - margin-top: 1em; - margin-bottom: 1em; - -} - -#columns-height { - margin-bottom: 70px; -} - -.post-summary{ - - margin-bottom: auto; -} - - -#all-shapes{ - pointer-events: none; -} - -#all-shapes .shape{ - outline: 0px !important; -} - -.post-summary{ - display: none; -} - -#pick-metric .top text, #coat-v-gender .top text { - font-weight: 300 !important; -} - diff --git a/spaces/merve/hidden-bias/source/uncertainty-calibration/util.js b/spaces/merve/hidden-bias/source/uncertainty-calibration/util.js deleted file mode 100644 index a0ce5b12a2a642f1186cc4004e90b046a89611f8..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/uncertainty-calibration/util.js +++ /dev/null @@ -1,38 +0,0 @@ -window.initUtil = function(){ - function addAxisLabel(c, xText, yText, xOffset=40, yOffset=-40){ - c.svg.select('.x').append('g') - .translate([c.width/2, xOffset]) - .append('text.axis-label') - .text(xText) - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'}) - - c.svg.select('.y') - .append('g') - .translate([yOffset, c.height/2]) - .append('text.axis-label') - .text(yText) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'}) - } - - function ggPlotBg(c, isBlack=true){ - if (isBlack){ - c.svg.append('rect.bg-rect') - .at({width: c.width, height: c.height, fill: '#eee'}) - .lower() - } - - c.svg.selectAll('.tick').selectAll('line').remove() - c.svg.selectAll('.y .tick') - .append('path').at({d: 'M 0 0 H ' + c.width, stroke: '#fff', strokeWidth: 1}) - c.svg.selectAll('.y text').at({x: -3}) - c.svg.selectAll('.x .tick') - .append('path').at({d: 'M 0 0 V -' + c.height, stroke: '#fff', strokeWidth: 1}) - } - - - return {addAxisLabel, ggPlotBg} -} - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/mike-ravkine/llm-webapps-results/Dockerfile b/spaces/mike-ravkine/llm-webapps-results/Dockerfile deleted file mode 100644 index 97bc418f062694ff493cee4b4b7a67f9848f8760..0000000000000000000000000000000000000000 --- a/spaces/mike-ravkine/llm-webapps-results/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN git clone https://github.com/the-crypt-keeper/llm-webapps.git /code/llm-webapps - -WORKDIR /code/llm-webapps - -CMD ["streamlit", "run", "app.py", "--server.address", "0.0.0.0", "--server.port", "7860"] diff --git a/spaces/mikeee/convbot/convbot/__init__.py b/spaces/mikeee/convbot/convbot/__init__.py deleted file mode 100644 index 9a7bbdebd63e83ccddf18ae7e8621e4d7ad531a3..0000000000000000000000000000000000000000 --- a/spaces/mikeee/convbot/convbot/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -"""Init.""" -__version__ = "0.1.1" -from .convbot import aconvbot, convbot - -__all__ = ( - "convbot", - "aconvbot", -) diff --git a/spaces/mikeee/radiobee-aligner/radiobee/align_sents.py b/spaces/mikeee/radiobee-aligner/radiobee/align_sents.py deleted file mode 100644 index 960fb5ba143f422a9dcee86bda48b382b5dc892c..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/align_sents.py +++ /dev/null @@ -1,77 +0,0 @@ -"""Align sents via gale-church.""" -# pylint: disable=invalid-name - -from typing import List, Tuple # noqa - -import re - -# from itertools import tee -# from more_itertools import ilen -from nltk.translate.gale_church import 
align_blocks - -from radiobee.amend_avec import amend_avec - - -def align_sents(lst1: List[str], lst2: List[str]) -> List[Tuple[str, str]]: - """Align sents. - - >>> lst1, lst2 = ['a', 'bs',], ['aaa', '34', 'a', 'b'] - """ - if isinstance(lst1, str): - lst1 = [lst1] - - if isinstance(lst2, str): - lst2 = [lst2] - - src_blocks = [len(re.sub(r"\s+", "", elm)) for elm in lst1] - tgt_blocks = [len(re.sub(r"\s+", "", elm)) for elm in lst2] - - avec = align_blocks(src_blocks, tgt_blocks) - - len1, len2 = len(lst1), len(lst2) - # lst1, _ = tee(lst1) - # len1 = ilen(_) - # lst2, _ = tee(lst2) - # len2 = ilen(_) - - amended_avec = amend_avec(avec, len1, len2) - - texts = [] - # for elm in aset: - # for elm0, elm1 in amended_avec: - for elm in amended_avec: - # elm0, elm1, elm2 = elm - elm0, elm1 = elm[:2] - _ = [] - - # src_text first - if isinstance(elm0, str): - _.append("") - else: - # _.append(src_text[int(elm0)]) - _.append(lst1[int(elm0)]) - - if isinstance(elm1, str): - _.append("") - else: - # _.append(tgt_text[int(elm0)]) - _.append(lst2[int(elm1)]) - - _a = """ - if isinstance(elm2, str): - _.append("") - else: - _.append(round(elm2, 2)) - # """ - del _a - - texts.append(tuple(_)) - - _ = """ - _ = [] - for elm in texts: - _.extend(elm) - return _ - """ - - return texts diff --git a/spaces/mikeee/radiobee-dev/tests/test_detect.py b/spaces/mikeee/radiobee-dev/tests/test_detect.py deleted file mode 100644 index c4b5557fc2726af6cd2ace7124614427544483d4..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-dev/tests/test_detect.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Test detect.""" -import pytest -from radiobee.detect import detect - - -@pytest.mark.parametrize( - "test_input,expected", [ - ("", "en"), - (" ", "en"), - (" \n ", "en"), - ("注释", "zh"), - ] -) -def test_detect(test_input, expected): - """Test detect.""" - assert detect(test_input) == expected - - # expected set_languages[0], set_languages = ["en", "zh"] - assert detect(test_input, ["en", "zh"]) == expected - - -def test_detect_de(): - """Test detect de.""" - text_de = "4\u3000In der Beschränkung zeigt sich erst der Meister, / Und das Gesetz nur kann uns Freiheit geben. 参见http://www.business-it.nl/files/7d413a5dca62fc735a072b16fbf050b1-27.php." 
# noqa - assert detect(text_de) == "de" - assert detect(text_de, ["en", "zh"]) == "zh" - - -def test_elm1(): - """Test ——撰文:Thomas Gibbons-Neff和Fahim Abed,摄影:Jim Huylebroek=.""" - elm1 = "——撰文:Thomas Gibbons-Neff和Fahim Abed,摄影:Jim Huylebroek" - assert detect(elm1) == "ja" - assert detect(elm1, ["en", "zh"]) == "zh" - - -def test_elm2(): - """Test 在卢旺达基加利的一家牛奶吧。 JACQUES NKINZINGABO FOR THE NEW YORK TIMES.""" - elm2 = "在卢旺达基加利的一家牛奶吧。 JACQUES NKINZINGABO FOR THE NEW YORK TIMES" - assert detect(elm2) == "zh" - assert detect(elm2, ["en", "zh"]) == "zh" diff --git a/spaces/milyiyo/reimagine-it/captioning/utils/rewards.py b/spaces/milyiyo/reimagine-it/captioning/utils/rewards.py deleted file mode 100644 index 668b830cbdef05d6c3eab8d99a07918a325e9157..0000000000000000000000000000000000000000 --- a/spaces/milyiyo/reimagine-it/captioning/utils/rewards.py +++ /dev/null @@ -1,392 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import time -from collections import OrderedDict -import torch - -import sys -try: - sys.path.append("cider") - from pyciderevalcap.ciderD.ciderD import CiderD - from pyciderevalcap.cider.cider import Cider - sys.path.append("coco-caption") - from pycocoevalcap.bleu.bleu import Bleu -except: - print('cider or coco-caption missing') - -CiderD_scorer = None -Cider_scorer = None -Bleu_scorer = None -#CiderD_scorer = CiderD(df='corpus') - - -from .misc import decode_sequence - -def init_scorer(cached_tokens): - global CiderD_scorer - CiderD_scorer = CiderD_scorer or CiderD(df=cached_tokens) - global Cider_scorer - Cider_scorer = Cider_scorer or Cider(df=cached_tokens) - global Bleu_scorer - Bleu_scorer = Bleu_scorer or Bleu(4) - -def array_to_str(arr): - out = '' - for i in range(len(arr)): - out += str(arr[i]) + ' ' - if arr[i] == 0: - break - return out.strip() - -def get_self_critical_reward(greedy_res, data_gts, gen_result, opt): - batch_size = len(data_gts) - gen_result_size = gen_result.shape[0] - seq_per_img = gen_result_size // len(data_gts) # gen_result_size = batch_size * seq_per_img - assert greedy_res.shape[0] == batch_size - - res = OrderedDict() - gen_result = gen_result.data.cpu().numpy() - greedy_res = greedy_res.data.cpu().numpy() - for i in range(gen_result_size): - res[i] = [array_to_str(gen_result[i])] - for i in range(batch_size): - res[gen_result_size + i] = [array_to_str(greedy_res[i])] - - gts = OrderedDict() - for i in range(len(data_gts)): - gts[i] = [array_to_str(data_gts[i][j]) for j in range(len(data_gts[i]))] - - res_ = [{'image_id':i, 'caption': res[i]} for i in range(len(res))] - res__ = {i: res[i] for i in range(len(res_))} - gts_ = {i: gts[i // seq_per_img] for i in range(gen_result_size)} - gts_.update({i+gen_result_size: gts[i] for i in range(batch_size)}) - if opt.cider_reward_weight > 0: - _, cider_scores = CiderD_scorer.compute_score(gts_, res_) - if hasattr(opt, 'verbose') and not opt.verbose: - pass - else: - print('Cider scores:', _) - else: - cider_scores = 0 - if opt.bleu_reward_weight > 0: - _, bleu_scores = Bleu_scorer.compute_score(gts_, res__) - bleu_scores = np.array(bleu_scores[3]) - if hasattr(opt, 'verbose') and not opt.verbose: - pass - else: - print('Bleu scores:', _[3]) - else: - bleu_scores = 0 - scores = opt.cider_reward_weight * cider_scores + opt.bleu_reward_weight * bleu_scores - - unnormalized_reward_mean = scores[:gen_result_size].flatten().mean() - - scores = scores[:gen_result_size].reshape(batch_size, seq_per_img) - 
scores[-batch_size:][:, np.newaxis] - - scores = scores.reshape(gen_result_size) - - rewards = np.repeat(scores[:, np.newaxis], gen_result.shape[1], 1) - - return rewards, unnormalized_reward_mean - - -def get_self_critical_clipscore_reward(greedy_res, data_gts, gen_result, opt, clipscore_model, clip_vis_feats, vocab): - batch_size = len(data_gts) - gen_result_size = gen_result.shape[0] - seq_per_img = gen_result_size // len(data_gts) # gen_result_size = batch_size * seq_per_img - assert greedy_res.shape[0] == batch_size - - B = batch_size - K = seq_per_img - L = gen_result.shape[1] - assert gen_result.shape == (B*K , L) - - # res = OrderedDict() - # gen_result = gen_result.data.cpu().numpy() - # greedy_res = greedy_res.data.cpu().numpy() - # for i in range(gen_result_size): - # res[i] = [array_to_str(gen_result[i])] - # for i in range(batch_size): - # res[gen_result_size + i] = [array_to_str(greedy_res[i])] - - # gts = OrderedDict() - # for i in range(len(data_gts)): - # gts[i] = [array_to_str(data_gts[i][j]) for j in range(len(data_gts[i]))] - - # res_ = [{'image_id':i, 'caption': res[i]} for i in range(len(res))] - # res__ = {i: res[i] for i in range(len(res_))} - # gts_ = {i: gts[i // seq_per_img] for i in range(gen_result_size)} - # gts_.update({i+gen_result_size: gts[i] for i in range(batch_size)}) - - # res = [] - # gen_result = gen_result.data.cpu().numpy() - # greedy_res = greedy_res.data.cpu().numpy() - # # for i in range(gen_result_size): - # # res.append(array_to_str(gen_result[i])) - # res.extend(decode_sequence(vocab, gen_result)) - - - # # for i in range(batch_size): - # # res.append(array_to_str(greedy_res[i])) - # res.extend(decode_sequence(vocab, greedy_res)) - - if clipscore_model.mode == 'refclip_s': - gts = [] - gts_valid_mask = [] - max_n_refs = max([len(_gts) for _gts in data_gts]) - for i in range(len(data_gts)): - _gts = decode_sequence(vocab, data_gts[i]) - # pad references - n_ref = len(_gts) - _gts.extend([''] * (max_n_refs - n_ref)) - gts.extend(_gts) - gts_valid_mask.extend([1] * n_ref + [0] * (max_n_refs - n_ref)) - assert len(gts) == B * max_n_refs - assert len(gts_valid_mask) == B * max_n_refs - - # print(gts) - # print(gts_valid_mask) - # exit() - - - # assert len(res) == B * K + B, len(res) - - # print(res) - # exit() - - if opt.clipscore_reward_weight > 0: - with torch.no_grad(): - clipscore_model.eval() - - # 1) calculate reward - gen_result = gen_result.data.cpu().numpy() - res = decode_sequence(vocab, gen_result) - assert len(res) == B * K, len(res) - - # [B * K, dim) - if getattr(opt, 'use_grammar', False) and not getattr(opt, 'joint_out', False): - text_pre_feat = clipscore_model.text_extract(res, proj_norm=False) - - grammar_logit = clipscore_model.grammar_score_head(text_pre_feat.view(-1, 512)) - grammar_prob = torch.softmax(grammar_logit, dim=-1)[:, 1] - grammar_prob = grammar_prob.view(B*K).detach() - - text_feat = clipscore_model.clip_model.text_projection(text_pre_feat) - text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True) - - else: - text_feat = clipscore_model.text_extract(res) - - - assert text_feat.size() == (B * K, 512), text_feat.size() - assert clip_vis_feats.size() == (B, 512), clip_vis_feats.size() - - # [B * K, dim] - vis_feat = clip_vis_feats.view(B, 1, -1).expand(-1, K, -1).contiguous().view(B * K, -1) - - clip_s = clipscore_model(text_feat=text_feat, img_feat=vis_feat, mode='clip_s') - clip_s = clip_s.view(B * K).detach() - - if clipscore_model.mode == 'refclip_s': - # [B * n_ref, dim] - ref_text_feat = 
clipscore_model.text_extract(gts) - ref_text_mask = torch.tensor(gts_valid_mask, dtype=ref_text_feat.dtype, device=ref_text_feat.device) - - assert ref_text_feat.size() == (B * max_n_refs, 512), ref_text_feat.size() - assert ref_text_mask.size() == (B * max_n_refs,), ref_text_mask.size() - - # [B * K] - refclip_s = clipscore_model.calc_refclip_s( - text_feat=text_feat, img_feat=vis_feat, - ref_text_feat=ref_text_feat.view(B, 1, max_n_refs, -1).expand(-1, K, -1, -1).contiguous().view(B * K * max_n_refs, -1), - ref_text_mask=ref_text_mask.view(B, 1, max_n_refs).expand(-1, K, -1).contiguous().view(B * K * max_n_refs), - clip_s=clip_s) - refclip_s = refclip_s.view(B * K).detach() - - # 2) calcualte reward for baseline (greedy) - greedy_res = greedy_res.data.cpu().numpy() - res = decode_sequence(vocab, greedy_res) - assert len(res) == B, len(res) - - # [B, dim) - - if getattr(opt, 'use_grammar', False) and getattr(opt, 'use_grammar_baseline', False) and not getattr(opt, 'joint_out', False): - text_pre_feat = clipscore_model.text_extract(res, proj_norm=False) - - grammar_logit = clipscore_model.grammar_score_head(text_pre_feat.view(-1, 512)) - grammar_prob_baseline = torch.softmax(grammar_logit, dim=-1)[:, 1] - grammar_prob_baseline = grammar_prob_baseline.view(B).detach() - - text_feat = clipscore_model.clip_model.text_projection(text_pre_feat) - text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True) - else: - text_feat = clipscore_model.text_extract(res) - - assert text_feat.size() == (B, 512), text_feat.size() - assert clip_vis_feats.size() == (B, 512), clip_vis_feats.size() - - vis_feat = clip_vis_feats.view(B, 512) - - # [B] - clip_s_baseline = clipscore_model(text_feat=text_feat, img_feat=vis_feat, mode='clip_s') - clip_s_baseline = clip_s_baseline.view(B).detach() - - if clipscore_model.mode == 'refclip_s': - # # [B * n_ref] - # ref_text_feat = clipscore_model.text_extract(gts) - # ref_text_mask = torch.tensor(gts_valid_mask, dtype=ref_text_feat.dtype, device=ref_text_feat.device) - # assert ref_text_feat.size() == (B * max_n_refs, 512), ref_text_feat.size() - # assert ref_text_mask.size() == (B * max_n_refs), ref_text_mask.size() - - # [B] - refclip_s_baseline = clipscore_model.calc_refclip_s( - text_feat=text_feat, img_feat=vis_feat, - ref_text_feat=ref_text_feat, - ref_text_mask=ref_text_mask, - clip_s=clip_s_baseline) - refclip_s_baseline = refclip_s_baseline.view(B).detach() - - if clipscore_model.mode == 'clip_s': - rewards = clip_s - clip_s_baseline.view(B, 1).expand(-1, K).contiguous().flatten() - unnormalized_mean_reward = clip_s.mean() - elif clipscore_model.mode == 'refclip_s': - rewards = refclip_s - refclip_s_baseline.view(B, 1).expand(-1, K).contiguous().flatten() - unnormalized_mean_reward = refclip_s.mean() - - # # [B * K + B, dim) - # text_feat = clipscore_model.text_extract(res) - # assert text_feat.size() == (B * K + B, 512), text_feat.size() - - # assert clip_vis_feats.size() == (B, 512), clip_vis_feats.size() - - # # [B, dim] -> [B * K + B, dim] - # # vis_feat = clip_vis_feats.view(B, 1, -1).expand(-1, K + 1, -1).contiguous().view(B * (K + 1), -1) - # # vis_feat = clip_vis_feats.view(1, B, -1).expand(K + 1, -1, -1).contiguous().view((K + 1) * B, -1) - - # # [B * K, dim] - # gen_vis_feat = clip_vis_feats.view(B, 1, -1).expand(-1, K, -1).contiguous().view(B * K, -1) - # # [B, dim] - # greedy_vis_feat = clip_vis_feats - # # [B * K + B, dim] - # vis_feat = torch.cat([gen_vis_feat, greedy_vis_feat], dim=0) - - # # if clipscore_model.mode == 'clip_s': - # # [B * K + 
B, dim] - # clip_s = clipscore_model(text_feat=text_feat, img_feat=vis_feat) - # clip_s = clip_s.view(B * K + B).detach() - - - # if clipscore_model.mode == 'refclip_s': - # # [B * K, dim] - # ref_text_feat = clipscore_model.text_extract(gts) - - # clipscore_scores = clipscore_model.calc_refclip_s(text_feat=text_feat, img_feat=vis_feat, ref_text_feat=ref_text_feat, clip_s=clip_s) - # clipscore_scores = clipscore_scores.view(B * K + B).detach() - - if getattr(opt, 'use_grammar', False) and not getattr(opt, 'joint_out', False): - - if getattr(opt, 'use_grammar_baseline', False): - grammar_rewards = grammar_prob - grammar_prob_baseline.view(B, 1).expand(-1, K).contiguous().flatten() - else: - grammar_rewards = grammar_prob - else: - grammar_rewards = None - - - if hasattr(opt, 'verbose') and not opt.verbose: - pass - else: - if clipscore_model.mode == 'clip_s': - print('CLIP-S:', rewards) - elif clipscore_model.mode == 'refclip_s': - print('RefCLIP-S:', rewards) - else: - rewards = torch.zeros(B, L) - unnormalized_mean_reward = None - grammar_rewards = None - - - rewards = opt.clipscore_reward_weight * rewards - - - # scores = scores[:gen_result_size].reshape(batch_size, seq_per_img) - scores[-batch_size:][:, np.newaxis] - # scores = scores.reshape(gen_result_size) - # rewards = np.repeat(scores[:, np.newaxis], gen_result.shape[1], 1) - - # [B, K] - # scores = scores[:gen_result_size].reshape(B, K) - scores[-B:].unsqueeze(1) - - # [B*K, L] - # rewards = scores.view(-1, 1).expand(-1, L).contiguous() - rewards = rewards.view(-1, 1).expand(-1, L).contiguous() - - if getattr(opt, 'use_grammar', False) and not getattr(opt, 'joint_out', False): - grammar_rewards = grammar_rewards.view(-1, 1).expand(-1, L).contiguous() - - return rewards, unnormalized_mean_reward, grammar_rewards - -def get_scores(data_gts, gen_result, opt): - batch_size = gen_result.size(0)# batch_size = sample_size * seq_per_img - seq_per_img = batch_size // len(data_gts) - - res = OrderedDict() - - gen_result = gen_result.data.cpu().numpy() - for i in range(batch_size): - res[i] = [array_to_str(gen_result[i])] - - gts = OrderedDict() - for i in range(len(data_gts)): - gts[i] = [array_to_str(data_gts[i][j]) for j in range(len(data_gts[i]))] - - res_ = [{'image_id':i, 'caption': res[i]} for i in range(batch_size)] - res__ = {i: res[i] for i in range(batch_size)} - gts = {i: gts[i // seq_per_img] for i in range(batch_size)} - if opt.cider_reward_weight > 0: - _, cider_scores = CiderD_scorer.compute_score(gts, res_) - # print('Cider scores:', _) - if hasattr(opt, 'verbose') and not opt.verbose: - pass - else: - print('Cider scores:', _) - else: - cider_scores = 0 - if opt.bleu_reward_weight > 0: - _, bleu_scores = Bleu_scorer.compute_score(gts, res__) - bleu_scores = np.array(bleu_scores[3]) - # print('Bleu scores:', _[3]) - if hasattr(opt, 'verbose') and not opt.verbose: - pass - else: - print('Bleu scores:', _[3]) - else: - bleu_scores = 0 - - scores = opt.cider_reward_weight * cider_scores + opt.bleu_reward_weight * bleu_scores - - return scores - -def get_self_cider_scores(data_gts, gen_result, opt): - batch_size = gen_result.size(0)# batch_size = sample_size * seq_per_img - seq_per_img = batch_size // len(data_gts) - - res = [] - - gen_result = gen_result.data.cpu().numpy() - for i in range(batch_size): - res.append(array_to_str(gen_result[i])) - - scores = [] - for i in range(len(data_gts)): - tmp = Cider_scorer.my_self_cider([res[i*seq_per_img:(i+1)*seq_per_img]]) - def get_div(eigvals): - eigvals = np.clip(eigvals, 0, None) - 
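# Diversity score (descriptive note added for the undocumented helper): -log(sqrt(largest eigenvalue) / sum of sqrt(eigenvalues)) / log(n_eigenvalues); np.linalg.eigvalsh returns eigenvalues in ascending order, so eigvals[-1] is the largest. -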
return -np.log(np.sqrt(eigvals[-1]) / (np.sqrt(eigvals).sum())) / np.log(len(eigvals)) - scores.append(get_div(np.linalg.eigvalsh(tmp[0]/10))) - - scores = np.array(scores) - - return scores diff --git a/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/base_evaluator.py b/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/base_evaluator.py deleted file mode 100644 index e63ed45a550e879a57bef9cb9fff251d73181169..0000000000000000000000000000000000000000 --- a/spaces/mingyuan/ReMoDiffuse/mogen/core/evaluation/evaluators/base_evaluator.py +++ /dev/null @@ -1,144 +0,0 @@ -import torch -import numpy as np -from ..utils import get_metric_statistics - - -class BaseEvaluator(object): - - def __init__(self, - batch_size=None, - drop_last=False, - replication_times=1, - replication_reduction='statistics', - eval_begin_idx=None, - eval_end_idx=None): - self.batch_size = batch_size - self.drop_last = drop_last - self.replication_times = replication_times - self.replication_reduction = replication_reduction - assert replication_reduction in ['statistics', 'mean', 'concat'] - self.eval_begin_idx = eval_begin_idx - self.eval_end_idx = eval_end_idx - - def evaluate(self, results): - total_len = len(results) - partial_len = total_len // self.replication_times - all_metrics = [] - for replication_idx in range(self.replication_times): - partial_results = results[ - replication_idx * partial_len: (replication_idx + 1) * partial_len] - if self.batch_size is not None: - batch_metrics = [] - for batch_start in range(self.eval_begin_idx, self.eval_end_idx, self.batch_size): - batch_results = partial_results[batch_start: batch_start + self.batch_size] - if len(batch_results) < self.batch_size and self.drop_last: - continue - batch_metrics.append(self.single_evaluate(batch_results)) - all_metrics.append(self.concat_batch_metrics(batch_metrics)) - else: - batch_results = partial_results[self.eval_begin_idx: self.eval_end_idx] - all_metrics.append(self.single_evaluate(batch_results)) - all_metrics = np.stack(all_metrics, axis=0) - if self.replication_reduction == 'statistics': - values = get_metric_statistics(all_metrics, self.replication_times) - elif self.replication_reduction == 'mean': - values = np.mean(all_metrics, axis=0) - elif self.replication_reduction == 'concat': - values = all_metrics - return self.parse_values(values) - - def prepare_results(self, results): - text = [] - pred_motion = [] - pred_motion_length = [] - pred_motion_mask = [] - motion = [] - motion_length = [] - motion_mask = [] - token = [] - # count the maximum motion length - T = max([result['motion'].shape[0] for result in results]) - for result in results: - cur_motion = result['motion'] - if cur_motion.shape[0] < T: - padding_values = torch.zeros((T - cur_motion.shape[0], cur_motion.shape[1])) - padding_values = padding_values.type_as(pred_motion) - cur_motion = torch.cat([cur_motion, padding_values], dim=0) - motion.append(cur_motion) - cur_pred_motion = result['pred_motion'] - if cur_pred_motion.shape[0] < T: - padding_values = torch.zeros((T - cur_pred_motion.shape[0], cur_pred_motion.shape[1])) - padding_values = padding_values.type_as(cur_pred_motion) - cur_pred_motion = torch.cat([cur_pred_motion, padding_values], dim=0) - pred_motion.append(cur_pred_motion) - cur_motion_mask = result['motion_mask'] - if cur_motion_mask.shape[0] < T: - padding_values = torch.zeros((T - cur_motion_mask.shape[0])) - padding_values = padding_values.type_as(cur_motion_mask) - cur_motion_mask= torch.cat([cur_motion_mask, 
padding_values], dim=0) - motion_mask.append(cur_motion_mask) - cur_pred_motion_mask = result['pred_motion_mask'] - if cur_pred_motion_mask.shape[0] < T: - padding_values = torch.zeros((T - cur_pred_motion_mask.shape[0])) - padding_values = padding_values.type_as(cur_pred_motion_mask) - cur_pred_motion_mask= torch.cat([cur_pred_motion_mask, padding_values], dim=0) - pred_motion_mask.append(cur_pred_motion_mask) - motion_length.append(result['motion_length'].item()) - pred_motion_length.append(result['pred_motion_length'].item()) - if 'text' in result.keys(): - text.append(result['text']) - if 'token' in result.keys(): - token.append(result['token']) - - motion = torch.stack(motion, dim=0) - pred_motion = torch.stack(pred_motion, dim=0) - motion_mask = torch.stack(motion_mask, dim=0) - pred_motion_mask = torch.stack(pred_motion_mask, dim=0) - motion_length = torch.Tensor(motion_length).to(motion.device).long() - pred_motion_length = torch.Tensor(pred_motion_length).to(motion.device).long() - output = { - 'pred_motion': pred_motion, - 'pred_motion_mask': pred_motion_mask, - 'pred_motion_length': pred_motion_length, - 'motion': motion, - 'motion_mask': motion_mask, - 'motion_length': motion_length, - 'text': text, - 'token': token - } - return output - - def to_device(self, device): - for model in self.model_list: - model.to(device) - - def motion_encode(self, motion, motion_length, motion_mask, device): - N = motion.shape[0] - motion_emb = [] - batch_size = 32 - cur_idx = 0 - with torch.no_grad(): - while cur_idx < N: - cur_motion = motion[cur_idx: cur_idx + batch_size].to(device) - cur_motion_length = motion_length[cur_idx: cur_idx + batch_size].to(device) - cur_motion_mask = motion_mask[cur_idx: cur_idx + batch_size].to(device) - cur_motion_emb = self.motion_encoder(cur_motion, cur_motion_length, cur_motion_mask) - motion_emb.append(cur_motion_emb) - cur_idx += batch_size - motion_emb = torch.cat(motion_emb, dim=0) - return motion_emb - - def text_encode(self, text, token, device): - N = len(text) - text_emb = [] - batch_size = 32 - cur_idx = 0 - with torch.no_grad(): - while cur_idx < N: - cur_text = text[cur_idx: cur_idx + batch_size] - cur_token = token[cur_idx: cur_idx + batch_size] - cur_text_emb = self.text_encoder(cur_text, cur_token, device) - text_emb.append(cur_text_emb) - cur_idx += batch_size - text_emb = torch.cat(text_emb, dim=0) - return text_emb \ No newline at end of file diff --git a/spaces/miyaaa666/bingo/src/lib/utils.ts b/spaces/miyaaa666/bingo/src/lib/utils.ts deleted file mode 100644 index 114b4540310253f97e65ec965728bf674be9ef6f..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/lib/utils.ts +++ /dev/null @@ -1,159 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export const defaultUID = 
Math.random().toString(36).slice(2) - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = 86400 * 30 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_COOKIE = process.env.BING_COOKIE, - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - BING_HEADER = process.env.BING_HEADER, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? 
'1', - } = cookies - - if (BING_HEADER) { - const headers = extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - if (/^(1|true|yes)$/.test(String(IMAGE_ONLY)) && type !== 'image') { - // 仅画图时设置 cookie - headers.cookie = `_U=${defaultUID}` - } - if (!headers['user-agent']) { - throw new Error('身份信息设置有误,请参考文档重新设置') - } - return headers - } - - const ua = parseUA(BING_UA) - - if (!BING_COOKIE) { - BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || defaultUID // hf 暂时不用 Cookie 也可以正常使用 - } - - const parsedCookie = parseCookie(BING_COOKIE, '_U') - if (!parsedCookie) { - throw new Error('Invalid Cookie') - } - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${parsedCookie}` || '', - } -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/mlpc-lab/BLIVA/bliva/common/logger.py b/spaces/mlpc-lab/BLIVA/bliva/common/logger.py deleted file mode 100644 index c99d01724cb45c0d6bec1a9e2b48ca15aa3cb8db..0000000000000000000000000000000000000000 --- a/spaces/mlpc-lab/BLIVA/bliva/common/logger.py +++ /dev/null @@ -1,193 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import logging -import time -from collections import defaultdict, deque - -import torch -import torch.distributed as dist - -from bliva.common import dist_utils - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! 
- """ - if not dist_utils.is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda") - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value, - ) - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError( - "'{}' object has no attribute '{}'".format(type(self).__name__, attr) - ) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append("{}: {}".format(name, str(meter))) - return self.delimiter.join(loss_str) - - def global_avg(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append("{}: {:.4f}".format(name, meter.global_avg)) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None): - i = 0 - if not header: - header = "" - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt="{avg:.4f}") - data_time = SmoothedValue(fmt="{avg:.4f}") - space_fmt = ":" + str(len(str(len(iterable)))) + "d" - log_msg = [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - ] - if torch.cuda.is_available(): - log_msg.append("max mem: {memory:.0f}") - log_msg = self.delimiter.join(log_msg) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB, - ) - ) - else: - print( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - ) - ) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print( - "{} Total time: {} ({:.4f} s / it)".format( - header, total_time_str, total_time / len(iterable) - ) - ) - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - -def setup_logger(): - 
logging.basicConfig( - level=logging.INFO if dist_utils.is_main_process() else logging.WARN, - format="%(asctime)s [%(levelname)s] %(message)s", - handlers=[logging.StreamHandler()], - ) \ No newline at end of file diff --git a/spaces/mrstuffandthings/Bark-Voice-Cloning/bark/api.py b/spaces/mrstuffandthings/Bark-Voice-Cloning/bark/api.py deleted file mode 100644 index 7a4319ceaa13798912637290f8e9e88c50d5420a..0000000000000000000000000000000000000000 --- a/spaces/mrstuffandthings/Bark-Voice-Cloning/bark/api.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import Dict, Optional, Union - -import numpy as np - -from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic - - -def generate_with_settings(text_prompt, semantic_temp=0.6, eos_p=0.2, coarse_temp=0.7, fine_temp=0.5, voice_name=None, output_full=False): - - # generation with more control - x_semantic = generate_text_semantic( - text_prompt, - history_prompt=voice_name, - temp=semantic_temp, - min_eos_p = eos_p, - use_kv_caching=True - ) - - x_coarse_gen = generate_coarse( - x_semantic, - history_prompt=voice_name, - temp=coarse_temp, - use_kv_caching=True - ) - x_fine_gen = generate_fine( - x_coarse_gen, - history_prompt=voice_name, - temp=fine_temp, - ) - - if output_full: - full_generation = { - 'semantic_prompt': x_semantic, - 'coarse_prompt': x_coarse_gen, - 'fine_prompt': x_fine_gen - } - return full_generation, codec_decode(x_fine_gen) - return codec_decode(x_fine_gen) - - -def text_to_semantic( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, -): - """Generate semantic array from text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - - Returns: - numpy semantic array to be fed into `semantic_to_waveform` - """ - x_semantic = generate_text_semantic( - text, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - return x_semantic - - -def semantic_to_waveform( - semantic_tokens: np.ndarray, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from semantic input. 
- - Args: - semantic_tokens: semantic token output from `text_to_semantic` - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - coarse_tokens = generate_coarse( - semantic_tokens, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - fine_tokens = generate_fine( - coarse_tokens, - history_prompt=history_prompt, - temp=0.5, - ) - audio_arr = codec_decode(fine_tokens) - if output_full: - full_generation = { - "semantic_prompt": semantic_tokens, - "coarse_prompt": coarse_tokens, - "fine_prompt": fine_tokens, - } - return full_generation, audio_arr - return audio_arr - - -def save_as_prompt(filepath, full_generation): - assert(filepath.endswith(".npz")) - assert(isinstance(full_generation, dict)) - assert("semantic_prompt" in full_generation) - assert("coarse_prompt" in full_generation) - assert("fine_prompt" in full_generation) - np.savez(filepath, **full_generation) - - -def generate_audio( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - text_temp: float = 0.7, - waveform_temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from input text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - text_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - semantic_tokens = text_to_semantic( - text, - history_prompt=history_prompt, - temp=text_temp, - silent=silent, - ) - out = semantic_to_waveform( - semantic_tokens, - history_prompt=history_prompt, - temp=waveform_temp, - silent=silent, - output_full=output_full, - ) - if output_full: - full_generation, audio_arr = out - return full_generation, audio_arr - else: - audio_arr = out - return audio_arr diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/megatron_11b/detok.py b/spaces/mshukor/UnIVAL/fairseq/examples/megatron_11b/detok.py deleted file mode 100644 index 49921b28a1f35c6216b5ed85729453524e7a049d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/megatron_11b/detok.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
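-# Usage sketch (illustrative, not part of the original header): feed tokenized text
-# via files or stdin; the script prints Moses-detokenized lines with "@"-joiners and
-# stray spaces around "=" removed, e.g.
-#   python detok.py wiki.tok.txt > wiki.detok.txt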
- -import argparse -import fileinput - -import sacremoses - - -def main(): - parser = argparse.ArgumentParser(description="") - parser.add_argument("files", nargs="*", help="input files") - args = parser.parse_args() - - detok = sacremoses.MosesDetokenizer() - - for line in fileinput.input(args.files, openhook=fileinput.hook_compressed): - print( - detok.detokenize(line.strip().split(" ")) - .replace(" @", "") - .replace("@ ", "") - .replace(" =", "=") - .replace("= ", "=") - .replace(" – ", "–") - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/commonsense_qa/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/roberta/commonsense_qa/__init__.py deleted file mode 100644 index 42d21f35eb3dd33a053dcf0edd5eadd2dff11294..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/roberta/commonsense_qa/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import commonsense_qa_task # noqa diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py b/spaces/mshukor/UnIVAL/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py deleted file mode 100644 index 079db13e61c5ef46d1b1d288012145148eb0be04..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss - - -@register_criterion("label_smoothed_cross_entropy_r3f") -class LabelSmoothedCrossEntropyR3FCriterion(FairseqCriterion): - def __init__( - self, task, sentence_avg, label_smoothing, eps, r3f_lambda, noise_type - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.label_smoothing = label_smoothing - self.eps = eps - self.r3f_lambda = r3f_lambda - self.noise_type = noise_type - if self.noise_type in {"normal"}: - self.noise_sampler = torch.distributions.normal.Normal( - loc=0.0, scale=self.eps - ) - elif self.noise_type == "uniform": - self.noise_sampler = torch.distributions.uniform.Uniform( - low=-self.eps, high=self.eps - ) - else: - raise Exception(f"unrecognized noise type {self.noise_type}") - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--label-smoothing', default=0., type=float, metavar='D', - help='epsilon for label smoothing, 0 means no label smoothing') - parser.add_argument('--eps', type=float, default=1e-5, - help='noise eps') - parser.add_argument('--r3f-lambda', type=float, default=1.0, - help='lambda for combining logistic loss and noisy KL loss') - parser.add_argument('--noise-type', type=str, default='normal', - choices=['normal', 'uniform'], - help='type of noises') - # fmt: on - - def _get_symm_kl(self, noised_logits, input_logits): - return ( - F.kl_div( - F.log_softmax(noised_logits, dim=-1, dtype=torch.float32), - F.softmax(input_logits, dim=-1, dtype=torch.float32), - None, - 
None, - "sum", - ) - + F.kl_div( - F.log_softmax(input_logits, dim=-1, dtype=torch.float32), - F.softmax(noised_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - ) / noised_logits.size(0) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - token_embeddings = model.encoder.embed_tokens(sample["net_input"]["src_tokens"]) - input_logits, extra = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss( - model, (input_logits, extra), sample, reduce=reduce - ) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - - if model.training: - noise = self.noise_sampler.sample(sample_shape=token_embeddings.shape).to( - token_embeddings - ) - noised_embeddings = token_embeddings.clone() + noise - - noised_logits, _ = model( - **sample["net_input"], token_embeddings=noised_embeddings - ) - symm_kl = self._get_symm_kl(noised_logits, input_logits) - - if model.training: - symm_kl = symm_kl * sample_size - loss = loss + self.r3f_lambda * symm_kl - - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - - if model.training: - logging_output.update( - symm_kl=utils.item(symm_kl.data) if reduce else symm_kl.data - ) - - return loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.view(-1, lprobs.size(-1)) - target = model.get_targets(sample, net_output).view(-1, 1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, - target, - self.label_smoothing, - ignore_index=self.padding_idx, - reduce=reduce, - ) - return loss, nll_loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - symm_kl_sum = sum(log.get("symm_kl", 0) for log in logging_outputs) - - metrics.log_scalar("symm_kl", symm_kl_sum / sample_size, sample_size, round=3) - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/llm_utils.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/llm_utils.py deleted file mode 100644 index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/llm_utils.py +++ /dev/null @@ -1,172 +0,0 @@ -from __future__ import annotations - -import time -from ast import List - -import openai -from colorama import Fore, Style -from openai.error import APIError, RateLimitError - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - -openai.api_key = CFG.openai_api_key - - -def call_ai_function( - function: str, args: list, description: str, model: str | None = None -) -> str: - """Call an AI function - - This is a magic function that can do anything with no-code. See - https://github.com/Torantulino/AI-Functions for more info. - - Args: - function (str): The function to call - args (list): The arguments to pass to the function - description (str): The description of the function - model (str, optional): The model to use. Defaults to None. - - Returns: - str: The response from the function - """ - if model is None: - model = CFG.smart_llm_model - # For each arg, if any are None, convert to "None": - args = [str(arg) if arg is not None else "None" for arg in args] - # parse args to comma separated string - args = ", ".join(args) - messages = [ - { - "role": "system", - "content": f"You are now the following python function: ```# {description}" - f"\n{function}```\n\nOnly respond with your `return` value.", - }, - {"role": "user", "content": args}, - ] - - return create_chat_completion(model=model, messages=messages, temperature=0) - - -# Overly simple abstraction until we create something better -# simple retry mechanism when getting a rate error or a bad gateway -def create_chat_completion( - messages: list, # type: ignore - model: str | None = None, - temperature: float = CFG.temperature, - max_tokens: int | None = None, -) -> str: - """Create a chat completion using the OpenAI API - - Args: - messages (list[dict[str, str]]): The messages to send to the chat completion - model (str, optional): The model to use. Defaults to None. - temperature (float, optional): The temperature to use. Defaults to 0.9. - max_tokens (int, optional): The max tokens to use. Defaults to None. - - Returns: - str: The response from the chat completion - """ - response = None - num_retries = 10 - warned_user = False - if CFG.debug_mode: - print( - Fore.GREEN - + f"Creating chat completion with model {model}, temperature {temperature}," - f" max_tokens {max_tokens}" + Fore.RESET - ) - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - response = openai.ChatCompletion.create( - deployment_id=CFG.get_azure_deployment_id_for_model(model), - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - else: - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - break - except RateLimitError: - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"Reached rate limit, passing..." + Fore.RESET, - ) - if not warned_user: - logger.double_check( - f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. 
" - + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}" - ) - warned_user = True - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) - if response is None: - logger.typewriter_log( - "FAILED TO GET RESPONSE FROM OPENAI", - Fore.RED, - "Auto-GPT has failed to get a response from OpenAI's services. " - + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.", - ) - logger.double_check() - if CFG.debug_mode: - raise RuntimeError(f"Failed to get response after {num_retries} retries") - else: - quit(1) - - return response.choices[0].message["content"] - - -def create_embedding_with_ada(text) -> list: - """Create an embedding with text-ada-002 using the OpenAI SDK""" - num_retries = 10 - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - return openai.Embedding.create( - input=[text], - engine=CFG.get_azure_deployment_id_for_model( - "text-embedding-ada-002" - ), - )["data"][0]["embedding"] - else: - return openai.Embedding.create( - input=[text], model="text-embedding-ada-002" - )["data"][0]["embedding"] - except RateLimitError: - pass - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) diff --git a/spaces/multimodalart/latentdiffusion/latent-diffusion/main.py b/spaces/multimodalart/latentdiffusion/latent-diffusion/main.py deleted file mode 100644 index e8e18c18fbb01f2e16d9376ea1bfb51f3b5df601..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/latentdiffusion/latent-diffusion/main.py +++ /dev/null @@ -1,741 +0,0 @@ -import argparse, os, sys, datetime, glob, importlib, csv -import numpy as np -import time -import torch -import torchvision -import pytorch_lightning as pl - -from packaging import version -from omegaconf import OmegaConf -from torch.utils.data import random_split, DataLoader, Dataset, Subset -from functools import partial -from PIL import Image - -from pytorch_lightning import seed_everything -from pytorch_lightning.trainer import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor -from pytorch_lightning.utilities.distributed import rank_zero_only -from pytorch_lightning.utilities import rank_zero_info - -from ldm.data.base import Txt2ImgIterableBaseDataset -from ldm.util import instantiate_from_config - - -def get_parser(**parser_kwargs): - def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - parser = argparse.ArgumentParser(**parser_kwargs) - parser.add_argument( - "-n", - "--name", - type=str, - const=True, - default="", - nargs="?", - help="postfix for logdir", - ) - parser.add_argument( - "-r", - "--resume", - type=str, - const=True, - default="", - nargs="?", - help="resume from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base 
configs. Loaded from left-to-right. " - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-t", - "--train", - type=str2bool, - const=True, - default=False, - nargs="?", - help="train", - ) - parser.add_argument( - "--no-test", - type=str2bool, - const=True, - default=False, - nargs="?", - help="disable test", - ) - parser.add_argument( - "-p", - "--project", - help="name of new or path to existing project" - ) - parser.add_argument( - "-d", - "--debug", - type=str2bool, - nargs="?", - const=True, - default=False, - help="enable post-mortem debugging", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=23, - help="seed for seed_everything", - ) - parser.add_argument( - "-f", - "--postfix", - type=str, - default="", - help="post-postfix for default name", - ) - parser.add_argument( - "-l", - "--logdir", - type=str, - default="logs", - help="directory for logging dat shit", - ) - parser.add_argument( - "--scale_lr", - type=str2bool, - nargs="?", - const=True, - default=True, - help="scale base-lr by ngpu * batch_size * n_accumulate", - ) - return parser - - -def nondefault_trainer_args(opt): - parser = argparse.ArgumentParser() - parser = Trainer.add_argparse_args(parser) - args = parser.parse_args([]) - return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k)) - - -class WrappedDataset(Dataset): - """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset""" - - def __init__(self, dataset): - self.data = dataset - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -def worker_init_fn(_): - worker_info = torch.utils.data.get_worker_info() - - dataset = worker_info.dataset - worker_id = worker_info.id - - if isinstance(dataset, Txt2ImgIterableBaseDataset): - split_size = dataset.num_records // worker_info.num_workers - # reset num_records to the true number to retain reliable length information - dataset.sample_ids = dataset.valid_ids[worker_id * split_size:(worker_id + 1) * split_size] - current_id = np.random.choice(len(np.random.get_state()[1]), 1) - return np.random.seed(np.random.get_state()[1][current_id] + worker_id) - else: - return np.random.seed(np.random.get_state()[1][0] + worker_id) - - -class DataModuleFromConfig(pl.LightningDataModule): - def __init__(self, batch_size, train=None, validation=None, test=None, predict=None, - wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False, - shuffle_val_dataloader=False): - super().__init__() - self.batch_size = batch_size - self.dataset_configs = dict() - self.num_workers = num_workers if num_workers is not None else batch_size * 2 - self.use_worker_init_fn = use_worker_init_fn - if train is not None: - self.dataset_configs["train"] = train - self.train_dataloader = self._train_dataloader - if validation is not None: - self.dataset_configs["validation"] = validation - self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader) - if test is not None: - self.dataset_configs["test"] = test - self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader) - if predict is not None: - self.dataset_configs["predict"] = predict - self.predict_dataloader = self._predict_dataloader - self.wrap = wrap - - def prepare_data(self): - for data_cfg in self.dataset_configs.values(): - instantiate_from_config(data_cfg) - - def setup(self, stage=None): - self.datasets = dict( - (k, 
instantiate_from_config(self.dataset_configs[k])) - for k in self.dataset_configs) - if self.wrap: - for k in self.datasets: - self.datasets[k] = WrappedDataset(self.datasets[k]) - - def _train_dataloader(self): - is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset) - if is_iterable_dataset or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["train"], batch_size=self.batch_size, - num_workers=self.num_workers, shuffle=False if is_iterable_dataset else True, - worker_init_fn=init_fn) - - def _val_dataloader(self, shuffle=False): - if isinstance(self.datasets['validation'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["validation"], - batch_size=self.batch_size, - num_workers=self.num_workers, - worker_init_fn=init_fn, - shuffle=shuffle) - - def _test_dataloader(self, shuffle=False): - is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset) - if is_iterable_dataset or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - - # do not shuffle dataloader for iterable dataset - shuffle = shuffle and (not is_iterable_dataset) - - return DataLoader(self.datasets["test"], batch_size=self.batch_size, - num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle) - - def _predict_dataloader(self, shuffle=False): - if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn: - init_fn = worker_init_fn - else: - init_fn = None - return DataLoader(self.datasets["predict"], batch_size=self.batch_size, - num_workers=self.num_workers, worker_init_fn=init_fn) - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_keyboard_interrupt(self, trainer, pl_module): - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(self.ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - if "callbacks" in self.lightning_config: - if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']: - os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True) - print("Project config") - print(OmegaConf.to_yaml(self.config)) - OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(OmegaConf.to_yaml(self.lightning_config)) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, 
clamp=True, increase_log_steps=True, - rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False, - log_images_kwargs=None): - super().__init__() - self.rescale = rescale - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - self.disabled = disabled - self.log_on_batch_idx = log_on_batch_idx - self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} - self.log_first_step = log_first_step - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - if self.rescale: - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) - grid = grid.numpy() - grid = (grid * 255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step - if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
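- # Sampled images are clamped to [-1, 1] here; they are rescaled to [0, 1] in log_local (if self.rescale) and in _testtube before being saved or logged.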
- - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, check_idx): - if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and ( - check_idx > 0 or self.log_first_step): - try: - self.log_steps.pop(0) - except IndexError as e: - print(e) - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled and (pl_module.global_step > 0 or self.log_first_step): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled and pl_module.global_step > 0: - self.log_img(pl_module, batch, batch_idx, split="val") - if hasattr(pl_module, 'calibrate_grad_norm'): - if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0: - self.log_gradients(trainer, pl_module, batch_idx=batch_idx) - - -class CUDACallback(Callback): - # see https://github.com/SeanNaren/minGPT/blob/master/mingpt/callback.py - def on_train_epoch_start(self, trainer, pl_module): - # Reset the memory use counter - torch.cuda.reset_peak_memory_stats(trainer.root_gpu) - torch.cuda.synchronize(trainer.root_gpu) - self.start_time = time.time() - - def on_train_epoch_end(self, trainer, pl_module, outputs): - torch.cuda.synchronize(trainer.root_gpu) - max_memory = torch.cuda.max_memory_allocated(trainer.root_gpu) / 2 ** 20 - epoch_time = time.time() - self.start_time - - try: - max_memory = trainer.training_type_plugin.reduce(max_memory) - epoch_time = trainer.training_type_plugin.reduce(epoch_time) - - rank_zero_info(f"Average Epoch time: {epoch_time:.2f} seconds") - rank_zero_info(f"Average Peak memory {max_memory:.2f}MiB") - except AttributeError: - pass - - -if __name__ == "__main__": - # custom parser to specify config files, train, test and debug mode, - # postfix, resume. - # `--key value` arguments are interpreted as arguments to the trainer. - # `nested.key=value` arguments are interpreted as config parameters. - # configs are merged from left-to-right followed by command line parameters. 
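- # Illustrative invocation (the config path below is hypothetical): train on GPU 0
- # and override a config value from the command line:
- #   python main.py --base configs/example.yaml -t --gpus 0, model.base_learning_rate=1.0e-6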
-
-    # model:
-    #   base_learning_rate: float
-    #   target: path to lightning module
-    #   params:
-    #       key: value
-    # data:
-    #   target: main.DataModuleFromConfig
-    #   params:
-    #      batch_size: int
-    #      wrap: bool
-    #      train:
-    #          target: path to train dataset
-    #          params:
-    #              key: value
-    #      validation:
-    #          target: path to validation dataset
-    #          params:
-    #              key: value
-    #      test:
-    #          target: path to test dataset
-    #          params:
-    #              key: value
-    # lightning: (optional, has sane defaults and can be specified on cmdline)
-    #   trainer:
-    #       additional arguments to trainer
-    #   logger:
-    #       logger to instantiate
-    #   modelcheckpoint:
-    #       modelcheckpoint to instantiate
-    #   callbacks:
-    #       callback1:
-    #           target: importpath
-    #           params:
-    #               key: value
-
-    now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
-
-    # add cwd for convenience and to make classes in this file available when
-    # running as `python main.py`
-    # (in particular `main.DataModuleFromConfig`)
-    sys.path.append(os.getcwd())
-
-    parser = get_parser()
-    parser = Trainer.add_argparse_args(parser)
-
-    opt, unknown = parser.parse_known_args()
-    if opt.name and opt.resume:
-        raise ValueError(
-            "-n/--name and -r/--resume cannot be specified both."
-            "If you want to resume training in a new log folder, "
-            "use -n/--name in combination with --resume_from_checkpoint"
-        )
-    if opt.resume:
-        if not os.path.exists(opt.resume):
-            raise ValueError("Cannot find {}".format(opt.resume))
-        if os.path.isfile(opt.resume):
-            paths = opt.resume.split("/")
-            # idx = len(paths)-paths[::-1].index("logs")+1
-            # logdir = "/".join(paths[:idx])
-            logdir = "/".join(paths[:-2])
-            ckpt = opt.resume
-        else:
-            assert os.path.isdir(opt.resume), opt.resume
-            logdir = opt.resume.rstrip("/")
-            ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
-
-        opt.resume_from_checkpoint = ckpt
-        base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml")))
-        opt.base = base_configs + opt.base
-        _tmp = logdir.split("/")
-        nowname = _tmp[-1]
-    else:
-        if opt.name:
-            name = "_" + opt.name
-        elif opt.base:
-            cfg_fname = os.path.split(opt.base[0])[-1]
-            cfg_name = os.path.splitext(cfg_fname)[0]
-            name = "_" + cfg_name
-        else:
-            name = ""
-        nowname = now + name + opt.postfix
-        logdir = os.path.join(opt.logdir, nowname)
-
-    ckptdir = os.path.join(logdir, "checkpoints")
-    cfgdir = os.path.join(logdir, "configs")
-    seed_everything(opt.seed)
-
-    try:
-        # init and save configs
-        configs = [OmegaConf.load(cfg) for cfg in opt.base]
-        cli = OmegaConf.from_dotlist(unknown)
-        config = OmegaConf.merge(*configs, cli)
-        lightning_config = config.pop("lightning", OmegaConf.create())
-        # merge trainer cli with config
-        trainer_config = lightning_config.get("trainer", OmegaConf.create())
-        # default to ddp
-        trainer_config["accelerator"] = "ddp"
-        for k in nondefault_trainer_args(opt):
-            trainer_config[k] = getattr(opt, k)
-        if not "gpus" in trainer_config:
-            del trainer_config["accelerator"]
-            cpu = True
-        else:
-            gpuinfo = trainer_config["gpus"]
-            print(f"Running on GPUs {gpuinfo}")
-            cpu = False
-        trainer_opt = argparse.Namespace(**trainer_config)
-        lightning_config.trainer = trainer_config
-
-        # model
-        model = instantiate_from_config(config.model)
-
-        # trainer and callbacks
-        trainer_kwargs = dict()
-
-        # default logger configs
-        default_logger_cfgs = {
-            "wandb": {
-                "target": "pytorch_lightning.loggers.WandbLogger",
-                "params": {
-                    "name": nowname,
-                    "save_dir": logdir,
-                    "offline": opt.debug,
-                    "id": nowname,
-                }
-            },
-            "testtube": {
-                "target": "pytorch_lightning.loggers.TestTubeLogger",
-                "params": {
-                    "name": "testtube",
-                    "save_dir": logdir,
-                }
-            },
-        }
-        default_logger_cfg = default_logger_cfgs["testtube"]
-        if "logger" in lightning_config:
-            logger_cfg = lightning_config.logger
-        else:
-            logger_cfg = OmegaConf.create()
-        logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg)
-        trainer_kwargs["logger"] = instantiate_from_config(logger_cfg)
-
-        # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to
-        # specify which metric is used to determine best models
-        default_modelckpt_cfg = {
-            "target": "pytorch_lightning.callbacks.ModelCheckpoint",
-            "params": {
-                "dirpath": ckptdir,
-                "filename": "{epoch:06}",
-                "verbose": True,
-                "save_last": True,
-            }
-        }
-        if hasattr(model, "monitor"):
-            print(f"Monitoring {model.monitor} as checkpoint metric.")
-            default_modelckpt_cfg["params"]["monitor"] = model.monitor
-            default_modelckpt_cfg["params"]["save_top_k"] = 3
-
-        if "modelcheckpoint" in lightning_config:
-            modelckpt_cfg = lightning_config.modelcheckpoint
-        else:
-            modelckpt_cfg = OmegaConf.create()
-        modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg)
-        print(f"Merged modelckpt-cfg: \n{modelckpt_cfg}")
-        if version.parse(pl.__version__) < version.parse('1.4.0'):
-            trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg)
-
-        # add callback which sets up log directory
-        default_callbacks_cfg = {
-            "setup_callback": {
-                "target": "main.SetupCallback",
-                "params": {
-                    "resume": opt.resume,
-                    "now": now,
-                    "logdir": logdir,
-                    "ckptdir": ckptdir,
-                    "cfgdir": cfgdir,
-                    "config": config,
-                    "lightning_config": lightning_config,
-                }
-            },
-            "image_logger": {
-                "target": "main.ImageLogger",
-                "params": {
-                    "batch_frequency": 750,
-                    "max_images": 4,
-                    "clamp": True
-                }
-            },
-            "learning_rate_logger": {
-                "target": "main.LearningRateMonitor",
-                "params": {
-                    "logging_interval": "step",
-                    # "log_momentum": True
-                }
-            },
-            "cuda_callback": {
-                "target": "main.CUDACallback"
-            },
-        }
-        if version.parse(pl.__version__) >= version.parse('1.4.0'):
-            default_callbacks_cfg.update({'checkpoint_callback': modelckpt_cfg})
-
-        if "callbacks" in lightning_config:
-            callbacks_cfg = lightning_config.callbacks
-        else:
-            callbacks_cfg = OmegaConf.create()
-
-        if 'metrics_over_trainsteps_checkpoint' in callbacks_cfg:
-            print(
-                'Caution: Saving checkpoints every n train steps without deleting. This might require some free space.')
-            default_metrics_over_trainsteps_ckpt_dict = {
-                'metrics_over_trainsteps_checkpoint':
-                    {"target": 'pytorch_lightning.callbacks.ModelCheckpoint',
-                     'params': {
-                         "dirpath": os.path.join(ckptdir, 'trainstep_checkpoints'),
-                         "filename": "{epoch:06}-{step:09}",
-                         "verbose": True,
-                         'save_top_k': -1,
-                         'every_n_train_steps': 10000,
-                         'save_weights_only': True
-                     }
-                     }
-            }
-            default_callbacks_cfg.update(default_metrics_over_trainsteps_ckpt_dict)
-
-        callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)
-        if 'ignore_keys_callback' in callbacks_cfg and hasattr(trainer_opt, 'resume_from_checkpoint'):
-            callbacks_cfg.ignore_keys_callback.params['ckpt_path'] = trainer_opt.resume_from_checkpoint
-        elif 'ignore_keys_callback' in callbacks_cfg:
-            del callbacks_cfg['ignore_keys_callback']
-
-        trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg]
-
-        trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
-        trainer.logdir = logdir  ###
-
-        # data
-        data = instantiate_from_config(config.data)
-        # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
-        # calling these ourselves should not be necessary but it is.
-        # lightning still takes care of proper multiprocessing though
-        data.prepare_data()
-        data.setup()
-        print("#### Data #####")
-        for k in data.datasets:
-            print(f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}")
-
-        # configure learning rate
-        bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate
-        if not cpu:
-            ngpu = len(lightning_config.trainer.gpus.strip(",").split(','))
-        else:
-            ngpu = 1
-        if 'accumulate_grad_batches' in lightning_config.trainer:
-            accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches
-        else:
-            accumulate_grad_batches = 1
-        print(f"accumulate_grad_batches = {accumulate_grad_batches}")
-        lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches
-        if opt.scale_lr:
-            model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr
-            print(
-                "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format(
-                    model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr))
-        else:
-            model.learning_rate = base_lr
-            print("++++ NOT USING LR SCALING ++++")
-            print(f"Setting learning rate to {model.learning_rate:.2e}")
-
-
-        # allow checkpointing via USR1
-        def melk(*args, **kwargs):
-            # run all checkpoint hooks
-            if trainer.global_rank == 0:
-                print("Summoning checkpoint.")
-                ckpt_path = os.path.join(ckptdir, "last.ckpt")
-                trainer.save_checkpoint(ckpt_path)
-
-
-        def divein(*args, **kwargs):
-            if trainer.global_rank == 0:
-                import pudb;
-                pudb.set_trace()
-
-
-        import signal
-
-        signal.signal(signal.SIGUSR1, melk)
-        signal.signal(signal.SIGUSR2, divein)
-
-        # run
-        if opt.train:
-            try:
-                trainer.fit(model, data)
-            except Exception:
-                melk()
-                raise
-        if not opt.no_test and not trainer.interrupted:
-            trainer.test(model, data)
-    except Exception:
-        if opt.debug and trainer.global_rank == 0:
-            try:
-                import pudb as debugger
-            except ImportError:
-                import pdb as debugger
-            debugger.post_mortem()
-        raise
-    finally:
-        # move newly created debug project to debug_runs
-        if opt.debug and not opt.resume and trainer.global_rank == 0:
-            dst, name = os.path.split(logdir)
-            dst = os.path.join(dst, "debug_runs", name)
-            os.makedirs(os.path.split(dst)[0], exist_ok=True)
-            os.rename(logdir, dst)
-        if
trainer.global_rank == 0: - print(trainer.profiler.summary()) diff --git a/spaces/multimodalart/mariogpt/mario_gpt/level.py b/spaces/multimodalart/mariogpt/mario_gpt/level.py deleted file mode 100644 index 406e3e53d3958de4dd95ef354b9cc99540b168c2..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/mariogpt/mario_gpt/level.py +++ /dev/null @@ -1,14 +0,0 @@ -FULL_LEVEL_STR_WITH_PATHS = """-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx--------xxx---------xxxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------xxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx------------------xxx---------xxx--------------------------------------------xxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-------------------------------------------------------------------------------------------------------------------xx----------------------------------------------xx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--x-------------------------------------------------------------------------------------------xx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--x------xx--x-------xx---x-------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx--x-------------------------------------------------------------------------------------------------------------------------------------------------xxxxx----xxxxxxxxxxxxxxxxxxxxxxxx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--x-------xxx------xx--x-------xx--x------xxxx-----------------xxxx-----------xx--x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx--------------xxxxxxxxxxxxxxxxxxxxxxxxxxxx-XXX-xxxxxxxxxxxxxxxxxxxxxxxxxxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx----------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------xxxxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SSSSSSSSSSSSSSSSSSSSSSSSS?SSSSSSSSSSSSS----------xxxSS????SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS----SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS-x---SSSSSSSSSSSSSSSS-------------SSS--[]SSSSSSSSSSSSSSS---------------------------------S-------------------------xx-x--------------------------------------------xx-x-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------?---------------------------------------------------------------------------------------------S-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---xx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx------xxxx-----xx----x--------------xxxx--?--------------------------------------------------------------------xx-x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------E-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx----x----xx----x-----xx-----x------------------------------------------------------------------------------------------------------------------------------------------------------------xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---x------------SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS?SSSSSSSSSSSSSSSSSSSSSSSSS-----SSSSSSSSS--xx--x-------SSSSSSSSSSSSSSSSSSSSSS-----------------------------------------------------------------------------------------------------------------------------------
---------------------xxxx--------------------------------------------------------XXXX------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxx--xxx------------------SS-------------------XXXX------------------------------------------------------------------------------------------------------------------------------------------xx----x------------------------------------------------------------------------S-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ooo------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx-----xxxx-----xx------------------------------------------------xx----x-----xx--x----xx----x-----xx----x----xx---x-----xx--------xx---x---------xx----x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------E---------------------------xx--x--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS?SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS------------------------------------------------oo-------xxx------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx-xxx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx-----------------E--------------------------------------------------oo----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS--------xxSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS-x-----SSSS---xxSSSSSSSSSSSSSSSSSSSSSSSSSSS-----SSSSSSSSSSSSSSSSSSSSSSSSSSS------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------oo-------xxx---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---------------------------------------------
-----------------------------------------------------------------------------------------------------------------xx-x-------------------------------------------------------------------------------------------------------------------------------------------------------------------oo------------------------xx----x------------------------ooo-------------------------------------------------------------------------------E----------------------------------------------------------------------------------------------------------------------------------------------?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SSSSS-------xxxxSS-------------------------------------------SSSS------------------------------x--SSSS------------------------------[]SSSSSSSSSSSSSSS-------------------XXXXXXXXXXXX--S------------------------xx---x-----xxx----------------------------------xx---x---------------------------xxxxxxx--------------xx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---<>-x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx-----x------------------------------------------------------------------------------------------------------------------------------------------------E---------------------------xx<>-x----xx--xxx--xx------x-ooxxx------xx---x---xxx------xx--------------------------------------------------xxxxxxx---x--xxx-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------E-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx------x--xx------x---xx-------x-xxx-------------------------------------------------------------------------------------------------------------------xxx--------------------------------xxXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----x-------------[]----[]------------------------------------------------SSSSSSSSSSSSSSSSSSSSSSSSSS------SSSSSSSSSSSSSSSS----
---[]---------------xx---X-------------------------[]--------------------------------------------------------------------------------------------------------------------------------------------------xxxxx--xx---x--------------------------------------------------------[]---------xxxxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx----xxx--x-----------------[]--------------------[]-------------------------------------------------------------------------------------------------------------------------------------xxxxxxx------x-------------xxx-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------o-o---------------------------?---------------------xxx-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--x---xx---x-?-xx-x------xxx----xxx------------------------------xx------x---xx----x--xx------x---xx------x--xx-----x---xx-x------xx-----xx------xx------x-----xxx------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx----xx------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SS--SSSSSS--SSSS------SSSS------------------------------------------------------------------------------------------------------------xxxxxxxx--x---------------------------------------------------------------------------------------------xxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx--XX-x---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---x------------------------------------------------------------?-----XXXX-------------------o---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------SSSSSSSSSSSSSSSSSSSSSSSSSSSS----------------xx------------------------------------------------x----------xx---------------------------------SSSSSSSSSSSSSSSS-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxxxxx--x-------------------------------------------------
--------------------------------------------xxx-------------------------------------------------------------------------------------------xx-x--------------------------------------------------------------------------------------------------------------o---------------------------------------xx----xx---x----xxxxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx-----xxx--xQ-----xxx---------------------XXXX-----------------------------------------------------------------oo--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx---------- --------------------------------------------------oooooooo--E--xxxxxxx---------E-----------------------------------------------------------------------------------------xxxx-------------------------------------------SSSSS-------xSSS---------------------------------------------SSSS-------------------------------x-SSSS------------------------------[]SSSSSSSSSSSSSSS------------------XX-------------S------------------xxxxxxx-----xxxxxx--x--------------------------xxxxxxxx-----x--------------------xxxxxxx---x--x------------xx-xxxx------------xxx-----------------------------------------------------------xxxx-----------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxx---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xx---[]--x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxx-----------------------------------------------------------------------------xx-------x-----------xxxxx----------------------------------------------------------------------------------------------------------------------------------------------------------xx-[]--x--xx------xxx--------xxxx--x----xx-----xxxx--x----xx-x--------xxx-------------------------------------xx----x-----xxx--x--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxx----xxx--x-------------------------------------------------------------E---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------oo---------------------------------o-o---------ooooo-----XXX----ooo------------------------------------------------------------------------------------------------------------------------------------------------------------xxx--------xxx--------xxxx---------xx--x----------------------------------------------------------------------------------------------------------------xxx--x---
-----------E---------------xx-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----x------------[]----[]-------E----------------xxx---------------------SSSSSSSSSSSSSSSSSSSSSSSSSS------SSSSSSSSSSSSSSSS-------[]--------------xx------------------------------[]-------------------------------------------------------------------------------------------------------------------------------------------------xx----xxx-----xxxxxx--------------------------------------------------[]-------xxx----x--------------------------------------ooo------------------------------------------------?-----------------------------------------------------------------------------------------------------------ExxXXX----XXX---x--E-------------[]--------------------[]----------------------------------xxxxxxxxxxx---------------------------------------------------------------------------------------xx----x--------x---------xxxx--x-----------------------------SSS----SSSS---------------------E--------------------------------------------------------------------------------------------------------------------------E------------------------------------------------------------------------xxxxx---------------------------------------------xxxx--x---------------xxxxxxxxxxxxxxx--------------------------------------------------------------xxxx-----------------------------------------------------------------------------------------------------xxxxxxxxxxx----------E-E------------------xxxxx-------------------------xxx---xx----x-xx-----x-xx---x-xxxxx--xxxxx--x---------------------------xxxE-------x-xx------xxx--------xxxx--------xxx-------x-xx---xxxxxxx------E-xxxxxxx--------xxxxxx--x------------------------------------------------------------------------------------------------------------------xxx------------------oo---------------------------xxxxx-o-o---------ooo-----------------------------------------------------------------------------------------------------------------------xx-----E-xxxxxx----------------------------------------------------------------------------------------------------------------------------------------------E--------------------------------------------------------------------------------------------------------xxxxx------------------------------------------------------------SS--SSSSSS--SSSS--E---SSSS------------------------------------------------------------XXX--------------------------------Eooo--------xxXXXXXXX---x---------oooo----------E------xxx---------oo--oo------------------------------------------xxx--x-----------------------xxxxxxx-------------------------------------------------------------------------------------------xxxxxx-------------------------------------------------------xx---XX--x-----------------------------------------------------------------------------------------------------------------------------------------------------E-------------------------xx-----x----xxxxx-----------------------------XX-----------------------------------------------XXX-------------------------oo-------------------------xxxxxxxxx------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxxxx--------------------------SSSSSSSSSSSSSSSSSSSSSSSSSSSS-----------XXX-xx--------------------------------------------------x----?---xx----------------------------------SSSSSSSSSSSSSSSS-------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------Eooo--------xxXXXXXXX---x---------oooo----------E------xxx-----xx--oo--oo------------------------------------------xxx--x-----------------------------------------------------------------------------------------xx---x-----------------------------------------------------------------------------xxxxx-------------------------------------------------------------xx---xx-xxxxx-----xxxxx----x----------------------------------------------------------------------------------------------------------------------xxxxx---------------------------------------xxxx-oxoooxx--xxx-----XXX-x-------------xxxx--------------------------------------------------------------------------------------------------------------------------------------------------------------------E--------------------------------------------------------------------------------xxxxx-------------------------------------------------------------------------------------------------xxx--------oo------------------------------------------xxxxxxxxxx----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxx--x--------- ---------XSSSSS---------------------------------SSSSSSSSSSSSSSxx-SSSSxSSSSSSSSSSSSS----------------------------------SSSSSS--------------xxxxx--------------------------xxXX-x------------------------------------------SSSSS-------x------------------------------------------------SSSS----E---------------------------xSSSS-------------xx--xxxxx--------[]SSSSSSSSSSSSSSS-----------------XXX-------------S---------------xxxx---XXX-----XXXXXXE--x------------------------xxXXXXXXX------xxxx--------XXX----xxXXXXXX---?---x----------xx--XXX-x-------xxxxx--x-XXX--------------------------------------Q-?-Q----------xx---xSSS-----------------------------------S----------------------------------SSSSSSSS--------------------------------------------------XXXX------------------------xxXXX-x--------------------------------------------------------------------------------------xx----------------------------------------------------------------------------------?-----------Sx---XX---x-------------------------------------------------------EXX-------------------------------------------------------------------------------?Q---------------------------------oooo--------------xxXXX-x-------------------------XXX----------------------E------------------xxxxxxxx------E--xxxxxxxxxxxx----xxxxxxxxxxxxxx---------E-------------------------------------------------------------------------------------------------XXXXXEXXXXX---------------------xxXX[]X--xxx-------XXX--------XXXX---x--xx------XXXX--ox--xx---x------xx--x-------xx----------------xxxx--SS--xxX---XX-----XXX---x----------------------------------------------------------------------------------------------------xxx---------------------------------------------E-E-E------------S--------xx---x--xxXS---x--------------------------------------------------------SSSSSSSSSS-------------------------------------------------------------------------------------------------------------------------------XX-----------------------------------------XXX-----------------------------------------------------------------------XXX-------------XXXXXXX----------------------E---E---------X--------S---------------------XXXX----------XXXXXXXXXXXXXXXX------------------------------------------------
------------------------xxXX--------XXX--------XXXX---------XX---x------------------XXXX----------------------------------------------------------------------------------------xxXX---x-----E-SSSSS-----------------xS-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX------x-----------[]----[]---QQQQQQ--------xxxxxxxx--x--------------------SSSSSSSSSSSSSSSSSSSSSSSSSS------SSSSSSSSSSSSSSSS-------[]------xxxxxxxxx-------------------------------[]----------------------------------------------------------------------xxxx------------------XX------------------------QQQ?QQQQQQQQ?QQQQ---------xxX-----x-------x----x-------------------------------------------------<>------xx<>-----x-------------------------------------ooo-------------------------------------xxxx----------------------------------------------------------------------------------------------------------------xxxxx---------------x---------------<>--------------------[]---------------------------------xx---x---XX-x------------SSSS-----------------------------------------------SSSSS-----SSSSS--------xX----E---------x-------xx--X---x-----S---------XQQ?QQX--------------------------SSSSSS----QQQQQQQ----------------E-----------------------------------------------SSSSSSS---------------E------E------------------------S--------------xxx---XX-------------------------------------------------xxXXX-x-------------------------------------------xxXXX---x-------------xxXXXXXXXXXXXXX-x------------------------------------------------------------xx---x---------xx------------------------------SSSSSSSSSSSSS-------------------SSS?SSS-------QQ?---------xx--QQQ?QQQ-x-------SSSSSSS-SS-xxxx-----xxxx--X-x-----------------------xx--x-xx------xx-------xx-----xx--XX--XXXXX---x-------------------------xx-<>--------xx-------XXX--------XXXX--------XXX--------xx----XXXXXXX--------XXXXXXX--------XXXXEX---x------------------------------------------------------------------oooo------------------------------xxxx--------xx--x-----------------oo--------------------------xx-XX-xo-o---------XXX-------------------------------------------------------------------------------------SSSSSSSS?S--------S----S--------xxx---------x----x-------SSSSSS----------XXXXXXXXXXXXXXXX--------------X---------------------------Q---------------------------------------------------------SSSSSSSS---SSSQ--------------?-----------SSS----SQQS------------------------------------------------------xxXX--x----------------------------------------------oooo-------SS--------SS---S----SS----------oooooo-----------------------------------------------------------------XXX-----------------XXXXX------xx------------x--------XXXX----------xxxxxxxx--xoo------------------------xxxE-----oo------------------xxXX---x---------------------xx-SSSS-x-------------------QQQQQ-----------SSSS--------SSSSS------SSSS---------------------------xx?SSS-x---------------------------------SSSSS---?-------xxx--xS---XX---x---------------------------------------------------------------------------xxxxS-------SSS------------------SSQSSS?SSS--------SSS------------------SQS--SQS------------------xx-------x--xxXXX-x---------------------------------------------------------------------------------o------o-o----XXX------XXXX---------------------xxxx---XXXX-x--------------------------Q--------------------------------------Q-Q-------------------------S-------------------------------------------------------QQQQ-------------------------------------------------------------xxXX--x-------------------------SSSSSSSSSSSSSSSSSSSSSSSSSSSS--QQ------xxxx-xSSS------------E-------------------
-----------------x--S??-xx-----------------------------------SSSSSSSSSSSSSSSS-----E--------------------------------------------------------------------------------------SSS------------------------------------------------------------------------------------------------XX-----------------------------XXXXX------xx------------x--------XXXX----------xxxxxxxx--xoo-xx-x-------------------xxxE-----oo------------------xxXX---x--------------------------------SS--------------------------------------------------xxxxx-----x--------------S------------------------------------------------------------xxXXX-x------------------------SSS-------E------------------------xx-x-xx--SSSSS-----SSSSS-----xxxx---------------------E--S---SSSSSSSSS------------SS---SS-------------SSS-----E---------------------------------E-xxXXX-x----------------------------XXX------xx-XX---x-xx----x-----------x-----XXoo--xxXX-x-------------------------------------XXX-----oo--XXXX-------xxxxxx-------XXXX--------------------------?SS-----------------------------------------------------SSSSSS----S----------------------------------------SSSSS-------------------xxxxx----xxXX--x-----------------------------------------------------------------------------------------------xx--x---------------------------------------------------xSSSSSSSS-x--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxXX---x-------- ---------X---------------------------------------------------xx-------x--------------------------------------------E--------------------xx-<>-x------------------------xxXXX--x--------------------------------xxxxx----SSSSS------xx-------------------------------------------------------<>----------------------------x---------------xx-xxx----x-------[]SSSSSSSSSSSSSSS----------------XXXX-------------S--------------xxXXX---------------------x-------------E--------xx--------------XXX-x-------------xx---------------x------xxxx--------x-----xxXXXX---x-------------------------------------------------------xx-----x--xxxxxxxxxx----------------------------------------------------------------------------------------------------------------------XXX----------------E-------xxXEXX--x------------------------------------------------------------------------------------xx-x---------------------------------S--------------------------oooo---------------------------xxxx---------x-----------------------------------------------------EXXX----------------------------------------------------------------E-----------------------------------------------------------xxxx---xx------x----------------------------------------------SSSSSSSSSSS----------xx?QQQQQQ---------SSSSSSSSSSSS----SSSSSSSSSSSSX-x------------E-----------------------?------------------------------------------------XX--------------------------------------------------xx---------x---------------------------xxx-------------o-xxx-----x----xx----x-----xx-x------xxxx----xx---x----xx-------------------x--------------------------------------------------------------------------------------------------xxX-x-------------------------------------------<><><>--------------------xx-----xxxEX-----x------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------XXXoo-------------------XXX----------------E-------------------------
--------------------------------------------------------------------------------------S--------------------------------------------------------------------------------------------------------------------------xxXXX--------XXX--------XXXX---------XX----x------------------------------------------------------------------------------------------------------------xxXXX----x-E--------------------------x--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-------x----------<>----[]----------------xxSSSSSSS---xxxxx--------------------------------------------xxxxx---------------------[]------xSSSSSSSS-X--------X--------------------[]----------X--EX-------------------------------------------------------x<>-x-----------------[]--------------------------------------[]---------xxXX----XX------XX-----x-------------------------------------------------------xxX[]------x---------------------------------------------------------------------------xx---x------E-----------------------------------------------------xxx--------------xxx------------------------------xx--x-----------------x------------------------------------[]--------------------------------xx-E--E---[]--x------------[]------------------xxxx-------------------------S-----------[]---------xx----<>----------x-----xx--------x----S---------X-----X----------------------------------------------------------SSSS----------------------------------------------------------------E------------------------------------------------xx--xx--------------------------------oooo---------------xx------x----------------------XXX--ooo-----------xx--------x-----------xx----------------xx---xxxxx-------------------------------------------------xx-----x-------xx-x--------------------------------------------------------------------------------------xx------------x-------------E--xx---x---xx--E--X--x---------------------xx-E--xx-------XX-------XX-----XX---------------x-----E-----------------xx--[]--------XX-----------------------------------------XX--------------------------------------------x---------------------xxxx----------------------------------------oooo-----------------------------xx---x------xx----x-------------------------------------------xx------x----------------------------SSSSSSSSSSSSSSSS---------xxx--------------ES--------------E-------------[]----------------------------xx-S--------<>-----x--------------------------[]-------[]--------------------------------------------------------------xxxx--------------------------------------------------------------------------------------------------------------------------------------------xxXXX---x----------------------------------o---------------------SS--------SS---S----SS--------------------------------xx-----------------------------------------------------------------------------xx------------oox--------------------xx-XXXXXX---x--xxxxx----------xxxxxxxxx--x-------------XXX--------xx-XX----x-------------------xx--------x----------------------------------------------------------------------------------------x-------x-----------------------------------------------xx--x-x----XX----x-------------------------------------------------------------------------xx--xxx--------------------------------------------------------xxxx------------------------------xxx---------xxxXXXX--x-------------------------E----------o----ooo------E--------XXXX--o---o---o------XXXX------------------------o----xxx-----------xxXXX---------x---------------------------------------------------------------------------------------------------------oooo----------oooo------------ooo
o-------------------------------------------------------------------------xxXXX---x-------------------------------------------------------------xx---xx--------------<>-------------------------------------x----xxXXX---------------------------------S-------------------<>-------------------------------------------xx------------------------------------------------------------------------------------------------------------E-------------------------------XX---------------------------------------xx------------oox--------------------xx-XXXXXX---x-xx---x----------xxxxxxxxx--x-------------XX---------xx-XX----x----------------------------------------------------------------------------------xxXSSS------x----------------------------xxx----------------xxxxx---------------------xx-XXX--x--------------------------------<>-----------------------xx---xx------------------------x--x-----xxxx----------<>------------------------------------------------------<>----------------------------------xxXXXX--x-----------------------------------xx--------xx-----XX-----------x---------xx-----x-------------------------------------------XX-----------xxxx-XXX-x-xxxxx------------------------------------------------------------------------------------------------------------------------------------------------------------SS--xx----x--xxXXX---x---------------------------------------------------------------------------------------------xx----x--------------------------------------------------x----------x------------------------------------------------------------------------------------------------------------------xxx---------------------------------------------------------------------------------xx-XX----x------- ---------X--------------------------------------------------xx---------x--------------------------------------------xxxxx--------------xx--[]--x----------------------xxXXXX---x--------------------------xxxxxx----x---SSSSS-----xxS-------------------------------------------------------[]------------E----------------x-------------xx--XXX-----x------[]SSSSSSSSSSSSSSS---------------XXXXX-------------S-------------xx--------------------------x--------------------xx--------------------x-----------xx-----------------x-xxxxxXXX---------x---xx---------x----------------------------------------------E---E--xx-------xxxXXXXXXXX-x-----------------------------------------------------------------------------------------------------------------------------------------------xxEXXXX---x-----------------------------------------------xx---------------------------------xx---x----oooooo----------------------------------------------------------------------xxxxx----xx--x----------x--------------------------------------E------------EXXXX-------------------------------------------------E------------------------------------------------------------xxx-------E--xx---x-xx--------x---------------------xxxxx---------------------------------------xx---------------------------------------------X--x---E-------------------------------E-----------------------------------------------XXX--------------------xxxxx------------------------xx--------XXX---------------------------XXX---------------XXX------x--xx------x---xx---x----xx---x--xx-----x--xx----ooooooo----------x------------------------------------------------------------------------------------------------xxXX--x--------------xxxxx-----------------------[][][]-------------------xx-------xXXX------x--------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------X-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxXXXX--------XXX--------XXXX---------XX-----x-------------------------------------------------------------------------------------------------------xxxxxXXXX-----x--------------------------xx--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--------x---------------[]---------------xx-SSSSSSS--SSSSS-x--------------------------xxxxx-------xxxxxx----x--------------------[]------xSSSSSSS--------------------------------<>----------X---X-------------------------------------------------------x[]--x----------------[]--------------------------------------[]--------xxXXXXX------------------x------------------------------------------------E-----xXX[]-------x-----------xxxxxx-----oooo---------xxxx-------xxx------------------xxxx--xx-----x-xxxxx-----xx------------oooo----oooo-----------------xxxxxx--x----------xxxx--x-----xxxx-xxxxx-----oooo----xx---X------------------x--------------E--------------------[]------------------------------xxx---------[]---x-----------[]-----------------xx---xxx---------------------SS-----------[]---------x-----[]-----------xxxxxxE---------x-------------X-----X------X-------------------------------------------------------------------------------------------------------------------xxxxxxxxxxxxxxx------------------------?------------xx---X-x-------------------------------oxxo--------------xx--------x------------xxxx----------ooo----------xx----------xx--------xx-----------------E-xxxx----x--------------E------------------------------xxxx-------x-----xx---xxx------------------------------xxxxxxx-----<>--------------------------------------xx------E-------x--------------xx-----x-xx---XX-X---x-------------------xx-----XX-----------------------------------------x-xxxxx------------xxxxx---[]------------------------oo------------------------------------------------------------------------x-------------------xx---x---------------------------------xxxxx-------E-------------------------xx-----x----xx------xxxxxx-----xxx-----------------------xxxxxxx--------x----------------------------------------------------xB-x------S------------QQQ------<>-------------[]---------------------------xx-----------[]------xxxx-------xxx------------[]-------[]-------------X-----------------------------------------------xx---x---------------------------------E------------------------------------------------------------------------xxx-----------------------------xxXXXX----x-----------------------------xxxxx--------S-SSSS-S-----SS--------SS---S----SS----------SSSSSS---------------xx-x-xxx-------------------------------------------------------------------xxxxxx----------------x------------------xx------------xxx----x--------xxXXXXXXXX---x---xxx----------------xxXXXX-----x-----------------xx----------x----------------------------------------------------------------------------------xxxxxx-------Ex---------------------------------------------xx----xx----XX-----x-----------------------------------------------------------------------xx------x-xxx--------------------------------------------------xxXX-x-----------------------o-----xXX----------xEXXXX---x--------------------XXXXXX-------------------XXXXXX----xxxx----XXX-XXX-XXX-----------xxxxxx--
xxx----------XXX--xx--x---------xx--------------xx-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------xxXXXX----x------------------------------xxx-----------------xxx-xxx--xx-----xS-------------[]--------------------------------------x--xx-------------------------------------S-------------------[]------------------------------------------xx-x----------------------------------------------------------------------------------------------------------<>---------------------------xxx-XX----------------------------------xxxxxx----------------x------------------xx------------xx-----x--------xxXXXXXXXX---x---xxx----------------xxXXXX-----x--------------------------------------o-------------------------------oo--------xxXXX---------x------------------------xxxx--xx-------------xxXSS-xo------------------xx--XXX---x-------------------xxxxxxxxx---[]-------------xxxxxxxxxxx----<>-----------------------<>---x---xx---x---------[]------------------------------------------------------[]------xxxxx----xxxxx-------------xxXXXXX---x---------------------------------xx---------XX-------------------x--xxxxxxx-------x----------------------------------------------xxxxx---xxXXX-oo---xx----x------------------------------------------------------------------B-------------------------------------------------------------------------------------------xxX-----xxxXXXX----x-------------------------------------------------------------------------xxxxx-------------xx------x-------------------------------------------------x-----------x-----------------------------------------------------------------------------xxxxxx-----------------------------xxX-x---------------------------------------------------------------------------xxxxxxX-XX-----x------ 
--------EX--E-X---------------xxxx-?-----------xxxxxxxxxxxxxx-----------x------E------X---------------------------xxx----xx----------xxx---[]---x--------------------xxXXXXX----x------------------------xxXXXXX-----xx-SSSSS---xxx-------XXXXX---------------------------XXX----------E----[]-----------<>----------------oxxx------xxxxx------------x------]SSSSSSSSSSSSSSS--------------XXXXXX-------------S---------xxxxx----------------------------x-----------xx-xxxxxx----------------o-o---xx--------xx-------------------xxoox--------------x-xx-----------x--------------------------------------xxx----xxxxxxxx---------xXXXXXXXX---x----xx-----------xxxxxxxxxx--------------------xxx----------xxxxxxx------------------------xxxxx-xxxxx-Exxxxx--xxx---------------------------xxEXXXXX----x----------E------------------------ooo-------xx-x---------oooo---------------xxxxx-----x-----xx----------------------------------------ooo--------------------------xxxx----x--xx---S-----------x-------oo-----------------------------XXX------X--XXXXX--------------------------------------xx------------------------xx-------xxx--QQ-----------------------xxxxxxxx--xxxx-xxx-xx-----xx----------x--E--------------xxxx----x-------------------------------------xx----------------------------------------------X---x-------------------------------------E-------------------------------------------XXXX---------------xxxxxx----xxxxxxxx-------------xxxx-------------------------------------------------------------------xxx--------xxxx-----x--xx-----xxx------oxxx-----ooooooo-----------x---------------------------S----------------xxxx--------------------------xx------------------xxXXX---x----------xxxx-<>-x------------------Q?--QQQQQQ--------------xxxxxx--------XXXXS------x------------------B--------------------------------------------------X----------------------------------------<>-------------------------------------------------------E---------------X---X--------------------xxx--------------------------------------------------------ooooo-------------------------------------------------------------------------------------------------------SSS----------------------------------------------------------------------------------------------------------------------xxXXXXX--------XXX--------XXXX---------XX------x-----------------------------------------------------------------------------------------------------xx--xXXXXX------x------------------------xxS--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---------x--------------[]--xxxxxxxxxxxxxx--SSSSSSS--SSSSS--xxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxxxxxE--<>-----x------xxxxxxx------[]---xxxxSSSSSS---------------------------------------X---X-X---X--------X-EE----X-------------ooo----------------E----xx[]---x----xxxx-------[]--------------------------------------[]-------xxXXXX---------------------xxxxx-----------------------------------------------xxxXX[]--------x--xxx---xxx--x--x----oooo--------xx---xxxxxxxx--x----------------xx---xxx-------xx----x---xx-x-----------oooo----oooo------------S---xx--XXX---x--------xx--E---x---xx---xx----xx---oooo-xxxx------------------------x----------xxxx--------------------[]------xxxx-------------------xx-E---------[]----x----------[]----------------xx-----x-x-------------------SSS-------?---[]-------xxx-----[]------------x---<>----------x--SSS-------XE-E--X------XEE-------X----------------------E--------------X-E--E------------------------------------------------E--E---------xxXSSSSSSSSSSSS-x-------------xxxx------------------xx-E--X--x----------xxx-----------------xx-xxxxxxxxxxxxxxxx------
----xooo-------xx---xxxxxxxxxxxx---------xxx-----------E-xxxxxxxxx--------------------XXXX-----x---xxx------------------------------------xxx-<>--------xxxxxx----<>-x---Q--Q--?-----------------xxxSSSSS-x----[]----------------------------SSS-----xxx-----SSSSSSS----x-----------xxx-------xx----XX-X----x------------xxxxxxx--------------------------------------------------xx----x--------xxxx--xE---[]------------------------SS-------------------------------------------------------------------------x------------xxx--xx-----x-----------------------------xxxx----xxxx---------------------------xxx-------xxxxx--------x----xxxxxx--x------------------xxxxx----x----------xx---------------xxx--------------------xxxx-------xxb------------------------------[]-------------[]--------------xxxx--------xx------------[]-------xB-xxxxxxxx--x-----------[]-------[]-------xxx--------------------------------------------------xx-----xxx-------xxxx----S-----------------------------------------------------------------------xxxxxx--------xxx--xx--------------------------xxXXXXX-----x------------------------xxxxx--S-x-------SoS--SoS-----SS----ooooSS---So??-SS---E------SSSSSS--------------xx---xx--x---------------------xxxxx-----SSSSS??--------------------xxxxxxxxx---XXXXX--------------x--------------xxxx-------------XXX-----xxxx---xx-------------xxxx--xxxxx----------xx-XXXX------x---------------xxX-----------x--------xxxx------------------------xxxx-------------------------xxxx-----------xx---xE------<>-x-------------------------------------------xx------x----XX------x-------------Q--?----------------E-----------------------------------xx----E-EExx--xx-------------E------------------------SSS------xxXXX--x-----------------------E----xXX----------XXXXXX----x-------------------xxxxx----XXX----------------------xx---x------------------------xxXXXX-xxx--xxx-----------xx----x-------xx------------XXXX-x---------------------------------------oo-----------------------------------------------------------------------xxxx------------xxxx---------------------------------------------------------------------------xxXXXXX-----x----------------------xxxxxxxx?-xxxxxxxxxxxxxxxxxx--xx--xxx------S-S------------[]-------S--------------------E----------xxx---------------------E-----------------?oooooooooo--------[]-----------------------------------------xx---x-------------------------------------xxxxxxx---------------------------------------------------xxxxxx----[]--------------------------xxXx-XX--------------------------xxxxxxxxx---XXXXX--------------x--------------xxxx-------------XX------xxx----xx-------------xxxx--xxxxx----------xx-XXXX------x---------------------------xxxxxx-------------------------------xxxxx---------xxXXXS----------x---ooo----------------xxXSS-SQ-x-----------xxXXX---x-----------------xxXX-XXX----x----------------xxx---x----xx-XXX----------xxx----x---<>----[]-----------------------[]----x-xx-----xxx------[]------------xxxx---------SS--------------xxxxx--------[]----xxx----x-xxx----x----------xxxXXXXXX----x---------------------------xxxxxx--------------------------------xxx----x---------x----XXXX------ooooxxxxxxxxxx---------------xxx----xxxx---------XXX-----x-----------------------------------------------------------------b-------------------E-------------------------xxxxx--------------------E-------------SS----xxXX------xXXXXX-----x-----------------------------xxxx--------------------------o----------xxx--S-x------o---xxx--o-----xxx---------------------------------------------xx------------x-----------------SSSSSSSS---
----------------------------oo--------------xxxxx--<>-x---------------------------xxXX--xo------o----o-------------------oo---oo----------------------------------xx---x-X-XX------x----- ---------XSS?SX---QQ?QQ------xx<>-x-----------xx--?SSSSSSSSSS--X?--------x-----QQ?QQQQX--------------------------xx<>---<>-x--------xx<>--QQQ?---x------------------xxXXXXXX-----x---------------xxxx---xxXXXXXX----<>-x---SS--xx<>------XXXXXX-------------------XXX-----------------<>---?SSS--E-------[]---------------S?SS-x----xx-XXX-------------xxxxx-]SSSSSSSSSSSSSSS--------E----XXXXXXX-------------S--------xx-XXX--------XXX------------------x---------xx-xx--XXX----------------o-o-XXX-x------xxXX-------XXXEX-----XXXEXX---------------xx-------------x-------------------xxxxxxxxxxxxxxxxxxxE-xxxxxXXXXXXX---------XXXX---------x--xx-x-xxxxx---xx-XXXXXXX-x----xxxx----------xx--x-------xxxSSSSS-x----QQQQ-----------xxxxx----xx----xxx----xxx--x-------------------------xxXXXXXXX-----x-----------xxxx------------------------xxxxxx---xxx---------------E--------xx--XX------x---xx-x------------------------------------------------------------------xxx--S-----xxx-----------------x---------xxx------------------------------------XXXXXX----------xxxxxxxxxxxxxxxxxxxxxx-----xx-xxxxxxxxxxxxx----------xx-xxxxxxxx--x-------------------------xxXXXXXXX---x--xx--xx-----XXXX----------x------oooooooooxxXXX-----x----------------------E-------xxxxxxx--------------------------E--------------------X--E-x-----------------------------------<>--------E--EE-----------------------------XXXXX--------------xxXXXXXEXEEXXXXXXX-x-----------xxXEXXX-----------------------------------------------------------------XXXEXX-----XXXX------xxx------XXX------XXXX-----------E------------x---------------------------------xxx------xx<>-x------------------xxxx--xx-x----------------xxXXXX----x--------xxQQ?-[]--x------------------------------------Q--xxx---<>-------XXXXX--------x---------B-------b------------------------------------xxx---SSSSSSSSX----------------------------------------[]----?QQQQQQQ?-------------------------------------QQQQQQQ-----------X--------------------------xx--x---------------------------XXX-------XXXXX------------XXXXXXX-------?--------------XXXX---XXX-------XXX----------------------------------------------X-------------SSS-----QQQQ-----------------xx--------------------------------------xxx------------------------------------------------xxXXXXXXS-------XXX--------XXXX---------XX-------x------------------------------------------------------------ooo------------------------------------xx---XXXXXX-------x----SSS----------------x----XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----------x-------------[]-xxSSSSSSSSSSSSS--SSSSSSS--SSSSS?SS?SSSSS???????SSSSSSSSSSSS?----SSSSSSSSS--[]------x--xxxx-<><>-x-----[]--xx--?SSSSS---------------------------<>-----------XSSSX-XSS?X--------XSSSSSSSX-------------ooo--------------------xxS[]----x--xx---xx-----<>SSSSSS--------------------SSSSSSSSSSS-<>------xxXXXXX----------------------xXX-x-------------------------xxxx-----------xxxx-xx<>XX[]---------xxxS-x-xxXX--X---x-------------Exx---XXXXXXEXX---x--------E-----xx----XXX-------XX-----x-xx---x----------E-------------xxxxx--------xx----------x-xxxxxxx-XXX----x-xx---XXX---XX-x------xxXXX-------------------------x--------xx<>-x-QQQQQQ?QQQQQQQ?Q--<>-----xxX--x-----------------xx------------[]-----xx--------[]----------xxxxxxx------S--xS----------------SSSS-----------<>------xx<>-----[]------------?---[]-----------x-SSS-------XQQ-QQX------XSSSSSSSSSX--------xxxx------QQQQQQQQ---------
--XS?SSSSS---X-----------------------------------------SSSSSSS------xxXX--------------x--------QQ-xx-X-x----xxx---------xx--X--X---x--------xxS-x---------------xx--XXXXXXXXXXXXXXXX----------oxoo------xx---XXXXXXXXXXXX-x-------xxXXXX-----XXXXXXXXXXXXXXXX-----------------------------xxxx--x------------------------QQQQQQQxxxxx-?-[]-------SSSSSS?----[]--x--------------S---------xxxSS-------x--QQQQQQ-----------SSSSSSS------xxxx----xx<>--------------E--x---------xx<>-------<>----XX-X-----x----------xx-XXXXX-------o------------------------------------------XX-----x------xxXXX--<>---[]----------------------------------------------XX----------------------------------------------------x--------xxxx--xxx-------x---oooooooooo--------------xxXXX-----x--xxx-----------------------xxXX-------XXXXX--------X---XXXXXXX---x---?------------xxXXXX---XX---------XX-x-------------xx--x-----------SSSSSS-xx--xxx----xxXXX-xS-----------------------QQQQQQQQQQ--------[]-------------xx<>-x------xx-------X-----XX-------XXXXESSSSSS---x----------[]-------[]------xx<-x--------------------------Q---S?SQS------------xxx------<>-x-----xx<>-x----------------S?S--------------S-----SS----Q--Q--Q-----S----------SS----xxX--X-x------xxXX--X-x----------SSQS--------xxxxXXXXXX------x-----------QQQQQ-----xxxXXXX-----xx-----SSS--SSS-----SSSS--SSSSSS---SSS--SS--SSSS----------------------xxx----<>---x------xxxx---------xxXXX-xxxx---------------------------xxXXXXXXXX-----------------------xxxxxx-------xxXXX---------------------XXX-x-xx--------------XXXX--XXXX-x--------xxXXXXXX-------x-------S?S---xxEX---S-----X--x------xx<>-x--E?QQQQ----------S---xx<>-x-QQQQ--QQQ-------------xx<>-x---------xx----<>---E--[]--x------------------xxx-----S---E---------xxx-------SS---XX-------x---------Q-E------ESSS----E----<>------------------E--S------------xxX--------XX-XX-x-----------<>------SSSSSSSSSSS---------------xxXXXX---x---SQS--S?S--------SSSSS--xxXX---------XXXXXXX-----x-------------xxxxxx----x---------------XXXX-----oooxx-----x-------------------xx-xx------XXX-XXX-x-xxxxx---xx------xxxxxxxx-------------------x----------------xxxx-?---------------o--o-------------------Q-Q-----------------------QQQQ--------xxx---------xx<>-x----------xx<>-x----------QQ?SSQQQ----------------------------------xxx------------------xxXXXXXX---S--x--------------------xx-SSSSSS??SSSSSSSSSSSSSSSSSS--QQ--QQQ---------S------E--SS[]SS---E----E----------------<>------XXX-SSSS----------------E--<>-----------------??SSSSSSSSS----XX--[]---------------------------------------xxx-----xxx------------------------------E--xxXXXXX-x-------------------xxx---------------------------xxXSSS-x---XX-------------------------xxXX--XX-------------------------xxXXXXXXXX-----------------------xxxx---------xxXXX---------------------XX-x--xx--------------XXXX--XXXX-x--------xxXXXXXX-------x--------QQ-------------xxxxXXXX-xSSS------------------ooo-----xxXXX-x-------xx-XXXX-----------xxx--------S---------xxXXX------x---------xxXXXX-SS-x---------------xxXXX-XXX-----x-------SSS----xx<>---S---<>-x------------xx<>---SS---[]----XXX----------------ES----[]-----xx------<>-x----SXXS----------xx--xxx-xxx---------SSS---xxxxx-XX-xSSS----XXX--xx<>-----xx<>-----xx-------xx<>XXXXXX-----x-----------------XXXX----xxXXXXX-----------------------------XXXXXX----XX-----XX--xxxxxx-----------xx--XXXXXX-x-------------xxXX---XXXXX------------------x---------------------------------------QQQQ-------------------SSXSS-------xxxxx---SSSSSSS---xxxx------------xxx----x------------------SSSSS-xxx-----------xxXXX------XXXXXX------x---
------------------------xx<>-x-----------------------------------xx<>-----xx-------xx<>--------<>-x-------------------------------xxxx--------xxX----S----X---x------------------------------------------------------xxxx-----------xxx--<>--[]--x-------------------------xxXXX---x------------------xxxxx-------xx--------------------------xxxx---------xx----X-X-XX-------x---- ----------------------------xx-[]--x---------xx---------------XX----------x-------------------------------------xx-[]---[]--x------xx-[]----------x-------xxxx-----xxXXXXXXX------x------------Exx---x-xxXXXXXXX----[]--x--SS-xx-[]-----XXXXXXX-----------XXX-------------------------[]--------<>-------[]--<>-----E-----SSS---x--xx-----------SSS-S--SSSSSSSSSSSSSSSSSSSSSS---E---<>---XXXXXXXX-----<>------S-------xx-----------------------------------xxxxxx--xx--XX------------------------------x----xx----------------------------------------XXX--------------x-----------------xxXXXXXXXXXXXXXXXXXXEXXXXXXX--------------XXXXX----------xxx---xx----x-xx-----------x--xx---xxxxx----xx----x-----xx<>-------x-----------------xx-XXX---XXX----XXX----XXX---x------------E----------xxXXXXXXXX------x---------xx<>-x----------------------xx---x----<>-x------------------E---xx---XXX------x-xx---x-----------E------xxxxxx----------------------------------------xx<>---------x-------------------x-------xxX-x---xxxx---------------------------XXXXXXX---------xxXXXXXXXXXXXXXXXXXXXX-x---xx--XXXXXXXXXXXX-x--------xx--XXXXXXXX---x--------------xxx------xx----------XX--XX--XX----------------XX--x-------------xx----------x----------------SSSSSS?SSSS-xx--QQQ?---S---------------------<>------S------------?X-<>--x-----------------?QQ--------------[]-------<>--<>---------E------------------XXXXXX------------xxx-------------------x---------xx-----------------------------E-----E--------------------------------------------E-----------XXX-----E-----------------------<>-------------x---------------xxxx-----xxx----xxX-x----xx-[]--x----QQQQQQ?-----xx---xxx---x--------------xxXXXXX-----x------xx-----[]---x------------------------------xxxx---xx<>---[]------XXXXXX---------x--------b--?-B------?QQ-----------------------------xxX-x---------------------------------------------------[]-------------------------------------XXXXXXXXXXXXX------------------X-------------------------xx----x------------------------------------------------------------------------------------------------------------------------------------------------------------------SSS-------------------------xx-x-------------------------------------xB-x-----------------E----------------------------xxXXXXXXX--------XXX--------XXXX---------XX--------x-----xxx-----------------------------------------------------------------------------------------xx----XXXXXX--------x---------------------xx----XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----------x------------<>xx-SSSSSSSSSSSSS-------------------SSSSSSSSSSSSS----------------------------[]-------xxxXXX-[][]--x----<>-xx------------------------------------[]----S-S-S----------------------------------------------------------oo-----xx--[]-----xxx----X-x---------------------------------------------------xxXXXXXX----------------------XXX--x-----------------------xx---xx--------xx---xxX[]XX[]----------x----xxXXX--X----xxxxxxxxxxxxxxx----X-------X----xxxxxxxxxxxxxxx-----------------------xx-----xxxxxxxxxxxxxxxxxxxxxxxxx----x------xx------------xx----x----------xx--------------xxxxxxx-------------------------------------xx-[]--x------------------------xx-X---x---------------xx-------------<>-----B-x-
------<>---------xxE---x-----------x--------------------------------------xx-[]-----[]----------------[]------------xSSS--------------------------------------xx---x--------------------xxx---------EE-X--------------------------------Q?QQQ----------------xxXXX----oooooo-----x---------xx--X--x--xxX-x-------xxE--X--X----x---xxxxx----x-------xxxxxxxx-------------------------------xxxxxxxxx------------------x-----xx------------------------------------------------------XXXXX---x-----------------------------xx--<>---[]------------------[]---x-----------E----------xx<>----------x------------------------------xx---x--xx-[]-------X------X---x---E---xx-[]-------[]----XX-X------x--------xx--------------o--XX----------------------------------------------xE---xx------[]---[]-----------------------------------------------------------------------------------------------------x------xxXXX--XXX--------x--oooooooooo-------------xx--------XX--XX-x-xxxxx---------------xx-X------------------------------------x-----xxxx-----xx--------XXX------------x-----------xx----x----------------xx------x--xxXXXX--x-----xxx---------------------------------<>------------xx-[]--x----xxE---------------------XXXXXX----------x---------<>-------<>-----xx-[]-x------------------------------------xxxx-----xx<>------[]--x---xx-[]--x------------------------------------------------------------------------xxXX--XX-x----xxXXX--XX-x---xxxx------------xxx--XXXXXXX-------x------------------xxxXXXXXXXX--XX-xx------------------SS-------------------EE------------------------xx<>----[]----xxx--xxSS-x-------xxEXXX-XXX-x-------------------------xx--------------------------------XXXX?-x-----xx------------------------------xx---------------------------x------xx-XXXXXX--------x-----------xxXXX---------X---x----xx-[]--x--------------------xx-[]--x---------------------xx-[]--x------xxx-----[]--<>--[]---x----------------xxX-x-------------E---xx<>------------XX--------x----------E-------------<>-----------------------<>---------E----xxXXX-------XX-XX--x-------------------------------------------xxXXXXX----x---------------E--E-----xxXXX-------EXXXXXXXX------x-----------xxXXXXX-----x-------------xxxxxxxxxxxxxx-------x-----------------xx-xx----------------xx----xxxx-------XXXXXXXXXXXX----------------x--------------xx<>-x----------------------------------------------------------------------------xxX-x-------xx-[]--x--------xx-[]--x-------------------------xxxx---------------------xxX-x----------------xxXXXXXXX-------x------------------xx--SSSSS---SSSSSSSSSSSSSSSSSS------------------------<>----[]----<>---<>-------------XX-[]--XXX------------------E-----<>--[]----------XX-------------------XXX--[]---SS---------------------------------xx<>-----<>-x--E--------------------------E-xx-X------x-----------------xxX-x-------------------------xx-X-----x--------xxxx----------E----ExxXXX-xXX------------------------xx--------------------------------XXXx?-------xx-----------------------------xxx---------------------------x------xx-XXXXXX--------x-------------------xxxxXXXXX----x---------------------------xxXXXX--x-----xxXXXXXX-SSS-------<>-x----------------xxXXXX-------x-------xxXXXXX-----x-------------xxXXXX-XXX------x------------xx-[]-------[]--x-xxxxxx---xx-[]--------[]----------E------E----<>---E-[]-----<>------[]--xx------xxxx-----xx------xx--xxx-----------xxXX<>-XX--x----------xx-[]----E<>[]----<>-x-----xx-[]XXXXXX------x-----------------------xx-------------------------------------------------------XXXXXX-x---------xx-----------x-----------xx-------------------------
-----x-----------------------E---B-----------------B-----E---B----------------xx-<>-x-----------xx<>-x----------xx<>-----xx---------xxxx--------xxX-x--B------xxXXXX-----XXXXXXX-------x-------------------------xx-[]--x---------------------------------xx-[]----<>-x-----xx-[]--------[]--x-----xxxx--------------------xx<>-x------xx-X------E--X----x--E-------------------------------------xxx---------xx---x---------xx<>--[]--[]---x-----------------------xxXXXX----x----------------xx----x-----xx-x------------------------xx<>-x------xxx-----X-X-XX--------x--- ---------------------------xx--[]---x-------xx---------------XXX-----------xx---------------xxx----------------xx--[]---[]---x----xx--[]-----------x-----xx<>-x---xxXXXXXXXX-------x-----------xx-----xxXXXXXXXX----[]---x-SSxx--[]----XXXXXXXX---XXX---------------------------------[]--------[]---<>--[]--[]----<>-----SS-----xxx------------SSS-S--SSSSSSSSSSSSSSSSSSSSSS--<>---[]--XXXXXXXXX-----[]------S------xx-------------------------------------x----xxx------------------------------------x--xx-----------------------------------------------------------x-----xx--------xxXXX-------------------------------------XXXXXX-----------x----<>-----xx-------------xxx----XXXX-x--xx------xxx-xx-[]--------x----E----------xx--XX----XX-----XX-----XX-----xx--------------------xxXXXXXXXXX-------x-------xx-[]--x--------------------xx----X----[]--x--------------------xx----XXXX------xx-----xxx--------------xx---X-x--------------------------------------xx-[]---X----<>-----------------X--x-----xx-X--x-xx<>-x-------------------------XXXXXXXX--------xxXXXXX---------------X--xxxx---X----------X--xxxxxxxxx---------------xxxxxxxxxxxxxxx--x----xx------------------------------------------xxxxxxxxxxxxxx------------x--------------------------xx--------------------------------[]----------------E---X-[]---x-------------------------B-------[]---<>--[]--[]---------------------------XXXXXXX-----------xxXXX-------------------x----xxxxx-----------------------------<>----<>---------------------XXX---------------------------------------<>------------------E----[]--------------x-------------xx<>-x---xxS-x--xx-X--x--xx--[]---x--------------xx-----x-----xx-----------xxXXXXXX------x----xx------[]----x----------------xxx---------xx<>-x-xx-[]---[]-----XXXXXXX----------x-------B----b---------------------xxx------xxxxx--xx-X--x--------------------------------------------------[]----B------------------?------------------------------------<>------X------------------x-----xx------x------------------------------------------------------------------------------------------------------------------------------------------------------------------SSx-----------------------xx---x--------xxx-xxx-xxx-----xxx--------xxb--x-------------------------xxxxxx-------------xxXXXXXXXX--------XXX--------XXXX---------XX---------x---xxB-x------------xxx-xxx------Exxx----------xxxx---------xxx-----xxxx----------------xxxxxx-xx-----XXXXXX---------x-------------------xx---E-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX------------x----xxx-----xx--SSSSSSSSSSSSS---------<>--------SSSSSSSSSSSSS--------------<>------------[]-------EEXXXX-[][]---x-----xx------------------X-------oooooooo---[]----S-S-S-----------------xxx--------------------------------------oo----xx---[]------x-----XX-xx---Exxx----------------------E---E--------xxx---xxXXXXXXX-E-------------------XXXX---x-------------xxxxx---xx---<>-x------xx---<>XX[]XX[]----------SS-SSxXXXX--X----XXXXXXXXXXXXXXX----X-------X---XXXXXXXXXXXXXXXX-----------XX---------XXX-----XXXEXXXXXXXXXXXEXXXXXX
XXX-----xxxxxxx------------XXX---XX---------XXX--------------XXXXXXX-----------------------------X------xx--[]---x----------------------xx--X----x-------------xx---------------------b--x------xxx-------xx<>-X-X------------x------xxx-----xxx----------xxx-E----xx--[]-----[]----------------[]--------------SSx------------E----------------xxxxx--xx-----x------------------xxX-xSSSSSSSSSSSX----------------xxx-------------------------xxx-----xxXXXX----------------xxxx----xx---X---xxx-X--x-----xx-X--X--X-----x-xx---SS-SS-------xxXXXXXXX---------------------------XXXXXXXXXXXXX-------------------x---xx----------------------------------------------------------------x------xxx--x-------xxxx----xx---[]---[]-------------S----[]----x------E-----xxxxxx--xx-[]-----------x-xxxxx---xxx---------xxxxxxxxx-----xxx--[]-------X------X----x-----xx--[]-------[]----XX-X-------S------xx-------------------------------------------------------------------x--xx-------[]---[]------------------------------------------------------------------------------------------------------x----xxXXXXE-------------xxxxxxxxxxxxxxxxx-------xx-----------------xx----x-------xxxxx-xx----------------------------------------x---xx---x---xx-------------------------x---------xx---------------------xx--------xxxXXXXX---x---xxB-x----------xxxxx-----------------xxx----------xx--[]---x--xx<>--------------------XXXXXXX-----------x--------xxx------xxx---xx--[]--x----------xx----------------------xx<>-x---xx-[]------[]---x-xx--[]---x-----xxx--------------xxxxx-------------------------------------------xxXXX--XXX-x--xxXXXX--XXX-x-xx<>-x----------xx<>-XXXXXXXX--------xx--------------xxxXXXXXXXXXX--XXXX-x-----------------SS-----------------------xxxxx----------------xx-[]----[]----<>-xxx-SS--x-----xxXXXXX------x------xxx---------xxxxxxx-----------o----------------------------xxxxxx------------------------------XXXX---------------------------x----xx--XXXXXX---------x---------xxXXXX---------XX---x--xx--[]---x------------------xx--[]---x-----------xxxxx---xx--[]---x----xx<>-----[]--[]--[]----x-xxxxx-------Exx-X--x---------------xx-[]------------XX---------x-----------------------------------xxxxx------------------<>---xxXXXX-------XX-XXX--x---------------------------------xxxxx---xxXXXXXX-----x---------------------Exx-XXX-------XXXXXXXXX-------xxx-------xx------------x--xxxxxxxxxxx---XXXXXXXXXX--------x---------------xx--XXXX-------------XXX---XXXXX------------------------------------x-xxxx-------xx-[]--x-----xxx-------------------------------------------xxxxx------------------xx-X--x-----xx--[]---x------xx--[]---x----------xxx----------xx<>-x------xxxxx-xxx----xx-X--x--------------xxXXXXXXXX--------xx--------xxx-xxxxx---SSSSS---SSSSSSSSSSSSSSSSSS------------------------[]----[]----[]---[]------------XXX-[]--XXX-----------------<>-----[]--[]---------XXXXXX---------------XXXX--[]---SS--------------------------------xx-[]-----[]--x-----------------------------xx--X-------x-------------B-xx-X--x-----------------------xx--X------x----B-xx<>-x---B----------xxXXXX--x--------xxx--------xxxxxxx-----------o-------------------------x---xxxxx------------------------------XXXX---------------------------x----xx--XXXXXX---------x---------xxx---xxxxXXXXXX-------x-------------xxx---------xxXXXXX---x---xxXXXXXXX-----------[]--x--------------xxXXXXX--------x-----xxXXXXXX--SSS-x------xxxxxxxXXXXX-XXX-------xxx--------xx--[]-------[]---xx<><>-x-xx--[]--------[]---------<>-----<>----[]--<>-[]-----[]------[]-<>-x----xx<>-x---xx-------<>--<>-x-xxxxx---xxXXX[]-XXX--x--------xx--[]---
<>[][]----[]--x---xx-X[]XXXXXX-------xxx-------xxx---xxx---xx--------------------XXXXX---------------------------------------x-------xx-------------x-xxxxx---xx--------------------------------x-----xxxx--------B--------b-------B--E------b---------b-----------B---xx--[]--x---------xx-[]--x------B-xx-[]----<>-x---B---xx<>-x------xx-X--x-b----xxxXXXXX----XXXXXXXX--------xx----------------------xx--[]---x----xxxxxxxxxx-xxx-------------xx--[]----[]--x---xx--[]--------[]---x---xx<>-x------------------xx-[]--x----xx--X---------X-----xxxxxxxxxxx----------------xxxxxxx-----xxX-x-------xx-----x-------xx-[]--[]--[]----x---------------------xxXXXXX-----x--xxxxxxxxxx--xxX---X-x---xx---xxxxxx-----------------xx-[]--x----xx<>---X-X-X-XX---------x-- -xxxxxxxxxxxxxxxxxxxxxxxxxxx---[]----xxxxxxxx-----------E---XXXX-----------X-xxxxxxxxxxxxxxxx-Exxxxxxxxxxxxxxxxx---[]---[]----xxxxx---[]------E-----xxxxxx-[]--xxxxXXXXXXXXX--------xxxxxxxxxxxxX-----XXXXXXXXXX----[]----xxxx---[]---XXXXXXXXX---------------------------------------[]--------[]---[]--[]--[]----[]-------E-EE--xX------------SSS-S--SSSSSSSSSSSSSSSSSSSSSS--[]---[]-XXXXXXXXXX-----[]------Sxxxxxxx------------------------------XXXXEXXEX----XXX-------------------------------------xxx-------------------------------------------------------------xxxxxxXxxxxxxxxxXXX-------------------------------------XXXXXX----<>-----<>----[]-----<>--------------x-----------xxx-------XX-xx--[]---------xxxxxxxxxxxxxxxxX----------------------------XX-xxxxxxxxxxxxxxxxxxxxxXXXXXXXXXX-------Xxxxxxxxx--[]---xxxxxxxxxxxxxxxxxxxxx-----XX---[]---xxxxxxxxxxxxxxxxxxxxxX----XXXXX-----<>-----XX-xxxxxxxxxxxxxxxX---X--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx--[]E--X----[]---E-------------X---xxxxxx--X---xx-[]--xxxxxxxxxxxxxxx---------XXXXXXXXX--------xXXXXXX---------------X--XXXX---X----------X-XXXXXXXXXX--------------XXXXXXXXXXXXXXXX---xxxxx-------------------------------------------XXXXXXXXXXXXXX-------------xxxxx--------------------xx---------------------------------[]---------------<>---X-[]----x---xxxxx---B------------b-------[]---[]--[]--[]--------------------------XXXXXXXX---------xxx------------------------xxxxx--x--------------------XXX-------[]----[]---------------------------------------------------------------[]-----------------<>----[]--------------Xxxxxxxxxxxxxxx-[]--xxxx-S--xxx--X---xxx---[]----xxxxxxxxxxxxxxxX-----X-----X-xxxxxxxxxxxxXXXXXXX-------xxxxx--X----[]-----xxxxxxxxxxxxxxxxx--xxxxxxxxxx-[]--xx--[]---[]----XXXXXXXX---------X-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx--xxxxxxx----xxx--X---xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[]----b-E-----B------E------------E---------------------------[]------X--------X----------xxxxxx--------xxxxx---------------------------------------------------------------------------------------------------------------E---------------------------------------------SS-xxxxxxxxxxxxxxxxxxxxxxxx-----xxxxxxxxx--xx--xx--xxxxxxE-xxxxxxxxx-BEE-xxxxxxxxxxxxxxxxxxxxxxxxxxE-EE-xxxxxxxxxxxxxxXXXXXXXXX--------XXX--------XXXX---------XX----------xxxx-b--xxxxxxxxxxxxx--xx--xxxxxxxx--xxxxxxxxxxxEE-xxxxxxxxxx--xxxxxxEE-xxxxxxxxxxxxxxxxx--EE-xxX-----XXXXXX----------xxxxxxxxxxxxxxxxxxxx------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----------X-xxxxx--xxxxxx---SSSSSSSSSSSSS---------[]-----------------------------------[]------------[]------E-XXXXX-[][]----xxxxxx---------------------------oooooooo---[]----S-S-Sxxxxxxxxxxxxxxxxxx--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----[]------B-----XXXX-xxxxxE-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxE-xxxxXXXXXXXX<>------------------
XXXXX----xxxxxxxxxxxxxxE---x-xx----[]--xxxxxxx----[]XX[]XX[]--------X-SS-SSXXXXX--X----X-------------X----X-------X---X--------------X------------------------------X-------X---------------X-----XXXXXXE----------------------------------------------X-----------------------------------Xxxxxxxx---[]----xxxxxxxxxxxxxxxxxxxxxxx--EX-----xxxxxxxxxxxxxxX---------------E-----B---xxxxxxxE-xxxxxxxx-[]-X-X-------------xxxxxxxE-xxxxxxB-xxxxxxxxxxxB-xxxxxxx---[]-----[]------E---------[]------------X-SS-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxx-------xxxxxxxxxxxxxxxxxxx-X--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxE-xxxxxxxxxxxxxxxxxxxxxxxxxx--xxxxxxXXXXX----oooooo-------x--xxxxx----X----x--X---xxxxxx--X--X--X------xxX---SS-SS-xxxxxxx--------------------------------------------------------------------xxxx------------------------------------------------------------------xxxxxxxX-xx-xxxxxxxxEE-xxxxxX---[]---[]------------------[]-----xxxxxxxxxxxxx---X-xxx--[]----E---EE--xx----xxxxE-xxxxxxxxxx---x--x-------x---[]----E--X------X-----xxxxxx---[]-------[]----XX-X-------xxxxxxxx---------------------------------------------------------------------xxx--------[]---[]----------------o---o--------------------------------------------------------------------------------X-xxxxxXXXXX-------------XXXXXXXXXXXEXXXXX-xxxxxxxx------------------XX-----xxxxxxxx----xx------------------------------------------x-xx-----xxxx---------------------------xxxxxxxxxx------Xxxxxxxxxxxxxxxxx----------xXXXXXX----xxxx-b--xxxxxxxxxxxB---xxxxxxxxxxxxxxxxxxE-xxxxxxxxxxx---[]----xxx-[]-----X-------------XXXXXXXX----------B-xxxxxxxxxE-xxxxxxxE-xxxx---[]---xxxxxxxxxxxXxxxxxxxxxxxxxxxxxxxxxxx-[]--xxxx--[]-E----[]----xx-E-[]----xxxxxx--xxxxxxxxxxxxxxx----xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxXXXX--XXXX-xxxXXXXX--XXXX-xx-[]--xxxxxxxxxxx-[]XXXXXXXXX--------X-xxxxxxxxxxxxxxxXXXXXXXXXXXXE-XXXX--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxxxxxxxxxxxxxx--[]----[]--E-[]--x--SS---xxxxxxXXXXXX-------xxxxxxx--xxxxxxxxxx--XXXX----------XXX----------------------------x--x----------------------------------------------ooo--------------xxxxx---XXXXXX--------X-xxxxxxxxxxXXXXX----EE---XX---Exxx---[]----xxxxxxxxxxxxxxxxxxx---[]----xxxxxxxxxxxx----xxxxE--[]----xxxxx-[]--E--[]--[]--[]-----xx----xxxxxxxxx--X---xxxxxxxxxxxxxxxx--[]E-----------XX--------X-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxxxxxxxxxxxxxxxxxxxxxXXXX--------X--XXX--Exxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxXXXXXXX-----Exxxxxxxxxxxxxxxxxxxxxxx--XXX------XXXXXXXXXX-------XX-xxxxxxxx--------------xxx---XXXXXXX----------------------xxxxxxxxxxxxxxxx--------------------------------------------------------------------xxXX-xxxxxxxx--[]---xxxxxx--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx----xxxxxxxxxxxxxxxxxxx--X---xxxxxx---[]----xxxxxxx---[]----xxxxxxxxxxx--xxxxxxxxxxx-[]--xxxxxxx----xx--xxxxx--X---xxxxxxxxxxxxxxxXXXXXXXXX--------X-xxxxxxxxx--xx--x---------oooSSSSSSSSSSSSSSSSSS------------------------[]-E--[]----[]---[]--------E-E-XXX-[]--XXX-----------------[]---E-[]--[]--------XXXX------------EE---XXXXX--[]---SSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx--[]-----[]---xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx---X--------xxxxxxxxxxxxxxxx--X---xxxxxxxxxxxxxxxxxxxxxxxxE-EX-------xxxxxxx-[]--xxxxxxxxxxxxxxxXXXXX---xxxxxxxxxX-xxxxxxxxx--XXXX----------XXX-------------------------xxxx--x----------------------------------------------ooo--------------xxxxx---XXXXXX--------X-xxxxxxxxxx--xxxxXXXXXXX----------xxxxxxxxxxxxxx--xxxxxxxxxxXXXXXX----xxxx----XXXX-----------[]---xxxxxxxxxxxxxxxXXXXXX--SSSSS--xxxxxxXXXXXXX-------xxxxxxx----xXXXXXX
-XXX-------XX-xxxxxxxxx---[]----E--[]----x[][]--xx---[]--------[]---------[]-----[]----[]--[]-[]-----[]------[]-[]--xxxxx-[]--xxxx--------[]--[]--xx----xxxxXXXX[]-XXX---xxxxxxxxx---[]---[][][]----[]---xxxx-XX[]XXXXXX-------XX-xxxxxxxx--xxxx--xxxx------------------------------------------------------------------xxxxxxxx---------------xx----xxxx----------------------------------xxxxxxXX-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx---[]---xxxxxxxxxx--[]---xxxxxxxxx--[]-E--[]--xxxxxxxx-[]--xxxxxxx--X---xxxxxxx-XXXXXX---XXXXXXXXX--------X-xxxxxxxxxxxxxxxxxxxxxxxE--[]----xxxxx----x----xx--xxxxxxxxxxxxxx---[]----[]---xxxx---[]--------[]-E-Exxxx-[]-Exxxxxxxxxxxxxxxxxxx--[]---xxxxxE--X---------X------x----x----xxxxxxxxxxxxxxxxx---x--xxxxxx-X--xxxxxxxx-------xxxxxxxx--[]--[]--[]-----xxxxxxxxxxxxxxxxxxxxxxXXXXXX------xxx----x----xxx-X-E-X--xxxx-----x----xxxxxxxxxxxxxxxxxx--[]---xxxxx-[]---X-X-X-XX--------X-xx -XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXX-------XXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXX---XX--XXXXXXXXXXXXX-----X---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----XXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---------------------------------XXXXXXXXXX----XXX---XX--XX--XX--XXXXXXXXXXX-XXXXXXX---------XXXSSS-S--XXXXXXXXXXXXXXXXXXXXXX--XX---XXXXXXXXXXXXXXXXXXXXXXXXXXXxXXXXXX--------------------------------------------------------------------------------XXEXXXX------------------------------------------------------------XXXXXXXxXXXXXXXXXXX-------------------------------------XXXXXX----[]-----[]----[]-----[]------------XXX-----------XXX-------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----------------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----XXX--[]XXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXX----[]------XXXXXXXXXXXXXXXXXX---X--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXX----------XXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXX--------XXXXXXXXXXXXXXXXXXXXXXXXX---------------X---------X----------X-X--------X--------------X--------------X--XXXXXXXX-----------------------------------------X------------X-------------XXXXXxxxxxxxxxxxxxxxxxxxxx-------------EE-------------------[]---------------[]---X-[]-E---xxxx----xxxxxxxxxxxxxxxxxxxxxxxx[]---[]--[]--[]-------------------------XXXXXXXXX--------XXXXXXXX---------------XXXXXXXXXXEEXEXX--------------------------X[]X--X[]XXXXX-------XXEX--------------XXXXX----------------------------[]-----------------[]----[]-------------XXXXXXXXXXXXXXXXXXXXXX<>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-----X-----XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXX-XXX-XX--XXXXX-XXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXX--X-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXX---XXXXXXXXX----XXXXX---------XXXXXXX--------XXXX------------------------------------------------------XXX-XXX--------XXXXXX-------------------------XXXXXXXXXXXXXXXX---------XXXXXXXXXXXXXX------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXX-----XXXXXXXXX--XX--XX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX------XXX--------XXXX---------XXXXXXXXX---XXXXXXXXXXXXXXXXXXXXX--XX--XXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXX-----XXXXXX----------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXX--XXXXXXE--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXEXXXXXX-XXXXXXEXXXXXXX----XXXXXXXXX--------------X
XXXXXXX---[]----S-S-SXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXEEXEXXXXXXXXXXXXEEXEXXXXXXXXXXXXXXXXXXXXXXXXXX-----XXXXXXXXX<>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX<>-XXXXXXXXXXXX[]------------------XXXXXXXXXXXXXXXXXXXXXX<>----xx-----XXXXXXXXXXX----[]XXXXXXXXXXXXXXXXXXXXXX-XXXXX--X----X-------------X----X-------X---X--------------X------------------------------X-------X---------------X----------------------------------------------------------X-----------------------------------XXXXXXXXXXXXXXXXXXXXXXEEXXXXXEXEXXXXXXXXXXXX-----XXEXXXXXXXXXXXX-------------XXXXXXXXXXXXXXXXXXXXXXXXEXXXXXXXXXXX-------------XXXXXX<>XXXXXXXXXXXXXXXXXXXXX-XXXXXXXEXXXX-----XXXXXXX<>XXXXXX---XXXXXXX-------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXX------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXX-----X--XXXXX--XXXXXXXXXXXX-XXXXXXX--X--X--X-----XXXXXXXXXXXXXXXXXXXX-----------------------------------------------------------------XXXXXXXXX---------------------------------------------------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXX-XXXXX--XXXXXXX---XXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXX-X-XXXXXXXXXXXXXXXXX-X-X--X-------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-------XX-XXXXX-XXXXXXXXXXXXXXXX------------------------------------------------XXXXXXXX------------XXXXXXX-----[]---XX-XXX------------X---X--------------XXX-------------------------------------------------------------X-XXXXXXXXXXX--------------XX-------------XX-XXXXXXXX------------------------XXXXXXXXX----XX-----------------------------------XXX-----xx-----XXXXXX-------------------------XXXXXXXXXXX------XXXXXXXXXXXXXXXXX------S---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXEEXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-------------XXXXXXXXXXXXXXXEXXXXXXXXXXXXXX-XXXXXXXX-XXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XX--XXXXXXXXXXXX-------XXXXXXXX--XXXXXXXXXX----------------------------------XXXX-----XXXXX-XXXXX-------------------------------------------XXX-------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXX--------X--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--------------XXX--------------------------------XXXXXXXXXXXXXXXX-----------------------------------XXX-------------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXX---XXX--XXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XX--X---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XX--XX------XXXX------XXXXXXXXXXXXXXX--XXXXXXXXX-----XXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX----------------------------------XXXX-----XXXXX-XXXXX-------------------------------------------XXX-------------XXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXX------XX--XXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXX---XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-------XX-XXXXXXXXXXXX------XXXXXXXXX---XXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX------XXXXXXXXXXXX-X-XXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXX--XXXX---------XXXX--XXXX----------------------------------------------XXXXXXXXX-XXXX------XXXXXX----XXXX-----XXXX------------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-X-XX-XX-XX-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-X-XX-X-XX-XXXXXXXXXXXXXXXXX-X-X-XXXXXXXXXXXXXXXXXXX------XXXXXXXXXX-XX--XX--XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX---XX---XXXXXXXXXXXXXXXX-----X----XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-X-X-X-XXXXXXXXXXXXXX""" diff --git a/spaces/multimodalart/stable-diffusion-inpainting/inpainting.py b/spaces/multimodalart/stable-diffusion-inpainting/inpainting.py deleted file mode 100644 index 798c3fd252f826762aee6970f867eee537249db8..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/stable-diffusion-inpainting/inpainting.py +++ /dev/null @@ -1,194 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, PNDMScheduler, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? 
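# Shape bookkeeping for the mask built above, assuming the input PIL mask has size (w, h):
#   resize to (w // 8, h // 8)        -> matches the VAE latent resolution
#   np.tile(mask, (4, 1, 1))          -> shape (4, h // 8, w // 8), one copy per latent channel
#   mask[None]                        -> shape (1, 4, h // 8, w // 8), adds the batch axis
#   .transpose(0, 1, 2, 3)            -> keeps the axes in their existing order, so it is
#                                        effectively a no-op and could be dropped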
- mask = 1 - mask # repaint white, keep black - mask = torch.from_numpy(mask) - return mask - -class StableDiffusionInpaintingPipeline(DiffusionPipeline): - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("pt") - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - init_image: torch.FloatTensor, - mask_image: torch.FloatTensor, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - ): - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - offset = 0 - if accepts_offset: - offset = 1 - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # preprocess image - init_image = preprocess_image(init_image).to(self.device) - - # encode the init image into latents and scale the latents - init_latent_dist = self.vae.encode(init_image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - # prepare init_latents noise to latents - init_latents = torch.cat([init_latents] * batch_size) - init_latents_orig = init_latents - - # preprocess mask - mask = preprocess_mask(mask_image).to(self.device) - mask = torch.cat([mask] * batch_size) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError(f"The mask and init_image should be the same size!") - - # get the original timestep using init_timestep - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - timesteps = self.scheduler.timesteps[-init_timestep] - timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device) - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=self.device) - init_latents = self.scheduler.add_noise(init_latents, noise, timesteps) - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
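# Concretely, with w = guidance_scale the prediction used further below is
#   noise_pred = noise_pred_uncond + w * (noise_pred_text - noise_pred_uncond)
# so w = 1 collapses to the plain text-conditional prediction (no guidance), and
# larger w pushes the denoising direction further toward the prompt.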
- do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - t_start = max(num_inference_steps - init_timestep + offset, 0) - for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"] - - # masking - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t) - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - # run safety checker - safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) - image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - return {"sample": image, "nsfw_content_detected": has_nsfw_concept} \ No newline at end of file diff --git a/spaces/naver-ai/DenseDiffusion/utils.py b/spaces/naver-ai/DenseDiffusion/utils.py deleted file mode 100644 index db56ef24b9d98fb9b6077989804a04ad428f0234..0000000000000000000000000000000000000000 --- a/spaces/naver-ai/DenseDiffusion/utils.py +++ /dev/null @@ -1,105 +0,0 @@ -import torch -import base64 -import gradio as gr -import numpy as np -from PIL import Image -from io import BytesIO - -MAX_COLORS = 12 - - -def create_binary_matrix(img_arr, target_color): - mask = np.all(img_arr == target_color, axis=-1) - binary_matrix = mask.astype(int) - return binary_matrix - -def preprocess_mask(mask_, h, w, device): - mask = np.array(mask_) - mask = mask.astype(np.float32) - mask = mask[None, None] - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask).to(device) - mask = torch.nn.functional.interpolate(mask, 
size=(h, w), mode='nearest') - return mask - -def process_sketch(canvas_data): - binary_matrixes = [] - base64_img = canvas_data['image'] - image_data = base64.b64decode(base64_img.split(',')[1]) - image = Image.open(BytesIO(image_data)).convert("RGB") - im2arr = np.array(image) - colors = [tuple(map(int, rgb[4:-1].split(','))) for rgb in canvas_data['colors']] - colors_fixed = [] - - r, g, b = 255, 255, 255 - binary_matrix = create_binary_matrix(im2arr, (r,g,b)) - binary_matrixes.append(binary_matrix) - binary_matrix_ = np.repeat(np.expand_dims(binary_matrix, axis=(-1)), 3, axis=(-1)) - colored_map = binary_matrix_*(r,g,b) + (1-binary_matrix_)*(50,50,50) - colors_fixed.append(gr.update(value=colored_map.astype(np.uint8))) - - for color in colors: - r, g, b = color - if any(c != 255 for c in (r, g, b)): - binary_matrix = create_binary_matrix(im2arr, (r,g,b)) - binary_matrixes.append(binary_matrix) - binary_matrix_ = np.repeat(np.expand_dims(binary_matrix, axis=(-1)), 3, axis=(-1)) - colored_map = binary_matrix_*(r,g,b) + (1-binary_matrix_)*(50,50,50) - colors_fixed.append(gr.update(value=colored_map.astype(np.uint8))) - - visibilities = [] - colors = [] - for n in range(MAX_COLORS): - visibilities.append(gr.update(visible=False)) - colors.append(gr.update()) - for n in range(len(colors_fixed)): - visibilities[n] = gr.update(visible=True) - colors[n] = colors_fixed[n] - - return [gr.update(visible=True), binary_matrixes, *visibilities, *colors] - -def process_prompts(binary_matrixes, *seg_prompts): - return [gr.update(visible=True), gr.update(value=' , '.join(seg_prompts[:len(binary_matrixes)]))] - -def process_example(layout_path, all_prompts, seed_): - - all_prompts = all_prompts.split('***') - - binary_matrixes = [] - colors_fixed = [] - - im2arr = np.array(Image.open(layout_path))[:,:,:3] - unique, counts = np.unique(np.reshape(im2arr,(-1,3)), axis=0, return_counts=True) - sorted_idx = np.argsort(-counts) - - binary_matrix = create_binary_matrix(im2arr, (0,0,0)) - binary_matrixes.append(binary_matrix) - binary_matrix_ = np.repeat(np.expand_dims(binary_matrix, axis=(-1)), 3, axis=(-1)) - colored_map = binary_matrix_*(255,255,255) + (1-binary_matrix_)*(50,50,50) - colors_fixed.append(gr.update(value=colored_map.astype(np.uint8))) - - for i in range(len(all_prompts)-1): - r, g, b = unique[sorted_idx[i]] - if any(c != 255 for c in (r, g, b)) and any(c != 0 for c in (r, g, b)): - binary_matrix = create_binary_matrix(im2arr, (r,g,b)) - binary_matrixes.append(binary_matrix) - binary_matrix_ = np.repeat(np.expand_dims(binary_matrix, axis=(-1)), 3, axis=(-1)) - colored_map = binary_matrix_*(r,g,b) + (1-binary_matrix_)*(50,50,50) - colors_fixed.append(gr.update(value=colored_map.astype(np.uint8))) - - visibilities = [] - colors = [] - prompts = [] - for n in range(MAX_COLORS): - visibilities.append(gr.update(visible=False)) - colors.append(gr.update()) - prompts.append(gr.update()) - - for n in range(len(colors_fixed)): - visibilities[n] = gr.update(visible=True) - colors[n] = colors_fixed[n] - prompts[n] = all_prompts[n+1] - - return [gr.update(visible=True), binary_matrixes, *visibilities, *colors, *prompts, - gr.update(visible=True), gr.update(value=all_prompts[0]), int(seed_)] diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gasturb 12 Download Crack Gta.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gasturb 12 Download Crack Gta.md deleted file mode 100644 index 390d7d4f3e40ff756b177a52bef0e62e86dc95f3..0000000000000000000000000000000000000000 --- 
a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gasturb 12 Download Crack Gta.md +++ /dev/null @@ -1,20 +0,0 @@ -
    -

    How to Download Gasturb 12 Crack for GTA Online

    -

    Gasturb 12 is a software that simulates gas turbine performance and can be used for various applications. However, it is not a free software and requires a license to use. Some people may want to download a cracked version of Gasturb 12 to use it without paying for it. This article will show you how to download Gasturb 12 crack for GTA Online, a popular online game that features gas turbines in some of its vehicles.
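As a rough illustration of the kind of calculation a gas turbine performance tool deals with (a generic textbook formula, not GasTurb's actual code), the thermal efficiency of an ideal Brayton cycle depends only on the compressor pressure ratio and the heat capacity ratio of the working gas:

```python
def ideal_brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """Thermal efficiency of an ideal Brayton (gas turbine) cycle."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# For air (gamma = 1.4) and a pressure ratio of 10, efficiency is roughly 48%.
print(ideal_brayton_efficiency(10.0))
```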

    -

    -

    Before you proceed, you should be aware that downloading cracked software is illegal and may expose your computer to viruses and malware. You may also face legal consequences if you are caught using pirated software. Therefore, we do not recommend or endorse downloading Gasturb 12 crack for GTA Online or any other purpose.

    -

    If you still want to download Gasturb 12 crack for GTA Online, you will need to follow these steps:

    -
      -
    1. Go to this link and download the PDF file that contains the instructions and the download link for Gasturb 12 crack[^1^]. The file name is darale.pdf and it is about 4 MB in size.
    2. -
    3. Open the PDF file and follow the instructions carefully. You will need to install a software called WinRAR to extract the Gasturb 12 crack files from the compressed archive.
    4. -
    5. After extracting the files, run the setup.exe file and follow the installation wizard. You will need to enter a serial number that is provided in the PDF file.
    6. -
    7. Once the installation is complete, copy the crack files from the Crack folder and paste them into the Gasturb 12 installation folder. This will overwrite the original files and activate the software.
    8. -
    9. Launch Gasturb 12 and enjoy using it for GTA Online or any other purpose.
    10. -
    -

    Congratulations! You have successfully downloaded Gasturb 12 crack for GTA Online. However, you should be careful not to update the software or connect it to the internet, as this may cause it to stop working or alert the authorities. You should also scan your computer regularly for any viruses or malware that may have been installed along with the crack.

    -

    If you want to use Gasturb 12 legally and safely, you should buy a license from the official website[^2^]. You can choose from different packages depending on your needs and budget. You can also download a free 14-day trial version to test the software before buying it.

    -

    -

    GTA Online is a dynamic and ever-evolving online universe for up to 30 players, including all existing gameplay upgrades and content released since launch ready to enjoy solo or with friends. It is available for PlayStation 5 and Xbox Series X|S[^3^]. You can learn more about GTA Online and how to play it from the official website[^3^].

    In this article, we have shown you how to download Gasturb 12 crack for GTA Online. However, we have also warned you about the risks and consequences of using pirated software. We hope that you will make an informed decision and respect the intellectual property rights of the developers.

    -

    If you have any questions or feedback about Gasturb 12 or GTA Online, you can contact us through our website or social media channels. We would love to hear from you and help you with any issues you may have. Thank you for reading and have a great day!

    7196e7f11a
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Polygonal Design Unfold3D 40 32bit.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Polygonal Design Unfold3D 40 32bit.md deleted file mode 100644 index 813afd2e64cf7a92d6ead9108cb9bd8071015074..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Polygonal Design Unfold3D 40 32bit.md +++ /dev/null @@ -1,24 +0,0 @@ - -

    Polygonal Design Unfold3D 40 32bit: A Powerful UV Mapping Solution for 3D Artists

    -

UV mapping is the process of unwrapping the surface of a 3D model onto a 2D plane so that a texture image can be applied to it. It is an essential step in creating realistic and detailed textures for 3D objects. However, UV mapping can be challenging and time-consuming, especially for complex models with many polygons.
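To make the idea concrete, here is a minimal sketch in plain NumPy (a deliberately naive example, unrelated to UNFOLD3D itself) of the crudest possible unwrap: a planar projection that drops one axis and normalizes the result to the unit UV square. Dedicated unwrapping tools exist precisely because this approach distorts and overlaps badly on anything but flat geometry.

```python
import numpy as np

def planar_uv_projection(vertices):
    """Project 3D vertices onto the XY plane and normalize them to the [0, 1] UV square.

    vertices: (N, 3) array-like of positions; returns an (N, 2) array of UV coordinates.
    """
    uv = np.asarray(vertices, dtype=float)[:, :2]  # drop the Z axis
    uv = uv - uv.min(axis=0)                       # shift into the positive quadrant
    extent = uv.max(axis=0)
    extent[extent == 0] = 1.0                      # guard against degenerate geometry
    return uv / extent                             # scale to the unit square

# UVs for the eight corners of a unit cube: the top and bottom faces land on top of
# each other, which is exactly the kind of overlap a real unwrapper avoids.
cube = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
        [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]
print(planar_uv_projection(cube))
```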

    -

That's why Polygonal Design, a French company founded in 2001, developed UNFOLD3D, a software package that simplifies and automates the UV mapping process. UNFOLD3D uses advanced algorithms to generate high-quality UVs with minimal distortion and stretching. It also offers intuitive tools for editing and optimizing the UV layout, such as seam placement, island packing, overlap detection, and more.

    -

    -

    Polygonal Design recently released UNFOLD3D 40 32bit, the latest version of its software, compatible with Windows, macOS, and Linux operating systems. UNFOLD3D 40 32bit includes new features and improvements, such as:

    -
• A new super fast autoseams algorithm that automatically cuts and unfolds the model in seconds.
• A new BLINK UVs edition that offers a low-cost option for users who only need the autoseams feature.
• A new SDK that allows software developers to integrate UNFOLD3D features into their own applications.
• A new online help that explains the icons and keyboard shortcuts of UNFOLD3D.
-

    UNFOLD3D 40 32bit is available in three editions: BLINK UVs, UV Wizard, and UV Wizard Floating. The prices range from 99 Euros to 699 Euros depending on the edition and license type. Users can also download a free trial version from the Polygonal Design website.

    -

    UNFOLD3D is trusted by many professional and freelance 3D artists around the world. It has been used for various projects in video games, animation, VFX, architecture, and more. UNFOLD3D has also been chosen by Autodesk for its reliability and quality.

    -

    If you are looking for a fast, easy, and powerful UV mapping solution for your 3D models, you should definitely check out Polygonal Design UNFOLD3D 40 32bit. You will be amazed by how much time and effort you can save with this software.

    -

    For more information, visit http://polygonal-design.fr/

    -

    - -

UNFOLD3D 40 32bit is not only fast and easy to use, but also versatile and customizable. Users can choose from different unwrapping methods, such as angle-based, conformal, or mixed. They can also adjust various parameters, such as stretch tolerance, island margin, or packing quality. Moreover, users can paint the UV density directly on the model to control the texel ratio and avoid wasting texture space.
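For readers unfamiliar with the term, the texel ratio (often called texel density) relates texture pixels to world-space surface area. The sketch below shows one way to measure it for a single triangle; it is a generic illustration with made-up inputs, not UNFOLD3D code.

```python
import numpy as np

def texel_density(tri_3d, tri_uv, texture_size=1024):
    """Approximate texels per world unit for one textured triangle."""
    tri_3d = np.asarray(tri_3d, dtype=float)
    # 3D surface area from the cross product of two edge vectors
    area_3d = 0.5 * np.linalg.norm(np.cross(tri_3d[1] - tri_3d[0], tri_3d[2] - tri_3d[0]))
    # UV-space area in texels (2D shoelace formula), scaled by the texture resolution
    uv = np.asarray(tri_uv, dtype=float) * texture_size
    e1, e2 = uv[1] - uv[0], uv[2] - uv[0]
    area_uv = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    return (area_uv / area_3d) ** 0.5  # texels per unit of edge length

tri_3d = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
tri_uv = [[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]]
print(texel_density(tri_3d, tri_uv))  # 512.0 texels per world unit in this example
```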

    -

    UNFOLD3D 40 32bit has received positive reviews from many users who praised its speed, quality, and usability. For example, Creative Bloq gave it a 4-star rating and said: "If you’re working with low to normally dense meshes, Unfold 3D is true to its word, and unwraps significantly faster than most other standalone UV mappers on the market. Some features, like the new alignment and straightening tools are on par with UV Layout’s, and probably a little more intuitive to use."[1]

    -

    Another user commented on YouTube: "Unfold3D is the default algorithm used for the Peel functions now inside of 3ds Max with the Unwrap UVW modifier. In the Edit UV window of the Unwrap UVW modifier you will see that the Peel section has changed to reflect a new set of options. A rollout menu is now provided so that you can choose to use the old LSCM method from before, or stick with Unfold3D."[2]

    -

    If you want to learn more about how to use UNFOLD3D 40 32bit, you can watch some tutorials on YouTube, such as this one: Unfold3D Basic Unfolding Tutorial. You can also visit the Polygonal Design website for more information and support.

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/nightfury/Colorizer_Models/colorizers/siggraph17.py b/spaces/nightfury/Colorizer_Models/colorizers/siggraph17.py deleted file mode 100644 index 775a23f25d03f3bf1761e5d2bbf4b400eb2c6047..0000000000000000000000000000000000000000 --- a/spaces/nightfury/Colorizer_Models/colorizers/siggraph17.py +++ /dev/null @@ -1,168 +0,0 @@ -import torch -import torch.nn as nn - -from .base_color import * - -class SIGGRAPHGenerator(BaseColor): - def __init__(self, norm_layer=nn.BatchNorm2d, classes=529): - super(SIGGRAPHGenerator, self).__init__() - - # Conv1 - model1=[nn.Conv2d(4, 64, kernel_size=3, stride=1, padding=1, bias=True),] - model1+=[nn.ReLU(True),] - model1+=[nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=True),] - model1+=[nn.ReLU(True),] - model1+=[norm_layer(64),] - # add a subsampling operation - - # Conv2 - model2=[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=True),] - model2+=[nn.ReLU(True),] - model2+=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),] - model2+=[nn.ReLU(True),] - model2+=[norm_layer(128),] - # add a subsampling layer operation - - # Conv3 - model3=[nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1, bias=True),] - model3+=[nn.ReLU(True),] - model3+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),] - model3+=[nn.ReLU(True),] - model3+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),] - model3+=[nn.ReLU(True),] - model3+=[norm_layer(256),] - # add a subsampling layer operation - - # Conv4 - model4=[nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model4+=[nn.ReLU(True),] - model4+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model4+=[nn.ReLU(True),] - model4+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model4+=[nn.ReLU(True),] - model4+=[norm_layer(512),] - - # Conv5 - model5=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model5+=[nn.ReLU(True),] - model5+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model5+=[nn.ReLU(True),] - model5+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model5+=[nn.ReLU(True),] - model5+=[norm_layer(512),] - - # Conv6 - model6=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model6+=[nn.ReLU(True),] - model6+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model6+=[nn.ReLU(True),] - model6+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),] - model6+=[nn.ReLU(True),] - model6+=[norm_layer(512),] - - # Conv7 - model7=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model7+=[nn.ReLU(True),] - model7+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model7+=[nn.ReLU(True),] - model7+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),] - model7+=[nn.ReLU(True),] - model7+=[norm_layer(512),] - - # Conv7 - model8up=[nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=True)] - model3short8=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),] - - model8=[nn.ReLU(True),] - model8+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),] - model8+=[nn.ReLU(True),] - model8+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),] - model8+=[nn.ReLU(True),] - model8+=[norm_layer(256),] - - # Conv9 - 
model9up=[nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=True),] - model2short9=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),] - # add the two feature maps above - - model9=[nn.ReLU(True),] - model9+=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),] - model9+=[nn.ReLU(True),] - model9+=[norm_layer(128),] - - # Conv10 - model10up=[nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1, bias=True),] - model1short10=[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=True),] - # add the two feature maps above - - model10=[nn.ReLU(True),] - model10+=[nn.Conv2d(128, 128, kernel_size=3, dilation=1, stride=1, padding=1, bias=True),] - model10+=[nn.LeakyReLU(negative_slope=.2),] - - # classification output - model_class=[nn.Conv2d(256, classes, kernel_size=1, padding=0, dilation=1, stride=1, bias=True),] - - # regression output - model_out=[nn.Conv2d(128, 2, kernel_size=1, padding=0, dilation=1, stride=1, bias=True),] - model_out+=[nn.Tanh()] - - self.model1 = nn.Sequential(*model1) - self.model2 = nn.Sequential(*model2) - self.model3 = nn.Sequential(*model3) - self.model4 = nn.Sequential(*model4) - self.model5 = nn.Sequential(*model5) - self.model6 = nn.Sequential(*model6) - self.model7 = nn.Sequential(*model7) - self.model8up = nn.Sequential(*model8up) - self.model8 = nn.Sequential(*model8) - self.model9up = nn.Sequential(*model9up) - self.model9 = nn.Sequential(*model9) - self.model10up = nn.Sequential(*model10up) - self.model10 = nn.Sequential(*model10) - self.model3short8 = nn.Sequential(*model3short8) - self.model2short9 = nn.Sequential(*model2short9) - self.model1short10 = nn.Sequential(*model1short10) - - self.model_class = nn.Sequential(*model_class) - self.model_out = nn.Sequential(*model_out) - - self.upsample4 = nn.Sequential(*[nn.Upsample(scale_factor=4, mode='bilinear'),]) - self.softmax = nn.Sequential(*[nn.Softmax(dim=1),]) - - def forward(self, input_A, input_B=None, mask_B=None): - if(input_B is None): - input_B = torch.cat((input_A*0, input_A*0), dim=1) - if(mask_B is None): - mask_B = input_A*0 - - conv1_2 = self.model1(torch.cat((self.normalize_l(input_A),self.normalize_ab(input_B),mask_B),dim=1)) - conv2_2 = self.model2(conv1_2[:,:,::2,::2]) - conv3_3 = self.model3(conv2_2[:,:,::2,::2]) - conv4_3 = self.model4(conv3_3[:,:,::2,::2]) - conv5_3 = self.model5(conv4_3) - conv6_3 = self.model6(conv5_3) - conv7_3 = self.model7(conv6_3) - - conv8_up = self.model8up(conv7_3) + self.model3short8(conv3_3) - conv8_3 = self.model8(conv8_up) - conv9_up = self.model9up(conv8_3) + self.model2short9(conv2_2) - conv9_3 = self.model9(conv9_up) - conv10_up = self.model10up(conv9_3) + self.model1short10(conv1_2) - conv10_2 = self.model10(conv10_up) - out_reg = self.model_out(conv10_2) - - conv9_up = self.model9up(conv8_3) + self.model2short9(conv2_2) - conv9_3 = self.model9(conv9_up) - conv10_up = self.model10up(conv9_3) + self.model1short10(conv1_2) - conv10_2 = self.model10(conv10_up) - out_reg = self.model_out(conv10_2) - - return self.unnormalize_ab(out_reg) - -def siggraph17(pretrained=True): - model = SIGGRAPHGenerator() - if(pretrained): - import torch.utils.model_zoo as model_zoo - model.load_state_dict(model_zoo.load_url('https://colorizers.s3.us-east-2.amazonaws.com/siggraph17-df00044c.pth',map_location='cpu',check_hash=True)) - return model - diff --git a/spaces/nomic-ai/EleutherAI_lambada_openai/index.html b/spaces/nomic-ai/EleutherAI_lambada_openai/index.html deleted file mode 100644 index 
9392a889f571af457bf6670a202d565690e3c963..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/EleutherAI_lambada_openai/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - EleutherAI/lambada_openai - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nomic-ai/Helsinki-NLP_tatoeba_mt/README.md b/spaces/nomic-ai/Helsinki-NLP_tatoeba_mt/README.md deleted file mode 100644 index 0c68986d16175e50d98acafc42e3b523cdc75c3f..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/Helsinki-NLP_tatoeba_mt/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Helsinki-NLP/tatoeba_mt -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/__init__.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/setup.py b/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/setup.py deleted file mode 100644 index 9f252b834381d58037d0aec161fd2506a00e730d..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/setup.py +++ /dev/null @@ -1,24 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -from distutils.extension import Extension -import numpy as np - -# To compile and install locally run "python setup.py build_ext --inplace" -# To install library to Python site-packages run "python setup.py build_ext install" - -ext_modules = [ - Extension( - '_mask', - sources=['common/maskApi.c', '_mask.pyx'], - include_dirs = [np.get_include(), 'common'], - extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'], - ) -] - -setup(name='pycocotools', - packages=['pycocotools'], - package_dir = {'pycocotools': '.'}, - version='2.0', - ext_modules= - cythonize(ext_modules) - ) diff --git a/spaces/ori1026/OriChatGPT/run_Linux.sh b/spaces/ori1026/OriChatGPT/run_Linux.sh deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/ori1026/OriChatGPT/run_Linux.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! 
git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/p-baleine/metaanalyser/metaanalyser/chains/overview/prompt.py b/spaces/p-baleine/metaanalyser/metaanalyser/chains/overview/prompt.py deleted file mode 100644 index be9fb9e495fcd2365a5dae6e11dfcc9c381e2fa8..0000000000000000000000000000000000000000 --- a/spaces/p-baleine/metaanalyser/metaanalyser/chains/overview/prompt.py +++ /dev/null @@ -1,79 +0,0 @@ -from langchain.output_parsers import PydanticOutputParser -from langchain.prompts import ( - ChatPromptTemplate, - PromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from pydantic import BaseModel, Field -from typing import List - - -class Overview(BaseModel): - - title: str = Field(description="title of the systematic review") - main_points: List[str] = Field(description="main points that make up the systematic review") - overview: str = Field(description="overview of the systematic review") - - def __str__(self): - points = "\n - ".join(self.main_points) - return f""" -Title: {self.title} -Points: - - {points} -Overview: {self.overview} -""".strip() - - def _repr_html_(self): - main_points = "".join([f"
  12. {p}
  13. " for p in self.main_points]) - - return ( - "
    " - f"
    Title:" - f" {self.title}" - f"
    " - f"
    Main points:" - f"
      {main_points}
    " - f"
    " - f"
    Overview:" - f" {self.overview}" - f"
    " - "
    " - ) - - -output_parser = PydanticOutputParser(pydantic_object=Overview) - -system_template = "You are a research scientist and intereseted in {categories}. You are working on writing a systematic review regarding \"{query}\"." -system_prompt = SystemMessagePromptTemplate.from_template(system_template) - -human_template = """Write an overview of the systematic review based on the summary of the following list of paper abstracts. - ------ -{abstracts} ------ - -This overview should serve as a compass for you as you construct the outline of the systematic review and write down its details. - -Assuming that the readers of this systematic review will not be familiar with the field. In order to make it easy for readers who are not familiar with this field to understand, list the main points briefly (approximately 30 words maximum) based on the following points. - -- Motivation for this field and the problem this field are trying to solve -- Historical background of this field -- Future development of this field - -Based on these main points, provide an overview of the systematic review regarding {query} you will write. - -Finally, write the title of the systematic review you are going to write based on this overview. - -{format_instructions}""" -human_prompt = HumanMessagePromptTemplate( - prompt=PromptTemplate( - template=human_template, - input_variables=["abstracts", "query"], - partial_variables={ - "format_instructions": output_parser.get_format_instructions() - } - ) -) - -OVERVIEW_PROMPT = ChatPromptTemplate.from_messages([system_prompt, human_prompt]) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CONTRIBUTING.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CONTRIBUTING.md deleted file mode 100644 index ae2be777aa37e956b5ff791523c20ba7b918799a..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/CONTRIBUTING.md +++ /dev/null @@ -1,505 +0,0 @@ - - -# How to contribute to Diffusers 🧨 - -We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it! - -Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Join us on Discord - -Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility. - -We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. - -## Overview - -You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to -the core library. 
- -In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. - -* 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR). -* 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose) -* 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues) -* 4. Fix a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). -* 5. Contribute to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source). -* 6. Contribute a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples) -* 7. Contribute to the [examples](https://github.com/huggingface/diffusers/tree/main/examples). -* 8. Fix a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22). -* 9. Add a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md). - -As said before, **all contributions are valuable to the community**. -In the following, we will explain each contribution a bit more in detail. - -For all contributions 4.-9. you will need to open a PR. It is explained in detail how to do so in [Opening a pull requst](#how-to-open-a-pr) - -### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord - -Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to): -- Reports of training or inference experiments in an attempt to share knowledge -- Presentation of personal projects -- Questions to non-official training examples -- Project proposals -- General feedback -- Paper summaries -- Asking for help on personal projects that build on top of the Diffusers library -- General questions -- Ethical questions regarding diffusion models -- ... - -Every question that is asked on the forum or on Discord actively encourages the community to publicly -share knowledge and might very well help a beginner in the future that has the same question you're -having. Please do pose any questions you might have. -In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. - -**Please** keep in mind that the more effort you put into asking or answering a question, the higher -the quality of the publicly documented knowledge. 
In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. -In short, a high quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accesible*, and *well-formated/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. - -**NOTE about channels**: -[*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that we posted some time ago. -In addition, questions and answers posted in the forum can easily be linked to. -In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication. -While it will most likely take less time for you to get an answer to your question on Discord, your -question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. - -### 2. Opening new issues on the GitHub issues tab - -The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of -the problems they encounter. So thank you for reporting an issue. - -Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. - -In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR). - -**Please consider the following guidelines when opening a new issue**: -- Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). -- Please never report a new issue on another (related) issue. If another issue is highly related, please -open a new issue nevertheless and link to the related issue. -- Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English. -- Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that `python -c "import diffusers; print(diffusers.__version__)"` is higher or matches the latest Diffusers version. -- Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. - -New issues usually include the following. - -#### 2.1. Reproducible, minimal bug reports. - -A bug report should always have a reproducible code snippet and be as minimal and concise as possible. 
-This means in more detail: -- Narrow the bug down as much as you can, **do not just dump your whole code file** -- Format your code -- Do not include any external libraries except for Diffusers depending on them. -- **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue. -- Explain the issue. If the reader doesn't know what the issue is and why it is an issue, she cannot solve it. -- **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. -- If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. - -For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. - -You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new/choose). - -#### 2.2. Feature requests. - -A world-class feature request addresses the following points: - -1. Motivation first: -* Is it related to a problem/frustration with the library? If so, please explain -why. Providing a code snippet that demonstrates the problem is best. -* Is it related to something you would need for a project? We'd love to hear -about it! -* Is it something you worked on and think could benefit the community? -Awesome! Tell us what problem it solved for you. -2. Write a *full paragraph* describing the feature; -3. Provide a **code snippet** that demonstrates its future use; -4. In case this is related to a paper, please attach a link; -5. Attach any additional information (drawings, screenshots, etc.) you think may help. - -You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=). - -#### 2.3 Feedback. - -Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. -If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. - -You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=). - -#### 2.4 Technical questions. - -Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on -why this part of the code is difficult to understand. 
- -You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml). - -#### 2.5 Proposal to add a new model, scheduler, or pipeline. - -If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: - -* Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. -* Link to any of its open-source implementation. -* Link to the model weights if they are available. - -If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don't forget -to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. - -You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml). - -### 3. Answering issues on the GitHub issues tab - -Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. -Some tips to give a high-quality answer to an issue: -- Be as concise and minimal as possible -- Stay on topic. An answer to the issue should concern the issue and only the issue. -- Provide links to code, papers, or other sources that prove or encourage your point. -- Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. - -Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great -help to the maintainers if you can answer such issues, encouraging the author of the issue to be -more precise, provide the link to a duplicated issue or redirect them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR) - -If you have verified that the issued bug report is correct and requires a correction in the source code, -please have a look at the next sections. - -For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull requst](#how-to-open-a-pr) section. - -### 4. Fixing a "Good first issue" - -*Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already -explains how a potential solution should look so that it is easier to fix. -If the issue hasn't been closed and you would like to try to fix this issue, you can just leave a message "I would like to try this issue.". There are usually three scenarios: -- a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. -- b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. -- c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. 
PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. - - -### 5. Contribute to the documentation - -A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly -valuable contribution**. - -Contributing to the library can have many forms: - -- Correcting spelling or grammatical errors. -- Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it. -- Correct the shape or dimensions of a docstring input or output tensor. -- Clarify documentation that is hard to understand or incorrect. -- Update outdated code examples. -- Translating the documentation to another language. - -Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected, adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source). - -Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally. - - -### 6. Contribute a community pipeline - -[Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user. -Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview). -We support two types of pipelines: - -- Official Pipelines -- Community Pipelines - -Both official and community pipelines follow the same design and consist of the same type of components. - -Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code -resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines). -In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested. -They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution. - -The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all -possible ways diffusion models can be used for inference, but some of them may be of interest to the community. -Officially released diffusion pipelines, -such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures -high quality of maintenance, no backward-breaking code changes, and testing. -More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. 
This is one of the ways we strive to be a community-driven library. - -To add a community pipeline, one should add a .py file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline. - -An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400). - -Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. - -Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the -core package. - -### 7. Contribute to training examples - -Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples). - -We support two types of training examples: - -- Official training examples -- Research training examples - -Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders. -The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community. -This is because of the same reasons put forward in [6. Contribute a community pipeline](#contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. -If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author. - -Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the -training examples, it is required to clone the repository: - -``` -git clone https://github.com/huggingface/diffusers -``` - -as well as to install all additional dependencies required for training: - -``` -pip install -r /examples//requirements.txt -``` - -Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt). - -Training examples of the Diffusers library should adhere to the following philosophy: -- All the code necessary to run the examples should be found in a single Python file -- One should be able to run the example from the command line with `python .py --args` -- Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. 
As a byproduct of this point, our examples also strive to serve as good educational materials. - -To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of how they should look like. -We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated -with Diffusers. -Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include: -- An example command on how to run the example script as shown [here e.g.](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch). -- A link to some training results (logs, models, ...) that show what the user can expect as shown [here e.g.](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5). -- If you are adding a non-official/research training example, **please don't forget** to add a sentence that you are maintaining this training example which includes your git handle as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations). - -If you are contributing to the official training examples, please also make sure to add a test to [examples/test_examples.py](https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py). This is not necessary for non-official training examples. - -### 8. Fixing a "Good second issue" - -*Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are -usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). -The issue description usually gives less guidance on how to fix the issue and requires -a decent understanding of the library by the interested contributor. -If you are interested in tackling a second good issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR. -Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. - -### 9. Adding pipelines, models, schedulers - -Pipelines, models, and schedulers are the most important pieces of the Diffusers library. -They provide easy access to state-of-the-art diffusion technologies and thus allow the community to -build powerful generative AI applications. - -By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. 
- -Diffusers has a couple of open feature requests for all three components - feel free to gloss over them -if you don't know yet what specific component you would like to add: -- [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) -- [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) - -Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) a read to better understand the design of any of the three components. Please be aware that -we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy -as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please -open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design -pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. - -Please make sure to add links to the original codebase/paper to the PR and ideally also ping the -original author directly on the PR so that they can follow the progress and potentially help with questions. - -If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help. - -## How to write a good issue - -**The better your issue is written, the higher the chances that it will be quickly resolved.** - -1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose). -2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers". -3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. -4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. 
If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. -5. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. -6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information. -7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. - -## How to write a good PR - -1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. -2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once. -3. If helpful, try to add a code snippet that displays an example of how your addition can be used. -4. The title of your pull request should be a summary of its contribution. -5. If your pull request addresses an issue, please mention the issue number in -the pull request description to make sure they are linked (and people -consulting the issue know you are working on it); -6. To indicate a work in progress please prefix the title with `[WIP]`. These -are useful to avoid duplicated work, and to differentiate it from PRs ready -to be merged; -7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue). -8. Make sure existing tests pass; -9. Add high-coverage tests. No quality testing = no merge. -- If you are adding new `@slow` tests, make sure they pass using -`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`. -CircleCI does not run the slow tests, but GitHub actions does every night! -10. All public methods must have informative docstrings that work nicely with markdown. See `[pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py)` for an example. -11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. 
We prefer to leverage a hf.co hosted `dataset` like -[`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files. -If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images -to this dataset. - -## How to open a PR - -Before writing code, we strongly advise you to search through the existing PRs or -issues to make sure that nobody is already working on the same thing. If you are -unsure, it is always a good idea to open an issue to get some feedback. - -You will need basic `git` proficiency to be able to contribute to -🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest -manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro -Git](https://git-scm.com/book/en/v2) is a very good reference. - -Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L244)): - -1. Fork the [repository](https://github.com/huggingface/diffusers) by -clicking on the 'Fork' button on the repository's page. This creates a copy of the code -under your GitHub user account. - -2. Clone your fork to your local disk, and add the base repository as a remote: - - ```bash - $ git clone git@github.com:/diffusers.git - $ cd diffusers - $ git remote add upstream https://github.com/huggingface/diffusers.git - ``` - -3. Create a new branch to hold your development changes: - - ```bash - $ git checkout -b a-descriptive-name-for-my-changes - ``` - -**Do not** work on the `main` branch. - -4. Set up a development environment by running the following command in a virtual environment: - - ```bash - $ pip install -e ".[dev]" - ``` - -If you have already cloned the repo, you might need to `git pull` to get the most recent changes in the -library. - -5. Develop the features on your branch. - -As you work on the features, you should make sure that the test suite -passes. You should run the tests impacted by your changes like this: - - ```bash - $ pytest tests/.py - ``` - -Before you run the tests, please make sure you install the dependencies required for testing. You can do so -with this command: - - ```bash - $ pip install -e ".[test]" - ``` - -You can run the full test suite with the following command, but it takes -a beefy machine to produce a result in a decent amount of time now that -Diffusers has grown a lot. Here is the command for it: - - ```bash - $ make test - ``` - -🧨 Diffusers relies on `black` and `isort` to format its source code -consistently. After you make changes, apply automatic style corrections and code verifications -that can't be automated in one go with: - - ```bash - $ make style - ``` - -🧨 Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality -control runs in CI, however, you can also run the same checks with: - - ```bash - $ make quality - ``` - -Once you're happy with your changes, add changed files using `git add` and -make a commit with `git commit` to record your changes locally: - - ```bash - $ git add modified_file.py - $ git commit - ``` - -It is a good idea to sync your copy of the code with the original -repository regularly. This way you can quickly account for changes: - - ```bash - $ git pull upstream main - ``` - -Push the changes to your account using: - - ```bash - $ git push -u origin a-descriptive-name-for-my-changes - ``` - -6. 
Once you are satisfied, go to the -webpage of your fork on GitHub. Click on 'Pull request' to send your changes -to the project maintainers for review. - -7. It's ok if maintainers ask you for changes. It happens to core contributors -too! So everyone can see the changes in the Pull request, work in your local -branch and push the changes to your fork. They will automatically appear in -the pull request. - -### Tests - -An extensive test suite is included to test the library behavior and several examples. Library tests can be found in -the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests). - -We like `pytest` and `pytest-xdist` because it's faster. From the root of the -repository, here's how to run tests with `pytest` for the library: - -```bash -$ python -m pytest -n auto --dist=loadfile -s -v ./tests/ -``` - -In fact, that's how `make test` is implemented! - -You can specify a smaller set of tests in order to test only the feature -you're working on. - -By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to -`yes` to run them. This will download many gigabytes of models — make sure you -have enough disk space and a good Internet connection, or a lot of patience! - -```bash -$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ -``` - -`unittest` is fully supported, here's how to run tests with it: - -```bash -$ python -m unittest discover -s tests -t . -v -$ python -m unittest discover -s examples -t examples -v -``` - -### Syncing forked main with upstream (HuggingFace) main - -To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, -when syncing the main branch of a forked repository, please, follow these steps: -1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. -2. If a PR is absolutely necessary, use the following steps after checking out your branch: -``` -$ git checkout -b your-branch-for-syncing -$ git pull --squash --no-commit upstream main -$ git commit -m '' -$ git push --set-upstream origin your-branch-for-syncing -``` - -### Style guide - -For documentation strings, 🧨 Diffusers follows the [google style](https://google.github.io/styleguide/pyguide.html). diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/conditional_image_generation.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/conditional_image_generation.md deleted file mode 100644 index 5525ac990ca457bc5040c313e0a3d9aad0abdc46..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/conditional_image_generation.md +++ /dev/null @@ -1,60 +0,0 @@ - - -# 조건부 이미지 생성 - -[[open-in-colab]] - -조건부 이미지 생성을 사용하면 텍스트 프롬프트에서 이미지를 생성할 수 있습니다. 텍스트는 임베딩으로 변환되며, 임베딩은 노이즈에서 이미지를 생성하도록 모델을 조건화하는 데 사용됩니다. - -[`DiffusionPipeline`]은 추론을 위해 사전 훈련된 diffusion 시스템을 사용하는 가장 쉬운 방법입니다. - -먼저 [`DiffusionPipeline`]의 인스턴스를 생성하고 다운로드할 파이프라인 [체크포인트](https://huggingface.co/models?library=diffusers&sort=downloads)를 지정합니다. 
- -이 가이드에서는 [잠재 Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)과 함께 텍스트-이미지 생성에 [`DiffusionPipeline`]을 사용합니다: - -```python ->>> from diffusers import DiffusionPipeline - ->>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") -``` - -[`DiffusionPipeline`]은 모든 모델링, 토큰화, 스케줄링 구성 요소를 다운로드하고 캐시합니다. -이 모델은 약 14억 개의 파라미터로 구성되어 있기 때문에 GPU에서 실행할 것을 강력히 권장합니다. -PyTorch에서와 마찬가지로 생성기 객체를 GPU로 이동할 수 있습니다: - -```python ->>> generator.to("cuda") -``` - -이제 텍스트 프롬프트에서 `생성기`를 사용할 수 있습니다: - -```python ->>> image = generator("An image of a squirrel in Picasso style").images[0] -``` - -출력값은 기본적으로 [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) 객체로 래핑됩니다. - -호출하여 이미지를 저장할 수 있습니다: - -```python ->>> image.save("image_of_squirrel_painting.png") -``` - -아래 스페이스를 사용해보고 안내 배율 매개변수를 자유롭게 조정하여 이미지 품질에 어떤 영향을 미치는지 확인해 보세요! - - \ No newline at end of file diff --git a/spaces/pharma-IA/PharmaWise_Prospecto_Generico_Vortioxetina_V2C/README.md b/spaces/pharma-IA/PharmaWise_Prospecto_Generico_Vortioxetina_V2C/README.md deleted file mode 100644 index 1635a412f800b8827f1fc26a8107bca8637cc39c..0000000000000000000000000000000000000000 --- a/spaces/pharma-IA/PharmaWise_Prospecto_Generico_Vortioxetina_V2C/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PharmaWise Demo Vortioxetina v2C -emoji: 🚀 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: artistic-2.0 -duplicated_from: pharma-IA/PharmaWise_Prospecto_Megalabs_V2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/phenomenon1981/MagicPrompt-Stable-Diffusion/app.py b/spaces/phenomenon1981/MagicPrompt-Stable-Diffusion/app.py deleted file mode 100644 index 8644606f60321da5256c0cf7440c0aa06fea5da1..0000000000000000000000000000000000000000 --- a/spaces/phenomenon1981/MagicPrompt-Stable-Diffusion/app.py +++ /dev/null @@ -1,96 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re -import os -import sys - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') - -def generate(starting_text): - with open("ideas.txt", "r") as f: - line = f.readlines() - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").capitalize() - starting_text: str = re.sub(r"\.", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 80)), num_return_sequences=1) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp) - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - -with grad.Blocks(css='style.css') as demo: - grad.HTML( - """ -
-                The Stable Diffusion Prompt Generator - because your text needs a little more visual spice.
-                Ready to see some magic happen? Simply type in your basic idea. Feeling lazy? No problem, just hit the "Magic Prompt" button and it will randomly pull from a list of thousands of ideas for you.
-                ❤️ Press the Like Button if you enjoy my space! ❤️
    - """ - ) - with grad.Column(elem_id="col-container"): - with grad.Row(variant="compact"): - txt = grad.Textbox( - label="Initial Text", - show_label=False, - max_lines=1, - placeholder="Enter a basic idea", - ).style( - container=False, - ) - run = grad.Button("✨ Magic Prompt ✨").style(full_width=False) - - - - with grad.Row(variant="compact"): - out = grad.Textbox( - label="Generated Text", - show_label=False, - lines=5, - ).style( - container=False, - ) - - run.click(generate, inputs=[txt], outputs=[out]) - - - - with grad.Row(): - grad.HTML( - """ - -
-                Transform your boring ideas into creative masterpieces with just one click! Enter a spark of inspiration and let the "Magic Prompt" button work its magic.
    - """ -) - - - fn=generate, - run=generate, - inputs=txt, - outputs=out - demo.launch(enable_queue=False, inline=True) \ No newline at end of file diff --git a/spaces/piuba-bigdata/discurso-de-odio/README.md b/spaces/piuba-bigdata/discurso-de-odio/README.md deleted file mode 100644 index 110f42b2cfe2f79dcccadf2793f4623b5fa319e1..0000000000000000000000000000000000000000 --- a/spaces/piuba-bigdata/discurso-de-odio/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Contextual Hate Speech -emoji: 📉 -colorFrom: red -colorTo: yellow -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - diff --git a/spaces/pix2pix-zero-library/pix2pix-zero-demo/utils/generate_synthetic.py b/spaces/pix2pix-zero-library/pix2pix-zero-demo/utils/generate_synthetic.py deleted file mode 100644 index 51228de6264157415362669660521b33db31af7d..0000000000000000000000000000000000000000 --- a/spaces/pix2pix-zero-library/pix2pix-zero-demo/utils/generate_synthetic.py +++ /dev/null @@ -1,316 +0,0 @@ -import os, sys, time, re, pdb -import torch, torchvision -import numpy -from PIL import Image -import hashlib -from tqdm import tqdm -import openai -from utils.direction_utils import * - -p = "submodules/pix2pix-zero/src/utils" -if p not in sys.path: - sys.path.append(p) -from diffusers import DDIMScheduler -from edit_directions import construct_direction -from edit_pipeline import EditingPipeline -from ddim_inv import DDIMInversion -from scheduler import DDIMInverseScheduler -from lavis.models import load_model_and_preprocess -from transformers import T5Tokenizer, AutoTokenizer, T5ForConditionalGeneration, BloomForCausalLM - - - -def load_sentence_embeddings(l_sentences, tokenizer, text_encoder, device="cuda"): - with torch.no_grad(): - l_embeddings = [] - for sent in tqdm(l_sentences): - text_inputs = tokenizer( - sent, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] - l_embeddings.append(prompt_embeds) - return torch.concatenate(l_embeddings, dim=0).mean(dim=0).unsqueeze(0) - - - -def launch_generate_sample(prompt, seed, negative_scale, num_ddim): - os.makedirs("tmp", exist_ok=True) - # do the editing - edit_pipe = EditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda") - edit_pipe.scheduler = DDIMScheduler.from_config(edit_pipe.scheduler.config) - - # set the random seed and sample the input noise map - torch.cuda.manual_seed(int(seed)) - z = torch.randn((1,4,64,64), device="cuda") - - z_hashname = hashlib.sha256(z.cpu().numpy().tobytes()).hexdigest() - z_inv_fname = f"tmp/{z_hashname}_ddim_{num_ddim}_inv.pt" - torch.save(z, z_inv_fname) - - rec_pil = edit_pipe(prompt, - num_inference_steps=num_ddim, x_in=z, - only_sample=True, # this flag will only generate the sampled image, not the edited image - guidance_scale=negative_scale, - negative_prompt="" # use the empty string for the negative prompt - ) - # print(rec_pil) - del edit_pipe - torch.cuda.empty_cache() - - return rec_pil[0], z_inv_fname - - - -def clean_l_sentences(ls): - s = [re.sub('\d', '', x) for x in ls] - s = [x.replace(".","").replace("-","").replace(")","").strip() for x in s] - return s - - - -def gpt3_compute_word2sentences(task_type, word, num=100): - l_sentences = [] - if task_type=="object": - 
template_prompt = f"Provide many captions for images containing {word}." - elif task_type=="style": - template_prompt = f"Provide many captions for images that are in the {word} style." - while True: - ret = openai.Completion.create( - model="text-davinci-002", - prompt=template_prompt, - max_tokens=1000, - temperature=1.0) - raw_return = ret.choices[0].text - for line in raw_return.split("\n"): - line = line.strip() - if len(line)>10: - skip=False - for subword in word.split(" "): - if subword not in line: skip=True - if not skip: l_sentences.append(line) - else: - l_sentences.append(line+f", {word}") - time.sleep(0.05) - print(len(l_sentences)) - if len(l_sentences)>=num: - break - l_sentences = clean_l_sentences(l_sentences) - return l_sentences - - -def flant5xl_compute_word2sentences(word, num=100): - text_input = f"Provide a caption for images containing a {word}. The captions should be in English and should be no longer than 150 characters." - - l_sentences = [] - tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") - model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) - input_ids = tokenizer(text_input, return_tensors="pt").input_ids.to("cuda") - input_length = input_ids.shape[1] - while True: - outputs = model.generate(input_ids,temperature=0.9, num_return_sequences=16, do_sample=True, max_length=128) - output = tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True) - for line in output: - line = line.strip() - skip=False - for subword in word.split(" "): - if subword not in line: skip=True - if not skip: l_sentences.append(line) - else: l_sentences.append(line+f", {word}") - print(len(l_sentences)) - if len(l_sentences)>=num: - break - l_sentences = clean_l_sentences(l_sentences) - - del model - del tokenizer - torch.cuda.empty_cache() - - return l_sentences - -def bloomz_compute_sentences(word, num=100): - l_sentences = [] - tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-7b1") - model = BloomForCausalLM.from_pretrained("bigscience/bloomz-7b1", device_map="auto", torch_dtype=torch.float16) - input_text = f"Provide a caption for images containing a {word}. The captions should be in English and should be no longer than 150 characters. 
Caption:" - input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") - input_length = input_ids.shape[1] - t = 0.95 - eta = 1e-5 - min_length = 15 - - while True: - try: - outputs = model.generate(input_ids,temperature=t, num_return_sequences=16, do_sample=True, max_length=128, min_length=min_length, eta_cutoff=eta) - output = tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True) - except: - continue - for line in output: - line = line.strip() - skip=False - for subword in word.split(" "): - if subword not in line: skip=True - if not skip: l_sentences.append(line) - else: l_sentences.append(line+f", {word}") - print(len(l_sentences)) - if len(l_sentences)>=num: - break - l_sentences = clean_l_sentences(l_sentences) - del model - del tokenizer - torch.cuda.empty_cache() - - return l_sentences - - - -def make_custom_dir(description, sent_type, api_key, org_key, l_custom_sentences): - if sent_type=="fixed-template": - l_sentences = generate_image_prompts_with_templates(description) - elif "GPT3" in sent_type: - import openai - openai.organization = org_key - openai.api_key = api_key - _=openai.Model.retrieve("text-davinci-002") - l_sentences = gpt3_compute_word2sentences("object", description, num=1000) - - elif "flan-t5-xl" in sent_type: - l_sentences = flant5xl_compute_word2sentences(description, num=1000) - # save the sentences to file - with open(f"tmp/flant5xl_sentences_{description}.txt", "w") as f: - for line in l_sentences: - f.write(line+"\n") - elif "BLOOMZ-7B" in sent_type: - l_sentences = bloomz_compute_sentences(description, num=1000) - # save the sentences to file - with open(f"tmp/bloomz_sentences_{description}.txt", "w") as f: - for line in l_sentences: - f.write(line+"\n") - - elif sent_type=="custom sentences": - l_sentences = l_custom_sentences.split("\n") - print(f"length of new sentence is {len(l_sentences)}") - - pipe = EditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda") - emb = load_sentence_embeddings(l_sentences, pipe.tokenizer, pipe.text_encoder, device="cuda") - del pipe - torch.cuda.empty_cache() - return emb - - -def launch_main(img_in_real, img_in_synth, src, src_custom, dest, dest_custom, num_ddim, xa_guidance, edit_mul, fpath_z_gen, gen_prompt, sent_type_src, sent_type_dest, api_key, org_key, custom_sentences_src, custom_sentences_dest): - d_name2desc = get_all_directions_names() - d_desc2name = {v:k for k,v in d_name2desc.items()} - os.makedirs("tmp", exist_ok=True) - - # generate custom direction first - if src=="make your own!": - outf_name = f"tmp/template_emb_{src_custom}_{sent_type_src}.pt" - if not os.path.exists(outf_name): - src_emb = make_custom_dir(src_custom, sent_type_src, api_key, org_key, custom_sentences_src) - torch.save(src_emb, outf_name) - else: - src_emb = torch.load(outf_name) - else: - src_emb = get_emb(d_desc2name[src]) - - if dest=="make your own!": - outf_name = f"tmp/template_emb_{dest_custom}_{sent_type_dest}.pt" - if not os.path.exists(outf_name): - dest_emb = make_custom_dir(dest_custom, sent_type_dest, api_key, org_key, custom_sentences_dest) - torch.save(dest_emb, outf_name) - else: - dest_emb = torch.load(outf_name) - else: - dest_emb = get_emb(d_desc2name[dest]) - text_dir = (dest_emb.cuda() - src_emb.cuda())*edit_mul - - - - if img_in_real is not None and img_in_synth is None: - print("using real image") - # resize the image so that the longer side is 512 - width, height = img_in_real.size - if width > height: scale_factor = 512 / width 
- else: scale_factor = 512 / height - new_size = (int(width * scale_factor), int(height * scale_factor)) - img_in_real = img_in_real.resize(new_size, Image.Resampling.LANCZOS) - hash = hashlib.sha256(img_in_real.tobytes()).hexdigest() - # print(hash) - inv_fname = f"tmp/{hash}_ddim_{num_ddim}_inv.pt" - caption_fname = f"tmp/{hash}_caption.txt" - - # make the caption if it hasn't been made before - if not os.path.exists(caption_fname): - # BLIP - model_blip, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=torch.device("cuda")) - _image = vis_processors["eval"](img_in_real).unsqueeze(0).cuda() - prompt_str = model_blip.generate({"image": _image})[0] - del model_blip - torch.cuda.empty_cache() - with open(caption_fname, "w") as f: - f.write(prompt_str) - else: - prompt_str = open(caption_fname, "r").read().strip() - print(f"CAPTION: {prompt_str}") - - # do the inversion if it hasn't been done before - if not os.path.exists(inv_fname): - # inversion pipeline - pipe_inv = DDIMInversion.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda") - pipe_inv.scheduler = DDIMInverseScheduler.from_config(pipe_inv.scheduler.config) - x_inv, x_inv_image, x_dec_img = pipe_inv( prompt_str, - guidance_scale=1, num_inversion_steps=num_ddim, - img=img_in_real, torch_dtype=torch.float32 ) - x_inv = x_inv.detach() - torch.save(x_inv, inv_fname) - del pipe_inv - torch.cuda.empty_cache() - else: - x_inv = torch.load(inv_fname) - - # do the editing - edit_pipe = EditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda") - edit_pipe.scheduler = DDIMScheduler.from_config(edit_pipe.scheduler.config) - - _, edit_pil = edit_pipe(prompt_str, - num_inference_steps=num_ddim, - x_in=x_inv, - edit_dir=text_dir, - guidance_amount=xa_guidance, - guidance_scale=5.0, - negative_prompt=prompt_str # use the unedited prompt for the negative prompt - ) - del edit_pipe - torch.cuda.empty_cache() - return edit_pil[0] - - - elif img_in_real is None and img_in_synth is not None: - print("using synthetic image") - x_inv = torch.load(fpath_z_gen) - pipe = EditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32).to("cuda") - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - rec_pil, edit_pil = pipe(gen_prompt, - num_inference_steps=num_ddim, - x_in=x_inv, - edit_dir=text_dir, - guidance_amount=xa_guidance, - guidance_scale=5, - negative_prompt="" # use the empty string for the negative prompt - ) - del pipe - torch.cuda.empty_cache() - return edit_pil[0] - - else: - raise ValueError(f"Invalid image type found: {img_in_real} {img_in_synth}") - - - -if __name__=="__main__": - print(flant5xl_compute_word2sentences("cat wearing sunglasses", num=100)) \ No newline at end of file diff --git a/spaces/pixiou/bingo/src/components/ui/codeblock.tsx b/spaces/pixiou/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } 
from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
    -
    - {language} -
    - - -
    -
    - - {value} - -
    - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/prerna9811/Chord/portaudio/test/patest_underflow.c b/spaces/prerna9811/Chord/portaudio/test/patest_underflow.c deleted file mode 100644 index 96216a691712e24b97e63ca4c227a315f189a9aa..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/test/patest_underflow.c +++ /dev/null @@ -1,162 +0,0 @@ -/** @file patest_underflow.c - @ingroup test_src - @brief Simulate an output buffer underflow condition. - Tests whether the stream can be stopped when underflowing buffers. - @author Ross Bencina - @author Phil Burk -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" - -#define NUM_SECONDS (20) -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (2048) -#define MSEC_PER_BUFFER ( (FRAMES_PER_BUFFER * 1000) / SAMPLE_RATE ) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (200) -typedef struct -{ - float sine[TABLE_SIZE]; - int left_phase; - int right_phase; - int sleepTime; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned long i; - int finished = 0; - (void) inputBuffer; /* Prevent unused variable warnings. 
*/ - for( i=0; isine[data->left_phase]; /* left */ - *out++ = data->sine[data->right_phase]; /* right */ - data->left_phase += 1; - if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE; - data->right_phase += 3; /* higher pitch so we can distinguish left and right. */ - if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE; - } - - /* Cause underflow to occur. */ - if( data->sleepTime > 0 ) Pa_Sleep( data->sleepTime ); - data->sleepTime += 1; - - return finished; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStreamParameters outputParameters; - PaStream *stream; - PaError err; - paTestData data; - int i; - printf("PortAudio Test: output sine wave. SR = %d, BufSize = %d\n", SAMPLE_RATE, FRAMES_PER_BUFFER); - /* initialise sinusoidal wavetable */ - for( i=0; idefaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &data ); - if( err != paNoError ) goto error; - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - while( data.sleepTime < (2 * MSEC_PER_BUFFER) ) - { - printf("SleepTime = %d\n", data.sleepTime ); - Pa_Sleep( data.sleepTime ); - } - - printf("Try to stop stream.\n"); - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - Pa_Terminate(); - printf("Test finished.\n"); - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/PsdImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/PsdImagePlugin.py deleted file mode 100644 index 2f019bb8c3477a332fc255e03208d1bd3b064d9d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/PsdImagePlugin.py +++ /dev/null @@ -1,303 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# Adobe PSD 2.5/3.0 file handling -# -# History: -# 1995-09-01 fl Created -# 1997-01-03 fl Read most PSD images -# 1997-01-18 fl Fixed P and CMYK support -# 2001-10-21 fl Added seek/tell support (for layers) -# -# Copyright (c) 1997-2001 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io - -from . import Image, ImageFile, ImagePalette -from ._binary import i8 -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import si16be as si16 - -MODES = { - # (photoshop mode, bits) -> (pil mode, required channels) - (0, 1): ("1", 1), - (0, 8): ("L", 1), - (1, 8): ("L", 1), - (2, 8): ("P", 1), - (3, 8): ("RGB", 3), - (4, 8): ("CMYK", 4), - (7, 8): ("L", 1), # FIXME: multilayer - (8, 8): ("L", 1), # duotone - (9, 8): ("LAB", 3), -} - - -# --------------------------------------------------------------------. -# read PSD images - - -def _accept(prefix): - return prefix[:4] == b"8BPS" - - -## -# Image plugin for Photoshop images. 
- - -class PsdImageFile(ImageFile.ImageFile): - format = "PSD" - format_description = "Adobe Photoshop" - _close_exclusive_fp_after_loading = False - - def _open(self): - read = self.fp.read - - # - # header - - s = read(26) - if not _accept(s) or i16(s, 4) != 1: - msg = "not a PSD file" - raise SyntaxError(msg) - - psd_bits = i16(s, 22) - psd_channels = i16(s, 12) - psd_mode = i16(s, 24) - - mode, channels = MODES[(psd_mode, psd_bits)] - - if channels > psd_channels: - msg = "not enough channels" - raise OSError(msg) - if mode == "RGB" and psd_channels == 4: - mode = "RGBA" - channels = 4 - - self._mode = mode - self._size = i32(s, 18), i32(s, 14) - - # - # color mode data - - size = i32(read(4)) - if size: - data = read(size) - if mode == "P" and size == 768: - self.palette = ImagePalette.raw("RGB;L", data) - - # - # image resources - - self.resources = [] - - size = i32(read(4)) - if size: - # load resources - end = self.fp.tell() + size - while self.fp.tell() < end: - read(4) # signature - id = i16(read(2)) - name = read(i8(read(1))) - if not (len(name) & 1): - read(1) # padding - data = read(i32(read(4))) - if len(data) & 1: - read(1) # padding - self.resources.append((id, name, data)) - if id == 1039: # ICC profile - self.info["icc_profile"] = data - - # - # layer and mask information - - self.layers = [] - - size = i32(read(4)) - if size: - end = self.fp.tell() + size - size = i32(read(4)) - if size: - _layer_data = io.BytesIO(ImageFile._safe_read(self.fp, size)) - self.layers = _layerinfo(_layer_data, size) - self.fp.seek(end) - self.n_frames = len(self.layers) - self.is_animated = self.n_frames > 1 - - # - # image descriptor - - self.tile = _maketile(self.fp, mode, (0, 0) + self.size, channels) - - # keep the file open - self._fp = self.fp - self.frame = 1 - self._min_frame = 1 - - def seek(self, layer): - if not self._seek_check(layer): - return - - # seek to given layer (1..max) - try: - name, mode, bbox, tile = self.layers[layer - 1] - self._mode = mode - self.tile = tile - self.frame = layer - self.fp = self._fp - return name, bbox - except IndexError as e: - msg = "no such layer" - raise EOFError(msg) from e - - def tell(self): - # return layer number (0=image, 1..max=layers) - return self.frame - - -def _layerinfo(fp, ct_bytes): - # read layerinfo block - layers = [] - - def read(size): - return ImageFile._safe_read(fp, size) - - ct = si16(read(2)) - - # sanity check - if ct_bytes < (abs(ct) * 20): - msg = "Layer block too short for number of layers requested" - raise SyntaxError(msg) - - for _ in range(abs(ct)): - # bounding box - y0 = i32(read(4)) - x0 = i32(read(4)) - y1 = i32(read(4)) - x1 = i32(read(4)) - - # image info - mode = [] - ct_types = i16(read(2)) - types = list(range(ct_types)) - if len(types) > 4: - continue - - for _ in types: - type = i16(read(2)) - - if type == 65535: - m = "A" - else: - m = "RGBA"[type] - - mode.append(m) - read(4) # size - - # figure out the image mode - mode.sort() - if mode == ["R"]: - mode = "L" - elif mode == ["B", "G", "R"]: - mode = "RGB" - elif mode == ["A", "B", "G", "R"]: - mode = "RGBA" - else: - mode = None # unknown - - # skip over blend flags and extra information - read(12) # filler - name = "" - size = i32(read(4)) # length of the extra data field - if size: - data_end = fp.tell() + size - - length = i32(read(4)) - if length: - fp.seek(length - 16, io.SEEK_CUR) - - length = i32(read(4)) - if length: - fp.seek(length, io.SEEK_CUR) - - length = i8(read(1)) - if length: - # Don't know the proper encoding, - # Latin-1 should 
be a good guess - name = read(length).decode("latin-1", "replace") - - fp.seek(data_end) - layers.append((name, mode, (x0, y0, x1, y1))) - - # get tiles - for i, (name, mode, bbox) in enumerate(layers): - tile = [] - for m in mode: - t = _maketile(fp, m, bbox, 1) - if t: - tile.extend(t) - layers[i] = name, mode, bbox, tile - - return layers - - -def _maketile(file, mode, bbox, channels): - tile = None - read = file.read - - compression = i16(read(2)) - - xsize = bbox[2] - bbox[0] - ysize = bbox[3] - bbox[1] - - offset = file.tell() - - if compression == 0: - # - # raw compression - tile = [] - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - tile.append(("raw", bbox, offset, layer)) - offset = offset + xsize * ysize - - elif compression == 1: - # - # packbits compression - i = 0 - tile = [] - bytecount = read(channels * ysize * 2) - offset = file.tell() - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - tile.append(("packbits", bbox, offset, layer)) - for y in range(ysize): - offset = offset + i16(bytecount, i) - i += 2 - - file.seek(offset) - - if offset & 1: - read(1) # padding - - return tile - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PsdImageFile.format, PsdImageFile, _accept) - -Image.register_extension(PsdImageFile.format, ".psd") - -Image.register_mime(PsdImageFile.format, "image/vnd.adobe.photoshop") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tootils/src/index.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tootils/src/index.ts deleted file mode 100644 index 2f26e811efb5cb54edd8d262f56dc8fa8c5dbf59..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tootils/src/index.ts +++ /dev/null @@ -1,59 +0,0 @@ -import { test as base } from "@playwright/test"; -import { basename } from "path"; -import { spy } from "tinyspy"; - -import type { SvelteComponent } from "svelte"; -import type { SpyFn } from "tinyspy"; - -export function get_text(el: T): string { - return el.innerText.trim(); -} - -export function wait(n: number): Promise { - return new Promise((r) => setTimeout(r, n)); -} - -export const test = base.extend<{ setup: void }>({ - setup: [ - async ({ page }, use, testInfo): Promise => { - const port = process.env.GRADIO_E2E_TEST_PORT; - const { file } = testInfo; - const test_name = basename(file, ".spec.ts"); - - await page.goto(`localhost:${port}/${test_name}`); - - await use(); - }, - { auto: true } - ] -}); - -export async function wait_for_event( - component: SvelteComponent, - event: string -): Promise { - const mock = spy(); - return new Promise((res) => { - component.$on(event, () => { - mock(); - res(mock); - }); - }); -} - -export interface ActionReturn< - Parameter = never, - Attributes extends Record = Record -> { - update?: [Parameter] extends [never] ? never : (parameter: Parameter) => void; - destroy?: () => void; - /** - * ### DO NOT USE THIS - * This exists solely for type-checking and has no effect at runtime. - * Set this through the `Attributes` generic instead. 
- */ - $$_attributes?: Attributes; -} - -export { expect } from "@playwright/test"; -export * from "./render"; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ecf93e4d.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ecf93e4d.js deleted file mode 100644 index ab83d9baddc093b65b4f1c7e9993b7a43e6e717f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-ecf93e4d.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,attr:u,detach:r,element:d,init:o,insert:v,noop:_,safe_not_equal:g,toggle_class:s}=window.__gradio__svelte__internal;function y(n){let e;return{c(){e=d("div"),u(e,"class","prose svelte-zvfedn"),s(e,"table",n[1]==="table"),s(e,"gallery",n[1]==="gallery"),s(e,"selected",n[2])},m(l,t){v(l,e,t),e.innerHTML=n[0]},p(l,[t]){t&1&&(e.innerHTML=l[0]),t&2&&s(e,"table",l[1]==="table"),t&2&&s(e,"gallery",l[1]==="gallery"),t&4&&s(e,"selected",l[2])},i:_,o:_,d(l){l&&r(e)}}}function m(n,e,l){let{value:t}=e,{type:i}=e,{selected:f=!1}=e;return n.$$set=a=>{"value"in a&&l(0,t=a.value),"type"in a&&l(1,i=a.type),"selected"in a&&l(2,f=a.selected)},[t,i,f]}class b extends c{constructor(e){super(),o(this,e,m,y,g,{value:0,type:1,selected:2})}}export{b as default}; -//# sourceMappingURL=Example-ecf93e4d.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jinja2/meta.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jinja2/meta.py deleted file mode 100644 index 0057d6eabade5e964e6ef0e3ac8ed2dd67494b03..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jinja2/meta.py +++ /dev/null @@ -1,111 +0,0 @@ -"""Functions that expose information about templates that might be -interesting for introspection. -""" -import typing as t - -from . import nodes -from .compiler import CodeGenerator -from .compiler import Frame - -if t.TYPE_CHECKING: - from .environment import Environment - - -class TrackingCodeGenerator(CodeGenerator): - """We abuse the code generator for introspection.""" - - def __init__(self, environment: "Environment") -> None: - super().__init__(environment, "", "") - self.undeclared_identifiers: t.Set[str] = set() - - def write(self, x: str) -> None: - """Don't write.""" - - def enter_frame(self, frame: Frame) -> None: - """Remember all undeclared identifiers.""" - super().enter_frame(frame) - - for _, (action, param) in frame.symbols.loads.items(): - if action == "resolve" and param not in self.environment.globals: - self.undeclared_identifiers.add(param) - - -def find_undeclared_variables(ast: nodes.Template) -> t.Set[str]: - """Returns a set of all variables in the AST that will be looked up from - the context at runtime. Because at compile time it's not known which - variables will be used depending on the path the execution takes at - runtime, all variables are returned. - - >>> from jinja2 import Environment, meta - >>> env = Environment() - >>> ast = env.parse('{% set foo = 42 %}{{ bar + foo }}') - >>> meta.find_undeclared_variables(ast) == {'bar'} - True - - .. admonition:: Implementation - - Internally the code generator is used for finding undeclared variables. - This is good to know because the code generator might raise a - :exc:`TemplateAssertionError` during compilation and as a matter of - fact this function can currently raise that exception as well. 
- """ - codegen = TrackingCodeGenerator(ast.environment) # type: ignore - codegen.visit(ast) - return codegen.undeclared_identifiers - - -_ref_types = (nodes.Extends, nodes.FromImport, nodes.Import, nodes.Include) -_RefType = t.Union[nodes.Extends, nodes.FromImport, nodes.Import, nodes.Include] - - -def find_referenced_templates(ast: nodes.Template) -> t.Iterator[t.Optional[str]]: - """Finds all the referenced templates from the AST. This will return an - iterator over all the hardcoded template extensions, inclusions and - imports. If dynamic inheritance or inclusion is used, `None` will be - yielded. - - >>> from jinja2 import Environment, meta - >>> env = Environment() - >>> ast = env.parse('{% extends "layout.html" %}{% include helper %}') - >>> list(meta.find_referenced_templates(ast)) - ['layout.html', None] - - This function is useful for dependency tracking. For example if you want - to rebuild parts of the website after a layout template has changed. - """ - template_name: t.Any - - for node in ast.find_all(_ref_types): - template: nodes.Expr = node.template # type: ignore - - if not isinstance(template, nodes.Const): - # a tuple with some non consts in there - if isinstance(template, (nodes.Tuple, nodes.List)): - for template_name in template.items: - # something const, only yield the strings and ignore - # non-string consts that really just make no sense - if isinstance(template_name, nodes.Const): - if isinstance(template_name.value, str): - yield template_name.value - # something dynamic in there - else: - yield None - # something dynamic we don't know about here - else: - yield None - continue - # constant is a basestring, direct template name - if isinstance(template.value, str): - yield template.value - # a tuple or list (latter *should* not happen) made of consts, - # yield the consts that are strings. 
We could warn here for - # non string values - elif isinstance(node, nodes.Include) and isinstance( - template.value, (tuple, list) - ): - for template_name in template.value: - if isinstance(template_name, str): - yield template_name - # something else we don't care about, we could warn here - else: - yield None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_validate.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_validate.py deleted file mode 100644 index e99e0a686384883d570feef949597d08da7e8ff9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_validate.py +++ /dev/null @@ -1,41 +0,0 @@ -import pytest - -from pandas.core.frame import DataFrame - - -@pytest.fixture -def dataframe(): - return DataFrame({"a": [1, 2], "b": [3, 4]}) - - -class TestDataFrameValidate: - """Tests for error handling related to data types of method arguments.""" - - @pytest.mark.parametrize( - "func", - [ - "query", - "eval", - "set_index", - "reset_index", - "dropna", - "drop_duplicates", - "sort_values", - ], - ) - @pytest.mark.parametrize("inplace", [1, "True", [1, 2, 3], 5.0]) - def test_validate_bool_args(self, dataframe, func, inplace): - msg = 'For argument "inplace" expected type bool' - kwargs = {"inplace": inplace} - - if func == "query": - kwargs["expr"] = "a > b" - elif func == "eval": - kwargs["expr"] = "a + b" - elif func == "set_index": - kwargs["keys"] = ["a"] - elif func == "sort_values": - kwargs["by"] = ["a"] - - with pytest.raises(ValueError, match=msg): - getattr(dataframe, func)(**kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_base.py deleted file mode 100644 index e0155a13481ac47221d3a0d48805bc57f6fc8c5b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/interval/test_base.py +++ /dev/null @@ -1,56 +0,0 @@ -import numpy as np -import pytest - -from pandas import IntervalIndex -import pandas._testing as tm - - -class TestInterval: - """ - Tests specific to the shared common index tests; unrelated tests should be placed - in test_interval.py or the specific test file (e.g. 
test_astype.py) - """ - - @pytest.fixture - def simple_index(self) -> IntervalIndex: - return IntervalIndex.from_breaks(range(11), closed="right") - - @pytest.fixture - def index(self): - return tm.makeIntervalIndex(10) - - def test_take(self, closed): - index = IntervalIndex.from_breaks(range(11), closed=closed) - - result = index.take(range(10)) - tm.assert_index_equal(result, index) - - result = index.take([0, 0, 1]) - expected = IntervalIndex.from_arrays([0, 0, 1], [1, 1, 2], closed=closed) - tm.assert_index_equal(result, expected) - - def test_where(self, simple_index, listlike_box): - klass = listlike_box - - idx = simple_index - cond = [True] * len(idx) - expected = idx - result = expected.where(klass(cond)) - tm.assert_index_equal(result, expected) - - cond = [False] + [True] * len(idx[1:]) - expected = IntervalIndex([np.nan] + idx[1:].tolist()) - result = idx.where(klass(cond)) - tm.assert_index_equal(result, expected) - - def test_getitem_2d_deprecated(self, simple_index): - # GH#30588 multi-dim indexing is deprecated, but raising is also acceptable - idx = simple_index - with pytest.raises(ValueError, match="multi-dimensional indexing not allowed"): - idx[:, None] - with pytest.raises(ValueError, match="multi-dimensional indexing not allowed"): - # GH#44051 - idx[True] - with pytest.raises(ValueError, match="multi-dimensional indexing not allowed"): - # GH#44051 - idx[False] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/methods/test_astype.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/methods/test_astype.py deleted file mode 100644 index e54cd73a35f5966317b8d66b81f6a6e0609c3a62..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/methods/test_astype.py +++ /dev/null @@ -1,148 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - CategoricalIndex, - DatetimeIndex, - Index, - NaT, - Period, - PeriodIndex, - period_range, -) -import pandas._testing as tm - - -class TestPeriodIndexAsType: - @pytest.mark.parametrize("dtype", [float, "timedelta64", "timedelta64[ns]"]) - def test_astype_raises(self, dtype): - # GH#13149, GH#13209 - idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D") - msg = "Cannot cast PeriodIndex to dtype" - with pytest.raises(TypeError, match=msg): - idx.astype(dtype) - - def test_astype_conversion(self): - # GH#13149, GH#13209 - idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.nan], freq="D", name="idx") - - result = idx.astype(object) - expected = Index( - [Period("2016-05-16", freq="D")] + [Period(NaT, freq="D")] * 3, - dtype="object", - name="idx", - ) - tm.assert_index_equal(result, expected) - - result = idx.astype(np.int64) - expected = Index( - [16937] + [-9223372036854775808] * 3, dtype=np.int64, name="idx" - ) - tm.assert_index_equal(result, expected) - - result = idx.astype(str) - expected = Index([str(x) for x in idx], name="idx") - tm.assert_index_equal(result, expected) - - idx = period_range("1990", "2009", freq="A", name="idx") - result = idx.astype("i8") - tm.assert_index_equal(result, Index(idx.asi8, name="idx")) - tm.assert_numpy_array_equal(result.values, idx.asi8) - - def test_astype_uint(self): - arr = period_range("2000", periods=2, name="idx") - - with pytest.raises(TypeError, match=r"Do obj.astype\('int64'\)"): - arr.astype("uint64") - with pytest.raises(TypeError, match=r"Do obj.astype\('int64'\)"): - arr.astype("uint32") - 
- def test_astype_object(self): - idx = PeriodIndex([], freq="M") - - exp = np.array([], dtype=object) - tm.assert_numpy_array_equal(idx.astype(object).values, exp) - tm.assert_numpy_array_equal(idx._mpl_repr(), exp) - - idx = PeriodIndex(["2011-01", NaT], freq="M") - - exp = np.array([Period("2011-01", freq="M"), NaT], dtype=object) - tm.assert_numpy_array_equal(idx.astype(object).values, exp) - tm.assert_numpy_array_equal(idx._mpl_repr(), exp) - - exp = np.array([Period("2011-01-01", freq="D"), NaT], dtype=object) - idx = PeriodIndex(["2011-01-01", NaT], freq="D") - tm.assert_numpy_array_equal(idx.astype(object).values, exp) - tm.assert_numpy_array_equal(idx._mpl_repr(), exp) - - # TODO: de-duplicate this version (from test_ops) with the one above - # (from test_period) - def test_astype_object2(self): - idx = period_range(start="2013-01-01", periods=4, freq="M", name="idx") - expected_list = [ - Period("2013-01-31", freq="M"), - Period("2013-02-28", freq="M"), - Period("2013-03-31", freq="M"), - Period("2013-04-30", freq="M"), - ] - expected = Index(expected_list, dtype=object, name="idx") - result = idx.astype(object) - assert isinstance(result, Index) - assert result.dtype == object - tm.assert_index_equal(result, expected) - assert result.name == expected.name - assert idx.tolist() == expected_list - - idx = PeriodIndex( - ["2013-01-01", "2013-01-02", "NaT", "2013-01-04"], freq="D", name="idx" - ) - expected_list = [ - Period("2013-01-01", freq="D"), - Period("2013-01-02", freq="D"), - Period("NaT", freq="D"), - Period("2013-01-04", freq="D"), - ] - expected = Index(expected_list, dtype=object, name="idx") - result = idx.astype(object) - assert isinstance(result, Index) - assert result.dtype == object - tm.assert_index_equal(result, expected) - for i in [0, 1, 3]: - assert result[i] == expected[i] - assert result[2] is NaT - assert result.name == expected.name - - result_list = idx.tolist() - for i in [0, 1, 3]: - assert result_list[i] == expected_list[i] - assert result_list[2] is NaT - - def test_astype_category(self): - obj = period_range("2000", periods=2, name="idx") - result = obj.astype("category") - expected = CategoricalIndex( - [Period("2000-01-01", freq="D"), Period("2000-01-02", freq="D")], name="idx" - ) - tm.assert_index_equal(result, expected) - - result = obj._data.astype("category") - expected = expected.values - tm.assert_categorical_equal(result, expected) - - def test_astype_array_fallback(self): - obj = period_range("2000", periods=2, name="idx") - result = obj.astype(bool) - expected = Index(np.array([True, True]), name="idx") - tm.assert_index_equal(result, expected) - - result = obj._data.astype(bool) - expected = np.array([True, True]) - tm.assert_numpy_array_equal(result, expected) - - def test_period_astype_to_timestamp(self): - pi = PeriodIndex(["2011-01", "2011-02", "2011-03"], freq="M") - - exp = DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"], tz="US/Eastern") - res = pi.astype("datetime64[ns, US/Eastern]") - tm.assert_index_equal(res, exp) - assert res.freq == exp.freq diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/accessors/test_cat_accessor.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/accessors/test_cat_accessor.py deleted file mode 100644 index 2d50b0f36904a94e7e16aac426e1d098fe320c94..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/accessors/test_cat_accessor.py +++ 
/dev/null @@ -1,258 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - Categorical, - DataFrame, - Index, - Series, - Timestamp, - date_range, - period_range, - timedelta_range, -) -import pandas._testing as tm -from pandas.core.arrays.categorical import CategoricalAccessor -from pandas.core.indexes.accessors import Properties - - -class TestCatAccessor: - @pytest.mark.parametrize( - "method", - [ - lambda x: x.cat.set_categories([1, 2, 3]), - lambda x: x.cat.reorder_categories([2, 3, 1], ordered=True), - lambda x: x.cat.rename_categories([1, 2, 3]), - lambda x: x.cat.remove_unused_categories(), - lambda x: x.cat.remove_categories([2]), - lambda x: x.cat.add_categories([4]), - lambda x: x.cat.as_ordered(), - lambda x: x.cat.as_unordered(), - ], - ) - def test_getname_categorical_accessor(self, method): - # GH#17509 - ser = Series([1, 2, 3], name="A").astype("category") - expected = "A" - result = method(ser).name - assert result == expected - - def test_cat_accessor(self): - ser = Series(Categorical(["a", "b", np.nan, "a"])) - tm.assert_index_equal(ser.cat.categories, Index(["a", "b"])) - assert not ser.cat.ordered, False - - exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"]) - - res = ser.cat.set_categories(["b", "a"]) - tm.assert_categorical_equal(res.values, exp) - - ser[:] = "a" - ser = ser.cat.remove_unused_categories() - tm.assert_index_equal(ser.cat.categories, Index(["a"])) - - def test_cat_accessor_api(self): - # GH#9322 - - assert Series.cat is CategoricalAccessor - ser = Series(list("aabbcde")).astype("category") - assert isinstance(ser.cat, CategoricalAccessor) - - invalid = Series([1]) - with pytest.raises(AttributeError, match="only use .cat accessor"): - invalid.cat - assert not hasattr(invalid, "cat") - - def test_cat_accessor_no_new_attributes(self): - # https://github.com/pandas-dev/pandas/issues/10673 - cat = Series(list("aabbcde")).astype("category") - with pytest.raises(AttributeError, match="You cannot add any new attribute"): - cat.cat.xlabel = "a" - - def test_categorical_delegations(self): - # invalid accessor - msg = r"Can only use \.cat accessor with a 'category' dtype" - with pytest.raises(AttributeError, match=msg): - Series([1, 2, 3]).cat - with pytest.raises(AttributeError, match=msg): - Series([1, 2, 3]).cat() - with pytest.raises(AttributeError, match=msg): - Series(["a", "b", "c"]).cat - with pytest.raises(AttributeError, match=msg): - Series(np.arange(5.0)).cat - with pytest.raises(AttributeError, match=msg): - Series([Timestamp("20130101")]).cat - - # Series should delegate calls to '.categories', '.codes', '.ordered' - # and the methods '.set_categories()' 'drop_unused_categories()' to the - # categorical - ser = Series(Categorical(["a", "b", "c", "a"], ordered=True)) - exp_categories = Index(["a", "b", "c"]) - tm.assert_index_equal(ser.cat.categories, exp_categories) - ser = ser.cat.rename_categories([1, 2, 3]) - exp_categories = Index([1, 2, 3]) - tm.assert_index_equal(ser.cat.categories, exp_categories) - - exp_codes = Series([0, 1, 2, 0], dtype="int8") - tm.assert_series_equal(ser.cat.codes, exp_codes) - - assert ser.cat.ordered - ser = ser.cat.as_unordered() - assert not ser.cat.ordered - - ser = ser.cat.as_ordered() - assert ser.cat.ordered - - # reorder - ser = Series(Categorical(["a", "b", "c", "a"], ordered=True)) - exp_categories = Index(["c", "b", "a"]) - exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_) - ser = ser.cat.set_categories(["c", "b", "a"]) - tm.assert_index_equal(ser.cat.categories, 
exp_categories) - tm.assert_numpy_array_equal(ser.values.__array__(), exp_values) - tm.assert_numpy_array_equal(ser.__array__(), exp_values) - - # remove unused categories - ser = Series(Categorical(["a", "b", "b", "a"], categories=["a", "b", "c"])) - exp_categories = Index(["a", "b"]) - exp_values = np.array(["a", "b", "b", "a"], dtype=np.object_) - ser = ser.cat.remove_unused_categories() - tm.assert_index_equal(ser.cat.categories, exp_categories) - tm.assert_numpy_array_equal(ser.values.__array__(), exp_values) - tm.assert_numpy_array_equal(ser.__array__(), exp_values) - - # This method is likely to be confused, so test that it raises an error - # on wrong inputs: - msg = "'Series' object has no attribute 'set_categories'" - with pytest.raises(AttributeError, match=msg): - ser.set_categories([4, 3, 2, 1]) - - # right: ser.cat.set_categories([4,3,2,1]) - - # GH#18862 (let Series.cat.rename_categories take callables) - ser = Series(Categorical(["a", "b", "c", "a"], ordered=True)) - result = ser.cat.rename_categories(lambda x: x.upper()) - expected = Series( - Categorical(["A", "B", "C", "A"], categories=["A", "B", "C"], ordered=True) - ) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "idx", - [ - date_range("1/1/2015", periods=5), - date_range("1/1/2015", periods=5, tz="MET"), - period_range("1/1/2015", freq="D", periods=5), - timedelta_range("1 days", "10 days"), - ], - ) - def test_dt_accessor_api_for_categorical(self, idx): - # https://github.com/pandas-dev/pandas/issues/10661 - - ser = Series(idx) - cat = ser.astype("category") - - # only testing field (like .day) - # and bool (is_month_start) - attr_names = type(ser._values)._datetimelike_ops - - assert isinstance(cat.dt, Properties) - - special_func_defs = [ - ("strftime", ("%Y-%m-%d",), {}), - ("round", ("D",), {}), - ("floor", ("D",), {}), - ("ceil", ("D",), {}), - ("asfreq", ("D",), {}), - ("as_unit", ("s"), {}), - ] - if idx.dtype == "M8[ns]": - # exclude dt64tz since that is already localized and would raise - tup = ("tz_localize", ("UTC",), {}) - special_func_defs.append(tup) - elif idx.dtype.kind == "M": - # exclude dt64 since that is not localized so would raise - tup = ("tz_convert", ("EST",), {}) - special_func_defs.append(tup) - - _special_func_names = [f[0] for f in special_func_defs] - - _ignore_names = ["components", "tz_localize", "tz_convert"] - - func_names = [ - fname - for fname in dir(ser.dt) - if not ( - fname.startswith("_") - or fname in attr_names - or fname in _special_func_names - or fname in _ignore_names - ) - ] - - func_defs = [(fname, (), {}) for fname in func_names] - func_defs.extend( - f_def for f_def in special_func_defs if f_def[0] in dir(ser.dt) - ) - - for func, args, kwargs in func_defs: - warn_cls = [] - if func == "to_period" and getattr(idx, "tz", None) is not None: - # dropping TZ - warn_cls.append(UserWarning) - if func == "to_pydatetime": - # deprecated to return Index[object] - warn_cls.append(FutureWarning) - if warn_cls: - warn_cls = tuple(warn_cls) - else: - warn_cls = None - with tm.assert_produces_warning(warn_cls): - res = getattr(cat.dt, func)(*args, **kwargs) - exp = getattr(ser.dt, func)(*args, **kwargs) - - tm.assert_equal(res, exp) - - for attr in attr_names: - res = getattr(cat.dt, attr) - exp = getattr(ser.dt, attr) - - tm.assert_equal(res, exp) - - def test_dt_accessor_api_for_categorical_invalid(self): - invalid = Series([1, 2, 3]).astype("category") - msg = "Can only use .dt accessor with datetimelike" - - with pytest.raises(AttributeError, 
match=msg): - invalid.dt - assert not hasattr(invalid, "str") - - def test_set_categories_setitem(self): - # GH#43334 - - df = DataFrame({"Survived": [1, 0, 1], "Sex": [0, 1, 1]}, dtype="category") - - df["Survived"] = df["Survived"].cat.rename_categories(["No", "Yes"]) - df["Sex"] = df["Sex"].cat.rename_categories(["female", "male"]) - - # values should not be coerced to NaN - assert list(df["Sex"]) == ["female", "male", "male"] - assert list(df["Survived"]) == ["Yes", "No", "Yes"] - - df["Sex"] = Categorical(df["Sex"], categories=["female", "male"], ordered=False) - df["Survived"] = Categorical( - df["Survived"], categories=["No", "Yes"], ordered=False - ) - - # values should not be coerced to NaN - assert list(df["Sex"]) == ["female", "male", "male"] - assert list(df["Survived"]) == ["Yes", "No", "Yes"] - - def test_categorical_of_booleans_is_boolean(self): - # https://github.com/pandas-dev/pandas/issues/46313 - df = DataFrame( - {"int_cat": [1, 2, 3], "bool_cat": [True, False, False]}, dtype="category" - ) - value = df["bool_cat"].cat.categories.dtype - expected = np.dtype(np.bool_) - assert value is expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/jisfreq.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/jisfreq.py deleted file mode 100644 index 83fc082b545106d02622de20f2083e8a7562f96c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/jisfreq.py +++ /dev/null @@ -1,325 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -# Sampling from about 20M text materials include literature and computer technology -# -# Japanese frequency table, applied to both S-JIS and EUC-JP -# They are sorted in order. 
- -# 128 --> 0.77094 -# 256 --> 0.85710 -# 512 --> 0.92635 -# 1024 --> 0.97130 -# 2048 --> 0.99431 -# -# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58 -# Random Distribution Ration = 512 / (2965+62+83+86-512) = 0.191 -# -# Typical Distribution Ratio, 25% of IDR - -JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0 - -# Char to FreqOrder table , -JIS_TABLE_SIZE = 4368 - -JIS_CHAR_TO_FREQ_ORDER = ( - 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16 -3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32 -1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48 -2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64 -2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80 -5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96 -1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112 -5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128 -5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144 -5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160 -5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176 -5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192 -5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208 -1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224 -1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240 -1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256 -2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272 -3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288 -3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304 - 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320 - 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336 -1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352 - 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368 -5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384 - 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400 - 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416 - 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432 - 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448 - 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464 -5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480 -5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496 -5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512 -4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528 -5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544 -5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560 -5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576 -5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592 
-5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608 -5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624 -5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640 -5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656 -5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672 -3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688 -5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704 -5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720 -5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736 -5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752 -5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768 -5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784 -5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800 -5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816 -5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832 -5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848 -5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864 -5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880 -5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896 -5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912 -5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928 -5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944 -5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960 -5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976 -5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992 -5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008 -5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024 -5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040 -5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056 -5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072 -5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088 -5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104 -5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120 -5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136 -5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152 -5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168 -5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184 -5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200 -5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216 -5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232 
-5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248 -5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264 -5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280 -5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296 -6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312 -6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328 -6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344 -6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360 -6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376 -6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392 -6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408 -6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424 -4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440 - 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456 - 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472 -1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488 -1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504 - 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520 -3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536 -3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552 - 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568 -3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584 -3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600 - 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616 -2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632 - 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648 -3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664 -1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680 - 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696 -1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712 - 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728 -2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744 -2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760 -2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776 -2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792 -1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808 -1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824 -1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840 -1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856 -2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 
1872 -1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888 -2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904 -1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920 -1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936 -1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952 -1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968 -1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984 -1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000 - 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016 - 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032 -1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048 -2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064 -2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080 -2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096 -3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112 -3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128 - 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144 -3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160 -1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176 - 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192 -2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208 -1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224 - 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240 -3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256 -4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272 -2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288 -1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304 -2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320 -1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336 - 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352 - 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368 -1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384 -2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400 -2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416 -2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432 -3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448 -1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464 -2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480 - 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496 - 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, 
# 2512 - 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528 -1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544 -2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560 - 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576 -1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592 -1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608 - 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624 -1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640 -1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656 -1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672 - 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688 -2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704 - 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720 -2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736 -3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752 -2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768 -1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784 -6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800 -1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816 -2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832 -1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848 - 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864 - 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880 -3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896 -3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912 -1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928 -1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944 -1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960 -1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976 - 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992 - 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008 -2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024 - 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040 -3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056 -2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072 - 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088 -1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104 -2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120 - 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136 -1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, 
# 3152 - 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168 -4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184 -2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200 -1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216 - 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232 -1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248 -2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264 - 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280 -6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296 -1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312 -1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328 -2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344 -3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360 - 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376 -3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392 -1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408 - 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424 -1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440 - 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456 -3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472 - 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488 -2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504 - 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520 -4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536 -2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552 -1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568 -1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584 -1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600 - 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616 -1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632 -3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648 -1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664 -3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680 - 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696 - 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712 - 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728 -2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744 -1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760 - 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776 -1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 
904,3618,3537, # 3792 - 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808 -1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824 - 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840 - 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856 - 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872 -1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888 -1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904 -2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920 -4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936 - 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952 -1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968 - 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984 -1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000 -3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016 -1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032 -2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048 -2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064 -1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080 -1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096 -2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112 - 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128 -2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144 -1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160 -1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176 -1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192 -1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208 -3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224 -2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240 -2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256 - 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272 -3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288 -3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304 -1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320 -2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336 -1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352 -2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512 -) - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/colorama/winterm.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/colorama/winterm.py deleted file mode 100644 index 
0fdb4ec4e91090876dc3fbf207049b521fa0dd73..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/colorama/winterm.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from . import win32 - - -# from wincon.h -class WinColor(object): - BLACK = 0 - BLUE = 1 - GREEN = 2 - CYAN = 3 - RED = 4 - MAGENTA = 5 - YELLOW = 6 - GREY = 7 - -# from wincon.h -class WinStyle(object): - NORMAL = 0x00 # dim text, dim background - BRIGHT = 0x08 # bright text, dim background - BRIGHT_BACKGROUND = 0x80 # dim text, bright background - -class WinTerm(object): - - def __init__(self): - self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes - self.set_attrs(self._default) - self._default_fore = self._fore - self._default_back = self._back - self._default_style = self._style - # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. - # So that LIGHT_EX colors and BRIGHT style do not clobber each other, - # we track them separately, since LIGHT_EX is overwritten by Fore/Back - # and BRIGHT is overwritten by Style codes. - self._light = 0 - - def get_attrs(self): - return self._fore + self._back * 16 + (self._style | self._light) - - def set_attrs(self, value): - self._fore = value & 7 - self._back = (value >> 4) & 7 - self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) - - def reset_all(self, on_stderr=None): - self.set_attrs(self._default) - self.set_console(attrs=self._default) - self._light = 0 - - def fore(self, fore=None, light=False, on_stderr=False): - if fore is None: - fore = self._default_fore - self._fore = fore - # Emulate LIGHT_EX with BRIGHT Style - if light: - self._light |= WinStyle.BRIGHT - else: - self._light &= ~WinStyle.BRIGHT - self.set_console(on_stderr=on_stderr) - - def back(self, back=None, light=False, on_stderr=False): - if back is None: - back = self._default_back - self._back = back - # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style - if light: - self._light |= WinStyle.BRIGHT_BACKGROUND - else: - self._light &= ~WinStyle.BRIGHT_BACKGROUND - self.set_console(on_stderr=on_stderr) - - def style(self, style=None, on_stderr=False): - if style is None: - style = self._default_style - self._style = style - self.set_console(on_stderr=on_stderr) - - def set_console(self, attrs=None, on_stderr=False): - if attrs is None: - attrs = self.get_attrs() - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleTextAttribute(handle, attrs) - - def get_position(self, handle): - position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition - # Because Windows coordinates are 0-based, - # and win32.SetConsoleCursorPosition expects 1-based. - position.X += 1 - position.Y += 1 - return position - - def set_cursor_position(self, position=None, on_stderr=False): - if position is None: - # I'm not currently tracking the position, so there is no default. - # position = self.get_position() - return - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleCursorPosition(handle, position) - - def cursor_adjust(self, x, y, on_stderr=False): - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - position = self.get_position(handle) - adjusted_position = (position.Y + y, position.X + x) - win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) - - def erase_screen(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the screen. 
- # 1 should clear from the cursor to the beginning of the screen. - # 2 should clear the entire screen, and move cursor to (1,1) - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - # get the number of character cells in the current buffer - cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y - # get number of character cells before current cursor position - cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = cells_in_screen - cells_before_cursor - elif mode == 1: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_before_cursor - elif mode == 2: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_in_screen - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - if mode == 2: - # put the cursor where needed - win32.SetConsoleCursorPosition(handle, (1, 1)) - - def erase_line(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the line. - # 1 should clear from the cursor to the beginning of the line. - # 2 should clear the entire line. - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X - elif mode == 1: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwCursorPosition.X - elif mode == 2: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwSize.X - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - - def set_title(self, title): - win32.SetConsoleTitle(title) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/_backport/shutil.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/_backport/shutil.py deleted file mode 100644 index 10ed362539718aed693f8155ce7ad55c64163aff..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/_backport/shutil.py +++ /dev/null @@ -1,764 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""Utility functions for copying and archiving files and directory trees. - -XXX The functions here don't copy the resource fork or other metadata on Mac. - -""" - -import os -import sys -import stat -from os.path import abspath -import fnmatch -try: - from collections.abc import Callable -except ImportError: - from collections import Callable -import errno -from . 
import tarfile - -try: - import bz2 - _BZ2_SUPPORTED = True -except ImportError: - _BZ2_SUPPORTED = False - -try: - from pwd import getpwnam -except ImportError: - getpwnam = None - -try: - from grp import getgrnam -except ImportError: - getgrnam = None - -__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2", - "copytree", "move", "rmtree", "Error", "SpecialFileError", - "ExecError", "make_archive", "get_archive_formats", - "register_archive_format", "unregister_archive_format", - "get_unpack_formats", "register_unpack_format", - "unregister_unpack_format", "unpack_archive", "ignore_patterns"] - -class Error(EnvironmentError): - pass - -class SpecialFileError(EnvironmentError): - """Raised when trying to do a kind of operation (e.g. copying) which is - not supported on a special file (e.g. a named pipe)""" - -class ExecError(EnvironmentError): - """Raised when a command could not be executed""" - -class ReadError(EnvironmentError): - """Raised when an archive cannot be read""" - -class RegistryError(Exception): - """Raised when a registry operation with the archiving - and unpacking registries fails""" - - -try: - WindowsError -except NameError: - WindowsError = None - -def copyfileobj(fsrc, fdst, length=16*1024): - """copy data from file-like object fsrc to file-like object fdst""" - while 1: - buf = fsrc.read(length) - if not buf: - break - fdst.write(buf) - -def _samefile(src, dst): - # Macintosh, Unix. - if hasattr(os.path, 'samefile'): - try: - return os.path.samefile(src, dst) - except OSError: - return False - - # All other platforms: check for same pathname. - return (os.path.normcase(os.path.abspath(src)) == - os.path.normcase(os.path.abspath(dst))) - -def copyfile(src, dst): - """Copy data from src to dst""" - if _samefile(src, dst): - raise Error("`%s` and `%s` are the same file" % (src, dst)) - - for fn in [src, dst]: - try: - st = os.stat(fn) - except OSError: - # File most likely does not exist - pass - else: - # XXX What about other special files? (sockets, devices...) - if stat.S_ISFIFO(st.st_mode): - raise SpecialFileError("`%s` is a named pipe" % fn) - - with open(src, 'rb') as fsrc: - with open(dst, 'wb') as fdst: - copyfileobj(fsrc, fdst) - -def copymode(src, dst): - """Copy mode bits from src to dst""" - if hasattr(os, 'chmod'): - st = os.stat(src) - mode = stat.S_IMODE(st.st_mode) - os.chmod(dst, mode) - -def copystat(src, dst): - """Copy all stat info (mode bits, atime, mtime, flags) from src to dst""" - st = os.stat(src) - mode = stat.S_IMODE(st.st_mode) - if hasattr(os, 'utime'): - os.utime(dst, (st.st_atime, st.st_mtime)) - if hasattr(os, 'chmod'): - os.chmod(dst, mode) - if hasattr(os, 'chflags') and hasattr(st, 'st_flags'): - try: - os.chflags(dst, st.st_flags) - except OSError as why: - if (not hasattr(errno, 'EOPNOTSUPP') or - why.errno != errno.EOPNOTSUPP): - raise - -def copy(src, dst): - """Copy data and mode bits ("cp src dst"). - - The destination may be a directory. - - """ - if os.path.isdir(dst): - dst = os.path.join(dst, os.path.basename(src)) - copyfile(src, dst) - copymode(src, dst) - -def copy2(src, dst): - """Copy data and all stat info ("cp -p src dst"). - - The destination may be a directory. - - """ - if os.path.isdir(dst): - dst = os.path.join(dst, os.path.basename(src)) - copyfile(src, dst) - copystat(src, dst) - -def ignore_patterns(*patterns): - """Function that can be used as copytree() ignore parameter. 
- - Patterns is a sequence of glob-style patterns - that are used to exclude files""" - def _ignore_patterns(path, names): - ignored_names = [] - for pattern in patterns: - ignored_names.extend(fnmatch.filter(names, pattern)) - return set(ignored_names) - return _ignore_patterns - -def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2, - ignore_dangling_symlinks=False): - """Recursively copy a directory tree. - - The destination directory must not already exist. - If exception(s) occur, an Error is raised with a list of reasons. - - If the optional symlinks flag is true, symbolic links in the - source tree result in symbolic links in the destination tree; if - it is false, the contents of the files pointed to by symbolic - links are copied. If the file pointed by the symlink doesn't - exist, an exception will be added in the list of errors raised in - an Error exception at the end of the copy process. - - You can set the optional ignore_dangling_symlinks flag to true if you - want to silence this exception. Notice that this has no effect on - platforms that don't support os.symlink. - - The optional ignore argument is a callable. If given, it - is called with the `src` parameter, which is the directory - being visited by copytree(), and `names` which is the list of - `src` contents, as returned by os.listdir(): - - callable(src, names) -> ignored_names - - Since copytree() is called recursively, the callable will be - called once for each directory that is copied. It returns a - list of names relative to the `src` directory that should - not be copied. - - The optional copy_function argument is a callable that will be used - to copy each file. It will be called with the source path and the - destination path as arguments. By default, copy2() is used, but any - function that supports the same signature (like copy()) can be used. - - """ - names = os.listdir(src) - if ignore is not None: - ignored_names = ignore(src, names) - else: - ignored_names = set() - - os.makedirs(dst) - errors = [] - for name in names: - if name in ignored_names: - continue - srcname = os.path.join(src, name) - dstname = os.path.join(dst, name) - try: - if os.path.islink(srcname): - linkto = os.readlink(srcname) - if symlinks: - os.symlink(linkto, dstname) - else: - # ignore dangling symlink if the flag is on - if not os.path.exists(linkto) and ignore_dangling_symlinks: - continue - # otherwise let the copy occurs. copy2 will raise an error - copy_function(srcname, dstname) - elif os.path.isdir(srcname): - copytree(srcname, dstname, symlinks, ignore, copy_function) - else: - # Will raise a SpecialFileError for unsupported file types - copy_function(srcname, dstname) - # catch the Error from the recursive copytree so that we can - # continue with other files - except Error as err: - errors.extend(err.args[0]) - except EnvironmentError as why: - errors.append((srcname, dstname, str(why))) - try: - copystat(src, dst) - except OSError as why: - if WindowsError is not None and isinstance(why, WindowsError): - # Copying file access times may fail on Windows - pass - else: - errors.extend((src, dst, str(why))) - if errors: - raise Error(errors) - -def rmtree(path, ignore_errors=False, onerror=None): - """Recursively delete a directory tree. 
- - If ignore_errors is set, errors are ignored; otherwise, if onerror - is set, it is called to handle the error with arguments (func, - path, exc_info) where func is os.listdir, os.remove, or os.rmdir; - path is the argument to that function that caused it to fail; and - exc_info is a tuple returned by sys.exc_info(). If ignore_errors - is false and onerror is None, an exception is raised. - - """ - if ignore_errors: - def onerror(*args): - pass - elif onerror is None: - def onerror(*args): - raise - try: - if os.path.islink(path): - # symlinks to directories are forbidden, see bug #1669 - raise OSError("Cannot call rmtree on a symbolic link") - except OSError: - onerror(os.path.islink, path, sys.exc_info()) - # can't continue even if onerror hook returns - return - names = [] - try: - names = os.listdir(path) - except os.error: - onerror(os.listdir, path, sys.exc_info()) - for name in names: - fullname = os.path.join(path, name) - try: - mode = os.lstat(fullname).st_mode - except os.error: - mode = 0 - if stat.S_ISDIR(mode): - rmtree(fullname, ignore_errors, onerror) - else: - try: - os.remove(fullname) - except os.error: - onerror(os.remove, fullname, sys.exc_info()) - try: - os.rmdir(path) - except os.error: - onerror(os.rmdir, path, sys.exc_info()) - - -def _basename(path): - # A basename() variant which first strips the trailing slash, if present. - # Thus we always get the last component of the path, even for directories. - return os.path.basename(path.rstrip(os.path.sep)) - -def move(src, dst): - """Recursively move a file or directory to another location. This is - similar to the Unix "mv" command. - - If the destination is a directory or a symlink to a directory, the source - is moved inside the directory. The destination path must not already - exist. - - If the destination already exists but is not a directory, it may be - overwritten depending on os.rename() semantics. - - If the destination is on our current filesystem, then rename() is used. - Otherwise, src is copied to the destination and then removed. - A lot more could be done here... A look at a mv.c shows a lot of - the issues this implementation glosses over. - - """ - real_dst = dst - if os.path.isdir(dst): - if _samefile(src, dst): - # We might be on a case insensitive filesystem, - # perform the rename anyway. - os.rename(src, dst) - return - - real_dst = os.path.join(dst, _basename(src)) - if os.path.exists(real_dst): - raise Error("Destination path '%s' already exists" % real_dst) - try: - os.rename(src, real_dst) - except OSError: - if os.path.isdir(src): - if _destinsrc(src, dst): - raise Error("Cannot move a directory '%s' into itself '%s'." 
% (src, dst)) - copytree(src, real_dst, symlinks=True) - rmtree(src) - else: - copy2(src, real_dst) - os.unlink(src) - -def _destinsrc(src, dst): - src = abspath(src) - dst = abspath(dst) - if not src.endswith(os.path.sep): - src += os.path.sep - if not dst.endswith(os.path.sep): - dst += os.path.sep - return dst.startswith(src) - -def _get_gid(name): - """Returns a gid, given a group name.""" - if getgrnam is None or name is None: - return None - try: - result = getgrnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - -def _get_uid(name): - """Returns an uid, given a user name.""" - if getpwnam is None or name is None: - return None - try: - result = getpwnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - -def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0, - owner=None, group=None, logger=None): - """Create a (possibly compressed) tar file from all the files under - 'base_dir'. - - 'compress' must be "gzip" (the default), "bzip2", or None. - - 'owner' and 'group' can be used to define an owner and a group for the - archive that is being built. If not provided, the current owner and group - will be used. - - The output tar file will be named 'base_name' + ".tar", possibly plus - the appropriate compression extension (".gz", or ".bz2"). - - Returns the output filename. - """ - tar_compression = {'gzip': 'gz', None: ''} - compress_ext = {'gzip': '.gz'} - - if _BZ2_SUPPORTED: - tar_compression['bzip2'] = 'bz2' - compress_ext['bzip2'] = '.bz2' - - # flags for compression program, each element of list will be an argument - if compress is not None and compress not in compress_ext: - raise ValueError("bad value for 'compress', or compression format not " - "supported : {0}".format(compress)) - - archive_name = base_name + '.tar' + compress_ext.get(compress, '') - archive_dir = os.path.dirname(archive_name) - - if not os.path.exists(archive_dir): - if logger is not None: - logger.info("creating %s", archive_dir) - if not dry_run: - os.makedirs(archive_dir) - - # creating the tarball - if logger is not None: - logger.info('Creating tar archive') - - uid = _get_uid(owner) - gid = _get_gid(group) - - def _set_uid_gid(tarinfo): - if gid is not None: - tarinfo.gid = gid - tarinfo.gname = group - if uid is not None: - tarinfo.uid = uid - tarinfo.uname = owner - return tarinfo - - if not dry_run: - tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress]) - try: - tar.add(base_dir, filter=_set_uid_gid) - finally: - tar.close() - - return archive_name - -def _call_external_zip(base_dir, zip_filename, verbose=False, dry_run=False): - # XXX see if we want to keep an external call here - if verbose: - zipoptions = "-r" - else: - zipoptions = "-rq" - from distutils.errors import DistutilsExecError - from distutils.spawn import spawn - try: - spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run) - except DistutilsExecError: - # XXX really should distinguish between "couldn't find - # external 'zip' command" and "zip failed". - raise ExecError("unable to create zip file '%s': " - "could neither import the 'zipfile' module nor " - "find a standalone zip utility") % zip_filename - -def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): - """Create a zip file from all the files under 'base_dir'. - - The output zip file will be named 'base_name' + ".zip". 
Uses either the - "zipfile" Python module (if available) or the InfoZIP "zip" utility - (if installed and found on the default search path). If neither tool is - available, raises ExecError. Returns the name of the output zip - file. - """ - zip_filename = base_name + ".zip" - archive_dir = os.path.dirname(base_name) - - if not os.path.exists(archive_dir): - if logger is not None: - logger.info("creating %s", archive_dir) - if not dry_run: - os.makedirs(archive_dir) - - # If zipfile module is not available, try spawning an external 'zip' - # command. - try: - import zipfile - except ImportError: - zipfile = None - - if zipfile is None: - _call_external_zip(base_dir, zip_filename, verbose, dry_run) - else: - if logger is not None: - logger.info("creating '%s' and adding '%s' to it", - zip_filename, base_dir) - - if not dry_run: - zip = zipfile.ZipFile(zip_filename, "w", - compression=zipfile.ZIP_DEFLATED) - - for dirpath, dirnames, filenames in os.walk(base_dir): - for name in filenames: - path = os.path.normpath(os.path.join(dirpath, name)) - if os.path.isfile(path): - zip.write(path, path) - if logger is not None: - logger.info("adding '%s'", path) - zip.close() - - return zip_filename - -_ARCHIVE_FORMATS = { - 'gztar': (_make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), - 'bztar': (_make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), - 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"), - 'zip': (_make_zipfile, [], "ZIP file"), - } - -if _BZ2_SUPPORTED: - _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')], - "bzip2'ed tar-file") - -def get_archive_formats(): - """Returns a list of supported formats for archiving and unarchiving. - - Each element of the returned sequence is a tuple (name, description) - """ - formats = [(name, registry[2]) for name, registry in - _ARCHIVE_FORMATS.items()] - formats.sort() - return formats - -def register_archive_format(name, function, extra_args=None, description=''): - """Registers an archive format. - - name is the name of the format. function is the callable that will be - used to create archives. If provided, extra_args is a sequence of - (name, value) tuples that will be passed as arguments to the callable. - description can be provided to describe the format, and will be returned - by the get_archive_formats() function. - """ - if extra_args is None: - extra_args = [] - if not isinstance(function, Callable): - raise TypeError('The %s object is not callable' % function) - if not isinstance(extra_args, (tuple, list)): - raise TypeError('extra_args needs to be a sequence') - for element in extra_args: - if not isinstance(element, (tuple, list)) or len(element) !=2: - raise TypeError('extra_args elements are : (arg_name, value)') - - _ARCHIVE_FORMATS[name] = (function, extra_args, description) - -def unregister_archive_format(name): - del _ARCHIVE_FORMATS[name] - -def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0, - dry_run=0, owner=None, group=None, logger=None): - """Create an archive file (eg. zip or tar). - - 'base_name' is the name of the file to create, minus any format-specific - extension; 'format' is the archive format: one of "zip", "tar", "bztar" - or "gztar". - - 'root_dir' is a directory that will be the root directory of the - archive; ie. we typically chdir into 'root_dir' before creating the - archive. 'base_dir' is the directory where we start archiving from; - ie. 'base_dir' will be the common prefix of all files and - directories in the archive. 
'root_dir' and 'base_dir' both default - to the current directory. Returns the name of the archive file. - - 'owner' and 'group' are used when creating a tar archive. By default, - uses the current owner and group. - """ - save_cwd = os.getcwd() - if root_dir is not None: - if logger is not None: - logger.debug("changing into '%s'", root_dir) - base_name = os.path.abspath(base_name) - if not dry_run: - os.chdir(root_dir) - - if base_dir is None: - base_dir = os.curdir - - kwargs = {'dry_run': dry_run, 'logger': logger} - - try: - format_info = _ARCHIVE_FORMATS[format] - except KeyError: - raise ValueError("unknown archive format '%s'" % format) - - func = format_info[0] - for arg, val in format_info[1]: - kwargs[arg] = val - - if format != 'zip': - kwargs['owner'] = owner - kwargs['group'] = group - - try: - filename = func(base_name, base_dir, **kwargs) - finally: - if root_dir is not None: - if logger is not None: - logger.debug("changing back to '%s'", save_cwd) - os.chdir(save_cwd) - - return filename - - -def get_unpack_formats(): - """Returns a list of supported formats for unpacking. - - Each element of the returned sequence is a tuple - (name, extensions, description) - """ - formats = [(name, info[0], info[3]) for name, info in - _UNPACK_FORMATS.items()] - formats.sort() - return formats - -def _check_unpack_options(extensions, function, extra_args): - """Checks what gets registered as an unpacker.""" - # first make sure no other unpacker is registered for this extension - existing_extensions = {} - for name, info in _UNPACK_FORMATS.items(): - for ext in info[0]: - existing_extensions[ext] = name - - for extension in extensions: - if extension in existing_extensions: - msg = '%s is already registered for "%s"' - raise RegistryError(msg % (extension, - existing_extensions[extension])) - - if not isinstance(function, Callable): - raise TypeError('The registered function must be a callable') - - -def register_unpack_format(name, extensions, function, extra_args=None, - description=''): - """Registers an unpack format. - - `name` is the name of the format. `extensions` is a list of extensions - corresponding to the format. - - `function` is the callable that will be - used to unpack archives. The callable will receive archives to unpack. - If it's unable to handle an archive, it needs to raise a ReadError - exception. - - If provided, `extra_args` is a sequence of - (name, value) tuples that will be passed as arguments to the callable. - description can be provided to describe the format, and will be returned - by the get_unpack_formats() function. - """ - if extra_args is None: - extra_args = [] - _check_unpack_options(extensions, function, extra_args) - _UNPACK_FORMATS[name] = extensions, function, extra_args, description - -def unregister_unpack_format(name): - """Removes the pack format from the registry.""" - del _UNPACK_FORMATS[name] - -def _ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - if not os.path.isdir(dirname): - os.makedirs(dirname) - -def _unpack_zipfile(filename, extract_dir): - """Unpack zip `filename` to `extract_dir` - """ - try: - import zipfile - except ImportError: - raise ReadError('zlib not supported, cannot unpack this archive.') - - if not zipfile.is_zipfile(filename): - raise ReadError("%s is not a zip file" % filename) - - zip = zipfile.ZipFile(filename) - try: - for info in zip.infolist(): - name = info.filename - - # don't extract absolute paths or ones with .. 
in them - if name.startswith('/') or '..' in name: - continue - - target = os.path.join(extract_dir, *name.split('/')) - if not target: - continue - - _ensure_directory(target) - if not name.endswith('/'): - # file - data = zip.read(info.filename) - f = open(target, 'wb') - try: - f.write(data) - finally: - f.close() - del data - finally: - zip.close() - -def _unpack_tarfile(filename, extract_dir): - """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir` - """ - try: - tarobj = tarfile.open(filename) - except tarfile.TarError: - raise ReadError( - "%s is not a compressed or uncompressed tar file" % filename) - try: - tarobj.extractall(extract_dir) - finally: - tarobj.close() - -_UNPACK_FORMATS = { - 'gztar': (['.tar.gz', '.tgz'], _unpack_tarfile, [], "gzip'ed tar-file"), - 'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"), - 'zip': (['.zip'], _unpack_zipfile, [], "ZIP file") - } - -if _BZ2_SUPPORTED: - _UNPACK_FORMATS['bztar'] = (['.bz2'], _unpack_tarfile, [], - "bzip2'ed tar-file") - -def _find_unpack_format(filename): - for name, info in _UNPACK_FORMATS.items(): - for extension in info[0]: - if filename.endswith(extension): - return name - return None - -def unpack_archive(filename, extract_dir=None, format=None): - """Unpack an archive. - - `filename` is the name of the archive. - - `extract_dir` is the name of the target directory, where the archive - is unpacked. If not provided, the current working directory is used. - - `format` is the archive format: one of "zip", "tar", or "gztar". Or any - other registered format. If not provided, unpack_archive will use the - filename extension and see if an unpacker was registered for that - extension. - - In case none is found, a ValueError is raised. - """ - if extract_dir is None: - extract_dir = os.getcwd() - - if format is not None: - try: - format_info = _UNPACK_FORMATS[format] - except KeyError: - raise ValueError("Unknown unpack format '{0}'".format(format)) - - func = format_info[1] - func(filename, extract_dir, **dict(format_info[2])) - else: - # we need to look at the registered unpackers supported extensions - format = _find_unpack_format(filename) - if format is None: - raise ReadError("Unknown archive format '{0}'".format(filename)) - - func = _UNPACK_FORMATS[format][1] - kwargs = dict(_UNPACK_FORMATS[format][2]) - func(filename, extract_dir, **kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treewalkers/genshi.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treewalkers/genshi.py deleted file mode 100644 index 7483be27d4d24f845e56b6954ee63eec730c00aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treewalkers/genshi.py +++ /dev/null @@ -1,69 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from genshi.core import QName -from genshi.core import START, END, XML_NAMESPACE, DOCTYPE, TEXT -from genshi.core import START_NS, END_NS, START_CDATA, END_CDATA, PI, COMMENT - -from . import base - -from ..constants import voidElements, namespaces - - -class TreeWalker(base.TreeWalker): - def __iter__(self): - # Buffer the events so we can pass in the following one - previous = None - for event in self.tree: - if previous is not None: - for token in self.tokens(previous, event): - yield token - previous = event - - # Don't forget the final event! 
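# Illustrative aside (an assumed usage sketch, not part of the original module):
# the archive helpers deleted above mirror the standard-library shutil API, so a
# typical round trip looks like this, assuming a local directory named "data":
#
#     import shutil
#     archive = shutil.make_archive("backup", "gztar", root_dir="data")   # -> backup.tar.gz
#     shutil.unpack_archive(archive, extract_dir="restored")
#
# Custom formats can be added with register_archive_format(name, function,
# extra_args, description), matching the registry the deleted code builds.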
- if previous is not None: - for token in self.tokens(previous, None): - yield token - - def tokens(self, event, next): - kind, data, _ = event - if kind == START: - tag, attribs = data - name = tag.localname - namespace = tag.namespace - converted_attribs = {} - for k, v in attribs: - if isinstance(k, QName): - converted_attribs[(k.namespace, k.localname)] = v - else: - converted_attribs[(None, k)] = v - - if namespace == namespaces["html"] and name in voidElements: - for token in self.emptyTag(namespace, name, converted_attribs, - not next or next[0] != END or - next[1] != tag): - yield token - else: - yield self.startTag(namespace, name, converted_attribs) - - elif kind == END: - name = data.localname - namespace = data.namespace - if namespace != namespaces["html"] or name not in voidElements: - yield self.endTag(namespace, name) - - elif kind == COMMENT: - yield self.comment(data) - - elif kind == TEXT: - for token in self.text(data): - yield token - - elif kind == DOCTYPE: - yield self.doctype(*data) - - elif kind in (XML_NAMESPACE, DOCTYPE, START_NS, END_NS, - START_CDATA, END_CDATA, PI): - pass - - else: - yield self.unknown(kind) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/__init__.py deleted file mode 100644 index a0cf67df5245be16a020ca048832e180f7ce8661..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. -from __future__ import absolute_import, division, print_function - -from .__about__ import ( - __author__, - __copyright__, - __email__, - __license__, - __summary__, - __title__, - __uri__, - __version__, -) - -__all__ = [ - "__title__", - "__summary__", - "__uri__", - "__version__", - "__author__", - "__email__", - "__license__", - "__copyright__", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tal.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tal.py deleted file mode 100644 index 170b781a933ec3ee539bfbfad87340205933bb28..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tal.py +++ /dev/null @@ -1,77 +0,0 @@ -""" - pygments.lexers.tal - ~~~~~~~~~~~~~~~~~~~ - - Lexer for Uxntal - - .. versionadded:: 2.12 - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, words -from pygments.token import Comment, Keyword, Name, String, Number, \ - Punctuation, Whitespace, Literal - -__all__ = ['TalLexer'] - - -class TalLexer(RegexLexer): - """ - For `Uxntal `_ source code. - - .. 
versionadded:: 2.12 - """ - - name = 'Tal' - aliases = ['tal', 'uxntal'] - filenames = ['*.tal'] - mimetypes = ['text/x-uxntal'] - - instructions = [ - 'BRK', 'LIT', 'INC', 'POP', 'DUP', 'NIP', 'SWP', 'OVR', 'ROT', - 'EQU', 'NEQ', 'GTH', 'LTH', 'JMP', 'JCN', 'JSR', 'STH', - 'LDZ', 'STZ', 'LDR', 'STR', 'LDA', 'STA', 'DEI', 'DEO', - 'ADD', 'SUB', 'MUL', 'DIV', 'AND', 'ORA', 'EOR', 'SFT' - ] - - tokens = { - # the comment delimiters must not be adjacent to non-space characters. - # this means ( foo ) is a valid comment but (foo) is not. this also - # applies to nested comments. - 'comment': [ - (r'(?32 Bit Adobe Premiere Pro CS4 Adobe After Effects CS4 to Adobe CS5 [ENGLISH]

    Download Zip 🗸 https://geags.com/2uCq1G



    -
    -hi there i have a 30 minute video clip. when i export the movie as a.wmv it plays back in 2 second intervals. I have a dell inspiron 1525 and nvidia geforce mx 420 graphic card. please can somebody help me with this. thanks - -Hi Guys, - -This has been driving me mad, can anyone help please. - -I'm using Windows Vista and I've installed the 64 bit version of cs4 and I have 3D video support on my Vista. - -It was fine, but it suddenly changed. - -When I try to render on my wacom Graphire4 tablet, it goes into "make-up mode" and if I stop the render, it goes into "make-up mode" again. - -I can't get it to render for more than a few seconds! I tried uninstalling and reinstalling and that didn't work, I tried reinstalling the drivers. - -I have tried searching and I couldn't find the answer anywhere. Can anyone help? - -Thank you. - -hi guys. I have the same problem. It's almost been a week now since I've been on the forums, so I hope someone can help me. First I installed CS4, then I had to reinstall as the drivers wouldn't work. That one, it worked fine for a day or so, then suddenly, in the same moment, it changed. When I use Photoshop, it goes into "make-up mode" and if I stop, it goes back into "make-up mode", and the screen gets all messy. It's like it just doesn't get out of that "make-up mode". - -Any ideas? - -Not sure if this is a bug, but I have the same problem as all. My video is making a "cutting" sound when moving through the video. I have tried using 3 different editors and I get the same problem. I am using Windows XP and have updated to the newest drivers from my graphic card. - -hi all, I am having a problem with CS4. I have a ION hard drive, I went back into windows and then it says the hard drive doesn't exist. But i have it on there, in the IDE cable it says "missing hard drive" and in the other cable it says "Hard drive boot sector damaged". - -I took out the hd and tried setting it up on my desktop and it says the boot sector is damaged and 4fefd39f24
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Discografia De Los Rayos Del Norte De Sergio Vega Temp.md b/spaces/quidiaMuxgu/Expedit-SAM/Discografia De Los Rayos Del Norte De Sergio Vega Temp.md deleted file mode 100644 index b341a31aa3f0b06944bf068d8b03a3942938b6b3..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Discografia De Los Rayos Del Norte De Sergio Vega Temp.md +++ /dev/null @@ -1,6 +0,0 @@ -

    discografia de los rayos del norte de sergio vega | temp


Download Zip https://geags.com/2uCsyT



    - -discografia de los rayos del norte de sergio vega | temp · telecharger cours ssiap 1 pdf. Disciplines. Bioimaging and Biomedical Optics. 1fdad05405
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/First Love (Crazy Little Thing Called Love) English Sub [Thai Mo.md b/spaces/quidiaMuxgu/Expedit-SAM/First Love (Crazy Little Thing Called Love) English Sub [Thai Mo.md deleted file mode 100644 index a2035ec902303764384f4ec54bd0426fea37fda2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/First Love (Crazy Little Thing Called Love) English Sub [Thai Mo.md +++ /dev/null @@ -1,84 +0,0 @@ -

    First Love (Crazy Little Thing Called Love) English Sub [Thai Mo


    Download ✔✔✔ https://geags.com/2uCs6n



    - -But if she could do that, she would have to keep secret her existence as he is a reincarnation of the malevolent spirit. - -Cast - -Makoto Mizuhara as Hinata - -Tetsuji Tamayama as Tamotsu - -Yuki Okawa as Mizuki - -Chiaki Konaka as Nadeshiko - -Yumi Takigawa as Hajime - -Rikiya Koyama as Seizo - -Shinya Hirata as Mikio - -Kou Shibasaki as Soichiro - -Songs - -Opening theme - - "Starry Night" by +44 - -Ending theme - - "Don't Cry" by Kishida Kyōko - -References - -External links - -Category:Japanese films - -Category:2012 films - -Category:2010s supernatural films - -Category:Japanese supernatural horror films - -Category:Japanese ghost films - -Category:Japanese horror film remakes - -Category:Japanese-language films - -Category:2012 horror films - -162 Ga. App. 243 (1982) - -290 S.E.2d 134 - -DYER - -v. - -MILLER. - -64097. - -Court of Appeals of Georgia. - -Decided June 23, 1982. - -*244 James P. Branch, for appellant. - -W. J. Mason, for appellee. - -SOGNIER, Judge. - -William Dyer brought this action against William Miller to recover money expended for the care and treatment of a child born to Miller's wife. Miller answered and denied the child was born alive. The trial court granted Miller's motion for summary judgment. Dyer appeals. - -The evidence shows Miller was the father of a child born to his wife in March of 1979. The child was delivered by a C-section. During the operation Miller was awake and part of the time talked to his wife. The child died shortly after birth. The child's mother was convicted of the murder of the child and is serving a life sentence. - -Dyer, the step-father of the child, brought this action against Miller, the natural father, for the value of the care and treatment of the child. Miller denied the allegations of the complaint and sought judgment over against the natural mother. - -The evidence, construed in the light most favorable to Dyer, shows the following. A hospital chaplain made an inquiry to determine whether Miller would be interested in a long-term "courtesy visit" with the infant child. Miller agreed to 4fefd39f24
    -
    -
    -

    diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/r3gm/RVC_HF/infer/modules/uvr5/preprocess.py b/spaces/r3gm/RVC_HF/infer/modules/uvr5/preprocess.py deleted file mode 100644 index 19f11110ea822eeb140fb885c600536290a1adff..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/infer/modules/uvr5/preprocess.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch - -from infer.lib.uvr5_pack.lib_v5 import nets_61968KB as Nets -from infer.lib.uvr5_pack.lib_v5 import spec_utils -from infer.lib.uvr5_pack.lib_v5.model_param_init import ModelParameters -from infer.lib.uvr5_pack.lib_v5.nets_new import CascadedNet -from infer.lib.uvr5_pack.utils import inference - - -class AudioPre: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = Nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
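# Illustrative aside (a hypothetical subclass, not from the original repository):
# the F0Predictor base class deleted above only fixes an interface -- compute_f0
# maps a waveform of length N to a pitch track of length N // hop_length.
# A minimal sketch using librosa's pyin tracker, with hop_length and
# sampling_rate as assumed parameters:
#
#     import numpy as np, librosa
#
#     class PyinF0Predictor(F0Predictor):
#         def __init__(self, hop_length=512, sampling_rate=44100):
#             self.hop_length = hop_length
#             self.sampling_rate = sampling_rate
#
#         def compute_f0(self, wav, p_len):
#             f0, _, _ = librosa.pyin(wav, fmin=50, fmax=1100,
#                                     sr=self.sampling_rate,
#                                     hop_length=self.hop_length)
#             return np.nan_to_num(f0)[:p_len]   # trim to the expected length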
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class AudioPreDeEcho: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("infer/lib/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - 
"instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/helpers.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/helpers.py deleted file mode 100644 index b51fdf97141407fcc1c9d249a086ddbfd042469f..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/helpers.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import namedtuple -import torch -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. """ - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. 
Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut diff --git a/spaces/raedeXanto/academic-chatgpt-beta/(2011) Terjemah Kitab Khozinatul Asror.pdf Luvre magistrale de lun des plus grands soufis de lhistoire.md b/spaces/raedeXanto/academic-chatgpt-beta/(2011) Terjemah Kitab Khozinatul Asror.pdf Luvre magistrale de lun des plus grands soufis de lhistoire.md deleted file mode 100644 index 843b878348c73858af98d1e91231a8f1ddf4336e..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/(2011) Terjemah Kitab Khozinatul Asror.pdf Luvre magistrale de lun des plus grands soufis de lhistoire.md +++ /dev/null @@ -1,105 +0,0 @@ - -
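An illustrative aside on the ArcFace encoder helpers deleted just above (a sketch under assumptions, not code from the original repository): get_blocks(num_layers) returns four stages of Bottleneck(in_channel, depth, stride) specs, and an IR-SE backbone is built by flattening those specs into a stack of bottleneck_IR_SE units.

    import torch
    from torch.nn import Sequential

    blocks = get_blocks(50)                          # 4 stages of Bottleneck specs
    body = Sequential(*[
        bottleneck_IR_SE(b.in_channel, b.depth, b.stride)
        for stage in blocks for b in stage
    ])
    out = body(torch.randn(1, 64, 56, 56))           # channel depth grows 64 -> 512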

    What is Khozinatul Asror and why is it important?

    -

    Khozinatul Asror (خزينةالاسرار) is a book written by Sayyid Muhammad Haqqi an-Nazili, a famous Islamic scholar from Turkey. The book is about the secrets, meanings, benefits, and virtues of Al-Quran, the holy book of Muslims. It is considered one of the most comprehensive and eloquent books on Al-Quran ever written.

    -

    (2011) Terjemah Kitab Khozinatul Asror.pdf


DOWNLOAD https://tinourl.com/2uL2qf



    -

    Khozinatul Asror is important for Muslims who want to deepen their knowledge and appreciation of Al-Quran. It explains the miraculous nature, the linguistic beauty, the spiritual wisdom, and the practical guidance of Al-Quran. It also provides many examples, stories, anecdotes, and parables to illustrate the points. It is a treasure trove of knowledge and inspiration for anyone who wants to learn more about Al-Quran.

    -

    The author and background of Khozinatul Asror

    -

    Sayyid Muhammad Haqqi an-Nazili

    -

    The author of Khozinatul Asror was Sayyid Muhammad Haqqi an-Nazili (1859-1917), a descendant of Prophet Muhammad (peace be upon him). He was born in Nazilli, a town in Turkey. He was a renowned scholar, teacher, poet, mystic, and reformer. He wrote more than 100 books on various Islamic topics, such as theology, jurisprudence, history, ethics, Sufism, and literature.

    -

    He was also a leader of the Naqshbandi Sufi order, a branch of Islamic mysticism that focuses on the inner connection with Allah. He traveled extensively to spread his teachings and influence. He visited many countries in Asia, Africa, and Europe. He also had many followers and students who respected him for his piety, wisdom, generosity, and charisma.

    -

    The main theme and purpose of Khozinatul Asror

    -

    The main theme of Khozinatul Asror is Al-Quran, the word of Allah revealed to Prophet Muhammad (peace be upon him). The author wanted to show the greatness and glory of Al-Quran to his readers. He wanted to inspire them to love Al-Quran, to recite it regularly, to understand its meanings, to follow its teachings, and to benefit from its blessings.

    -

    The purpose of Khozinatul Asror was also to defend Al-Quran from the doubts and criticisms of its enemies. The author wrote the book in response to some Orientalists who tried to undermine the authenticity and authority of Al-Quran. He refuted their arguments with clear evidence and logic. He proved that Al-Quran is the true word of Allah that cannot be imitated or contradicted by anyone.

    -

    The content and structure of Khozinatul Asror

    -

    The introduction and praise of Allah and His Messenger

    -

    The book begins with an introduction that praises Allah for His grace and mercy. It also praises Prophet Muhammad (peace be upon him) for his noble character and mission. It acknowledges that all knowledge comes from Allah and that Al-Quran is His greatest gift to mankind.

    -

    The introduction also explains the name of the book: Khozinatul Asror means "The Treasure of Secrets". It implies that Al-Quran contains many secrets that are hidden from the eyes of ordinary people. Only those who have sincere faith, pure intention, deep understanding, and constant practice can access these secrets.

    -

    Download (2011) Terjemah Kitab Khozinatul Asror.pdf free
    -How to read (2011) Terjemah Kitab Khozinatul Asror.pdf online
    -(2011) Terjemah Kitab Khozinatul Asror.pdf summary and review
    -(2011) Terjemah Kitab Khozinatul Asror.pdf in English translation
    -What is the meaning of (2011) Terjemah Kitab Khozinatul Asror.pdf
    -(2011) Terjemah Kitab Khozinatul Asror.pdf author and background
    -(2011) Terjemah Kitab Khozinatul Asror.pdf full text pdf
    -(2011) Terjemah Kitab Khozinatul Asror.pdf ebook download
    -Best sites to download (2011) Terjemah Kitab Khozinatul Asror.pdf
    -(2011) Terjemah Kitab Khozinatul Asror.pdf for Kindle
    -(2011) Terjemah Kitab Khozinatul Asror.pdf audiobook mp3
    -(2011) Terjemah Kitab Khozinatul Asror.pdf quotes and sayings
    -(2011) Terjemah Kitab Khozinatul Asror.pdf analysis and commentary
    -(2011) Terjemah Kitab Khozinatul Asror.pdf genre and themes
    -(2011) Terjemah Kitab Khozinatul Asror.pdf related books and authors
    -(2011) Terjemah Kitab Khozinatul Asror.pdf discussion questions and answers
    -(2011) Terjemah Kitab Khozinatul Asror.pdf study guide and notes
    -(2011) Terjemah Kitab Khozinatul Asror.pdf quiz and trivia
    -(2011) Terjemah Kitab Khozinatul Asror.pdf PDF to Word converter
    -How to print (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to edit (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to cite (2011) Terjemah Kitab Khozinatul Asror.pdf in APA format
    -How to compress (2011) Terjemah Kitab Khozinatul Asror.pdf file size
    -How to password protect (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to merge multiple PDF files into one (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to split (2011) Terjemah Kitab Khozinatul Asror.pdf into chapters or pages
    -How to rotate or flip (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to add watermark or signature to (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to extract images or text from (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to annotate or highlight (2011) Terjemah Kitab Khozinatul Asror.pdf
    -How to convert (2011) Terjemah Kitab Khozinatul Asror.pdf to JPG or PNG format
    -How to convert (2011) Terjemah Kitab Khozinatul Asror.pdf to EPUB or MOBI format
    -How to convert (2011) Terjemah Kitab Khozinatul Asror.pdf to HTML or XML format
    -How to convert (2011) Terjemah Kitab Khozinatul Asror.pdf to DOC or TXT format
    -How to convert (2011) Terjemah Kitab Khozinatul Asror.pdf to PPT or XLS format
    -How to convert JPG or PNG images to PDF format with the name of "(2011) Terjemah Kitab Khozinatul Asror"
    -How to create a PDF file with the name of "(2011) Terjemah Kitab Khozinatul Asror" from scratch
    -How to scan a document and save it as "(2011) Terjemah Kitab Khozinatul Asror).pdf"
    -How to sign a PDF document with the name of "(2011) Terjemah Kitab Khozinatul Asror"
    -How to fill out a PDF form with the name of "(2011) Terjemah Kitab Khozinatul Asror"
    -How to compare two PDF files with the name of "(2011) Terjemah Kitab Khozinatul Asror" and another one
    -How to organize PDF files with the name of "(2011) Terjemah Kitab Khozinatul Asror" and others in folders or categories
    -How to share PDF files with the name of "(2011) Terjemah Kitab Khozinatul Asro"r via email or cloud storage services
    -How to open PDF files with the name of "(2011) Terjemah Kitab Khozinatul Asro"r on different devices or platforms
    -How to view PDF files with the name of "(2011) Terjemah Kitab Khozinatul Asro"r in different modes or settings such as night mode, zoom, etc.
    -How to search for specific words or phrases in PDF files with the name of "(2011) Terjemah Kitab Khozinatul A

    -

    The secrets and meanings of Al-Quran

    -

    The main body of the book consists of several chapters that discuss the secrets and meanings of Al-Quran. The author uses various methods to explain these aspects, such as:

    -
      -
    • Linguistic analysis: He examines the words, sentences, grammar, rhetoric, style, eloquence, coherence, clarity, brevity, subtlety, etc. of Al-Quran.
    • -
    • Logical reasoning: He demonstrates the validity, consistency, soundness,
    • the readers to adjust the size of the text according to their preference. -
    • It has print function that allows the readers to print the text or save it as a PDF file.
    • -
    -

    How to download and read Khozinatul Asror PDF

    -

    The link and steps to download the PDF file

    -

    If you want to download and read Khozinatul Asror PDF, you can follow these simple steps:

    -
      -
    1. Go to this link: https://terjemahkitab.com/terjemah-khozinatul-asror/
    2. -
    3. Scroll down to the bottom of the page and click on the button that says "Download Disini".
    4. -
    5. Wait for a few seconds until a new page opens.
    6. -
    7. Click on the button that says "Download PDF" and choose the location where you want to save the file.
    8. -
    9. Open the file with any PDF reader software or application.
    10. -
    -

    The tips and suggestions to read and understand the PDF file

    -

    After you have downloaded and opened Khozinatul Asror PDF, you can read and understand it better by following these tips and suggestions:

    -
      -
    • Read the introduction first to get an overview of the book and its main theme and purpose.
    • -
    • Read each chapter or section in order and pay attention to the headings and subheadings that summarize the main points and topics.
    • -
    • Read each paragraph carefully and try to grasp its meaning and implication. If you encounter any difficult word or term, use the footnotes or a dictionary to understand it.
    • -
    • Use the hyperlinks to access the online sources of the verses of Al-Quran. Listen to their recitation or read their translation in different languages to appreciate their beauty and wisdom.
    • -
    • Use the bookmarks to mark the important points and topics that you want to remember or review later.
    • -
    • Use the search function to find any word or phrase that you are interested in or curious about. See how it is used or explained in different contexts.
    • -
    • Use the zoom function to adjust the size of the text according to your preference. You can also change the font style, color, or background if you want.
    • -
    • Use the print function to print the text or save it as a PDF file. You can also share it with your friends or family who might benefit from it.
    • -
    -

    Conclusion and FAQs

    -

    Khozinatul Asror is a book that reveals the secrets, meanings, benefits, and virtues of Al-Quran. It is written by Sayyid Muhammad Haqqi an-Nazili, a famous Islamic scholar and Sufi from Turkey. It is one of the most comprehensive and eloquent books on Al-Quran ever written. It is available in Arabic and Indonesian languages in PDF format. It can be downloaded for free from Terjemahkitab.com. It has many features that make it user-friendly and helpful for the readers. It is a treasure trove of knowledge and inspiration for anyone who wants to learn more about Al-Quran.

    -

    Here are some FAQs about Khozinatul Asror:

    - - - - - - - -
    QuestionAnswer
    What does Khozinatul Asror mean?Khozinatul Asror means "The Treasure of Secrets". It implies that Al-Quran contains many secrets that are hidden from the eyes of ordinary people.
    Who wrote Khozinatul Asror?Khozinatul Asror was written by Sayyid Muhammad Haqqi an-Nazili (1859-1917), a descendant of Prophet Muhammad (peace be upon him). He was a renowned scholar, teacher, poet, mystic, and reformer. He wrote more than 100 books on various Islamic topics.
    What is the main theme of Khozinatul Asror?The main theme of Khozinatul Asror is Al-Quran, the word of Allah revealed to Prophet Muhammad (peace be upon him). The author wanted to show the greatness and glory of Al-Quran to his readers. He wanted to inspire them to love Al-Quran, to recite it regularly, to understand its meanings, to follow its teachings, and to benefit from its blessings.
    How many chapters does Khozinatul Asror have?Khozinatul Asror has four main chapters: The author and background of Khozinatul Asror, The content and structure of Khozinatul Asror, The translation and publication of Khozinatul Asror, and How to download and read Khozinatul Asror PDF.
    How can I download Khozinatul Asror PDF?You can download Khozinatul Asror PDF for free from Terjemahkitab.com. You just need to go to this link: https://terjemahkitab.com/terjemah-khozinatul-asror/ and click on the button that says "Download Disini". Then you can save the file in your device and open it with any PDF reader software or application.
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Hun Language Pack 15 Updates Patches and Fixes.md b/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Hun Language Pack 15 Updates Patches and Fixes.md deleted file mode 100644 index 883899bcdf918953d27eff88699c4e2fa142d025..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Hun Language Pack 15 Updates Patches and Fixes.md +++ /dev/null @@ -1,109 +0,0 @@ - -

    Autodata 3.38 Hun Language Pack 15: A Comprehensive Guide for Car Repair and Maintenance

    -

    If you are a car service professional or a car enthusiast, you probably know how important it is to have accurate and up-to-date information about various car models and systems. You need a reliable program that can help you diagnose, repair, and maintain cars of different brands and years.

    -

    One of the most popular programs for car services is Autodata 3.38, which contains information about injection systems, timing belts, air conditioners, airbags, ABS, and other systems of European cars. It also provides wiring diagrams, node layouts, standard hours, and parameters for adjusting the toe-in.

    -

    autodata 3.38 hun language pack 15


    Download Zip ✸✸✸ https://tinourl.com/2uL3pW



    -

    But what if you want to use Autodata 3.38 in Hungarian? That's where Autodata 3.38 hun language pack 15 comes in handy. This is a special package that allows you to change the interface language of Autodata 3.38 to Hungarian.

    -

    In this article, we will explain what Autodata 3.38 is, what features and benefits it offers, how to install it on your computer, and how to use it effectively. We will also show you how to download and install Autodata 3.38 hun language pack 15, how to use it with Autodata 3.38, and how to fix some common issues that may arise.

    -

    By the end of this article, you will have a comprehensive guide on how to use Autodata 3.38 hun language pack 15 for car repair and maintenance.

    -

    What is Autodata 3.38?

    -

    Autodata 3.38 is a program that contains a wealth of information about various car models and systems. It was released in 2010 by Autodata Limited, a UK-based company that specializes in automotive data and software solutions.

    -

    Autodata 3.38 is designed for car services, mechanics, technicians, and enthusiasts who want to have access to detailed and accurate data about cars of different brands and years.

    -

    Features and benefits of Autodata 3.38

    -

    Some of the features and benefits of Autodata 3.38 are:

    -
      -
    • It covers over 17,000 models from over 80 manufacturers.
    • -
    • It contains information about injection systems for gasoline and some diesel engines (PINDATA), as well as parameters for adjusting the toe-in.
    • -
    • It provides wiring diagrams and node layouts for various electrical components and circuits.
    • -
    • It offers standard hours for labor operations and service intervals for different car models.
    • -
    • It includes information about timing belts and chains, air conditioners, airbags, ABS, steering angle sensors, tire pressure monitoring systems, etc.
    • -
    • It has a user-friendly interface that allows you to search by model, engine code, system type, or component name.
    • -
    • It supports multiple languages, including English, French, German, Spanish, Italian, Portuguese, Dutch, Swedish, Norwegian, Finnish, Danish, Greek, Polish, Hungarian, Czech, Turkish, Romanian, Bulgarian , Russian , etc.
    • -
    -

    System requirements and installation of Autodata 3.38

    -

To run Autodata 3.38 on your computer, you need the following system requirements:

    -

    autodata 3.38 magyar nyelvi csomag 15
    -autodata 3.38 hungarian language pack download
    -autodata 3.38 hun nyelv telepítése
    -autodata 3.38 magyarítás letöltés
    -autodata 3.38 hun language pack install
    -autodata 3.38 hungarian language pack free
    -autodata 3.38 magyar nyelvű verzió
    -autodata 3.38 hun nyelvi csomag crack
    -autodata 3.38 magyar nyelv beállítása
    -autodata 3.38 hun language pack torrent
    -autodata 3.38 hungarian language pack full
    -autodata 3.38 magyar nyelvű változat
    -autodata 3.38 hun nyelvi csomag letöltése ingyen
    -autodata 3.38 magyarítás ingyen
    -autodata 3.38 hun language pack activation
    -autodata 3.38 hungarian language pack serial
    -autodata 3.38 magyar nyelvű program
    -autodata 3.38 hun nyelvi csomag keygen
    -autodata 3.38 magyar nyelvű telepítő
    -autodata 3.38 hun language pack rar
    -autodata 3.38 hungarian language pack iso
    -autodata 3.38 magyar nyelvű használati utasítás
    -autodata 3.38 hun nyelvi csomag patch
    -autodata 3.38 magyarítás online
    -autodata 3.38 hun language pack zip
    -autodata 3.38 hungarian language pack mega
    -autodata 3.38 magyar nyelvű frissítés
    -autodata 3.38 hun nyelvi csomag ncore
    -autodata 3.38 magyarítás windows 10
    -autodata 3.38 hun language pack exe
    -autodata 3.38 hungarian language pack google drive
    -autodata 3.38 magyar nyelvű crackelt verzió
    -autodata 3.38 hun nyelvi csomag mediafire
    -autodata 3.38 magyarítás windows 7
    -autodata 3.38 hun language pack setup
    -autodata 3.38 hungarian language pack uploaded
    -autodata 3.38 magyar nyelvű szoftver
    -autodata 3.38 hun nyelvi csomag rapidshare
    -autodata 3.38 magyarítás windows xp
    -autodata 3.38 hun language pack license key
    -autodata 3.38 hungarian language pack zippyshare
    -autodata 3.38 magyar nyelvű ingyenes letöltés
    -autodata 3.38 hun nyelvi csomag filefactory
    -autodata 3.38 magyarítás windows vista
    -autodata 3.38 hun language pack registration code
    -autodata 3.38 hungarian language pack depositfiles
    -autodata 3.38 magyar nyelvű torrent letöltés
    -autodata 3.38 hun nyelvi csomag hotfile
    -autodata 3.38 magyarítás windows server
    -autodata 3.38 hun language pack product key

    -
      -
    • Windows XP or Windows 7 operating system
    • -
    • Pentium III processor or higher
    • -
    • 256 MB RAM or more
    • -
    • 1 GB free hard disk space or more
    • -
    • DVD-ROM drive
    • -
    • 1024x768 screen resolution or higher
    • -
    -

To install Autodata 3.38 on your computer, you need to follow these steps:

    -
      -
    1. Insert the DVD-ROM into your drive and run the setup.exe file.
    2. -
    3. Select the language of installation and click Next.
    4. -
    5. Accept the license agreement and click Next.
    6. -
    7. Select the destination folder where you want to install Autodata 3.38 and click Next.
    8. -
    9. Select the components that you want to install (full installation is recommended) and click Next.
    10. -
    11. Select the start menu folder where you want to create shortcuts for Autodata 3.38 and click Next.
    12. -
    13. Select whether you want to create a desktop icon for Autodata 3.38 and click Next.
    14. -
    15. Click Install to start the installation process.
    16. -
    17. Wait until the installation is completed and click Finish.
    18. -
    -

    You can now run Autodata 3.38 from your start menu or desktop icon.

    -

    What is Autodata 3.38 hun language pack 15?

    -

Autodata 3.38 hun language pack 15 is a special package that allows you to change the interface language of Autodata 3.38 to Hungarian.


    -


    -


    -


    -


    -


    -


    -


    -


    -


    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Adobe Lightroom For Mac Crack [WORK].md b/spaces/raedeXanto/academic-chatgpt-beta/Download Adobe Lightroom For Mac Crack [WORK].md deleted file mode 100644 index 83060b05c7a8dac237478290d895e1e7e38ce0fe..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Adobe Lightroom For Mac Crack [WORK].md +++ /dev/null @@ -1,27 +0,0 @@ - -

    How to Download Adobe Lightroom For Mac Crack in 2023

    -

    Adobe Lightroom is a powerful photo editing software that helps you bring out the best in your photographs, whether you are perfecting one image, searching for ten, processing hundreds, or organizing thousands. It also offers a cloud service that allows you to access your photos from any device and share them in various ways.

    -

    Download Adobe Lightroom For Mac Crack


    Download File ····· https://tinourl.com/2uL3cJ



    -

    However, Adobe Lightroom is not a free software. You need to pay a monthly subscription fee to use it as part of the Creative Cloud plan. If you are looking for a way to download Adobe Lightroom for Mac crack for free, you might be tempted by some websites that claim to offer cracked versions of the software. But are they safe and reliable?

    -

    The Risks of Downloading Adobe Lightroom For Mac Crack

    -

    Downloading Adobe Lightroom for Mac crack from unknown sources can expose you to several risks, such as:

    -
      -
    • Viruses and malware: Some websites that offer cracked software may contain malicious files that can infect your Mac and compromise your security and privacy. You may end up losing your data, personal information, or even money.
    • -
    • Legal issues: Downloading cracked software is illegal and violates the terms of use of Adobe. You may face legal consequences if you are caught using pirated software.
    • -
    • Poor performance: Cracked software may not work properly or have some features missing or disabled. You may experience crashes, errors, or compatibility issues with your Mac or other applications.
    • -
    • No updates or support: Cracked software does not receive any updates or support from Adobe. You may miss out on new features, bug fixes, or security patches. You may also not be able to access the cloud service or other online features of Adobe Lightroom.
    • -
    -

    Therefore, downloading Adobe Lightroom for Mac crack is not worth the risk. You may end up damaging your Mac or facing legal troubles.

    -

    -

    The Best Way to Download Adobe Lightroom For Mac in 2023

    -

    The best way to download Adobe Lightroom for Mac in 2023 is to use the official website of Adobe. You can download Adobe Lightroom Classic as part of the Creative Cloud plan for only $9.99/month with Photoshop included as part of the photography package[^3^]. You can also try it for free for 7 days before you decide to buy it.

    -

    By downloading Adobe Lightroom from the official website, you can enjoy the following benefits:

    -
      -
    • Safe and reliable: You can download the software without any viruses or malware. You can also trust that the software is authentic and original.
    • -
    • Legal and ethical: You can download the software without violating any laws or terms of use. You can also support the developers who work hard to create and improve the software.
    • -
    • High performance: You can download the latest version of the software that works smoothly and efficiently on your Mac. You can also access all the features and functions of the software without any limitations.
    • -
    • Updates and support: You can download regular updates and patches that enhance the functionality and security of the software. You can also access the cloud service and other online features of Adobe Lightroom. You can also contact customer support if you have any issues or questions.
    • -
    -

    Therefore, downloading Adobe Lightroom from the official website is the best way to download Adobe Lightroom for Mac in 2023. You can get the most out of your photo editing experience with this amazing software.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/monotonic_align/monotonic_align/__init__.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/monotonic_align/monotonic_align/__init__.py deleted file mode 100644 index 47a4dbf3177302af6b8e7d08b0b78343b1329efa..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/monotonic_align/monotonic_align/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -import pkg_resources - -__version__ = pkg_resources.get_distribution("monotonic_align").version - -from monotonic_align.mas import * diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/models.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/models.py deleted file mode 100644 index be51fa51407e6ce1daaee5e8d090f6acdbee0db9..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/hifi_gan/models.py +++ /dev/null @@ -1,403 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - 
self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock1 if h.resblock == "1" else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - 
-class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ] - ) - self.meanpools = nn.ModuleList( - [AvgPool1d(4, 2, padding=2), AvgPool1d(4, 2, padding=2)] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/rajistics/library_metrics_forecasting/README.md b/spaces/rajistics/library_metrics_forecasting/README.md deleted file mode 100644 index b381e08ae56fd26c32315fa437d1cbaf69e372fc..0000000000000000000000000000000000000000 --- a/spaces/rajistics/library_metrics_forecasting/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Library_metrics_forecasting -emoji: 🐨 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/raul-padua/Image-Caption/app.py b/spaces/raul-padua/Image-Caption/app.py deleted file mode 100644 index 31a6c3e116aa424cad04e55b38a57bdf519c7fc0..0000000000000000000000000000000000000000 --- a/spaces/raul-padua/Image-Caption/app.py +++ /dev/null @@ -1,32 +0,0 @@ -from transformers import pipeline -import gradio as gr -import base64 -import io - -get_completion = pipeline("image-to-text",model="Salesforce/blip-image-captioning-large") - -def summarize(input): - output = get_completion(input) - return 
output[0]['generated_text'] - -def image_to_base64_str(pil_image): - byte_arr = io.BytesIO() - pil_image.save(byte_arr, format='PNG') - byte_arr = byte_arr.getvalue() - return str(base64.b64encode(byte_arr).decode('utf-8')) - -def captioner(image): - #base64_image = image_to_base64_str(image) - result = get_completion(image) - return result[0]['generated_text'] - -gr.close_all() -demo = gr.Interface(fn=captioner, - inputs=[gr.Image(label="Upload image", type="pil")], - outputs=[gr.Textbox(label="Caption")], - title="Image Captioning Application", - description="Caption the image you'd like to upload", - allow_flagging="never", - examples=["fin1.jpeg", "fin2.jpeg", "fin3.png"]) - -demo.launch() \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Axialis Cursorworkshop 6.33 Keygen Torrent __FULL__.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Axialis Cursorworkshop 6.33 Keygen Torrent __FULL__.md deleted file mode 100644 index ec762483cac2acecfa50e567ebd12313cbf961c3..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Axialis Cursorworkshop 6.33 Keygen Torrent __FULL__.md +++ /dev/null @@ -1,11 +0,0 @@ -

    axialis cursorworkshop 6.33 keygen torrent


    DOWNLOAD ····· https://urlgoal.com/2uCJUt



    - -LINK UPDATED [08/05/16] !!! =D MEGA: . In this section you can download the game Cities XL 2011 via torrent, in Russian, absolutely free. -Cities XL is a city builder with strategy and RPG elements. -You must build. -Home Cities XL 2011 + DLC -Download Cities XL 2011 + DLC via torrent for free, without registration. -Available patches: 1.02.5, 1.03, 1.04, 1.05, 1.06, 1.07, 1.1.0, 1.1.0.0, 1.1.0.0.1
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fluke Smartview 3.2.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fluke Smartview 3.2.md deleted file mode 100644 index 0cf8280916633b43eba61de30efbccfd83038853..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fluke Smartview 3.2.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fluke Smartview 3.2


    DOWNLOAD ····· https://urlgoal.com/2uCLCN



    - -Fluke Smartview 3.2. Related searches: fluke smartview, fluke smartview manual, fluke smartview report templates, fluke smartview for mac, fluke smartview 4.3 ...
    -
    -
    -

    diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/rfp.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/rfp.py deleted file mode 100644 index 6976f4daf25a04f63f7570ec7ca7633c50fc725d..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/necks/rfp.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import constant_init, xavier_init -from mmcv.runner import BaseModule, ModuleList - -from ..builder import NECKS, build_backbone -from .fpn import FPN - - -class ASPP(BaseModule): - """ASPP (Atrous Spatial Pyramid Pooling) - - This is an implementation of the ASPP module used in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of channels produced by this module - dilations (tuple[int]): Dilations of the four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - dilations=(1, 3, 6, 1), - init_cfg=dict(type='Kaiming', layer='Conv2d')): - super().__init__(init_cfg) - assert dilations[-1] == 1 - self.aspp = nn.ModuleList() - for dilation in dilations: - kernel_size = 3 if dilation > 1 else 1 - padding = dilation if dilation > 1 else 0 - conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=1, - dilation=dilation, - padding=padding, - bias=True) - self.aspp.append(conv) - self.gap = nn.AdaptiveAvgPool2d(1) - - def forward(self, x): - avg_x = self.gap(x) - out = [] - for aspp_idx in range(len(self.aspp)): - inp = avg_x if (aspp_idx == len(self.aspp) - 1) else x - out.append(F.relu_(self.aspp[aspp_idx](inp))) - out[-1] = out[-1].expand_as(out[-2]) - out = torch.cat(out, dim=1) - return out - - -@NECKS.register_module() -class RFP(FPN): - """RFP (Recursive Feature Pyramid) - - This is an implementation of RFP in `DetectoRS - `_. Different from standard FPN, the - input of RFP should be multi level features along with origin input image - of backbone. - - Args: - rfp_steps (int): Number of unrolled steps of RFP. - rfp_backbone (dict): Configuration of the backbone for RFP. - aspp_out_channels (int): Number of output channels of ASPP module. - aspp_dilations (tuple[int]): Dilation rates of four branches. - Default: (1, 3, 6, 1) - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - rfp_steps, - rfp_backbone, - aspp_out_channels, - aspp_dilations=(1, 3, 6, 1), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super().__init__(init_cfg=init_cfg, **kwargs) - self.rfp_steps = rfp_steps - # Be careful! Pretrained weights cannot be loaded when use - # nn.ModuleList - self.rfp_modules = ModuleList() - for rfp_idx in range(1, rfp_steps): - rfp_module = build_backbone(rfp_backbone) - self.rfp_modules.append(rfp_module) - self.rfp_aspp = ASPP(self.out_channels, aspp_out_channels, - aspp_dilations) - self.rfp_weight = nn.Conv2d( - self.out_channels, - 1, - kernel_size=1, - stride=1, - padding=0, - bias=True) - - def init_weights(self): - # Avoid using super().init_weights(), which may alter the default - # initialization of the modules in self.rfp_modules that have missing - # keys in the pretrained checkpoint. 
- for convs in [self.lateral_convs, self.fpn_convs]: - for m in convs.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - for rfp_idx in range(self.rfp_steps - 1): - self.rfp_modules[rfp_idx].init_weights() - constant_init(self.rfp_weight, 0) - - def forward(self, inputs): - inputs = list(inputs) - assert len(inputs) == len(self.in_channels) + 1 # +1 for input image - img = inputs.pop(0) - # FPN forward - x = super().forward(tuple(inputs)) - for rfp_idx in range(self.rfp_steps - 1): - rfp_feats = [x[0]] + list( - self.rfp_aspp(x[i]) for i in range(1, len(x))) - x_idx = self.rfp_modules[rfp_idx].rfp_forward(img, rfp_feats) - # FPN forward - x_idx = super().forward(x_idx) - x_new = [] - for ft_idx in range(len(x_idx)): - add_weight = torch.sigmoid(self.rfp_weight(x_idx[ft_idx])) - x_new.append(add_weight * x_idx[ft_idx] + - (1 - add_weight) * x[ft_idx]) - x = x_new - return x diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/box_loss.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/box_loss.py deleted file mode 100644 index bf7c7e527723cf3e0d58f5c944e69e264ecd392c..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/box_loss.py +++ /dev/null @@ -1,113 +0,0 @@ -# borrow from https://github.com/Zzh-tju/CIoU/blob/master/layers/modules/multibox_loss.py - -import torch, math - - - -def ciou(bboxes1, bboxes2): - bboxes1 = torch.sigmoid(bboxes1) - bboxes2 = torch.sigmoid(bboxes2) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - cious = torch.zeros((rows, cols)) - if rows * cols == 0: - return cious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - cious = torch.zeros((cols, rows)) - exchange = True - w1 = torch.exp(bboxes1[:, 2]) - h1 = torch.exp(bboxes1[:, 3]) - w2 = torch.exp(bboxes2[:, 2]) - h2 = torch.exp(bboxes2[:, 3]) - area1 = w1 * h1 - area2 = w2 * h2 - center_x1 = bboxes1[:, 0] - center_y1 = bboxes1[:, 1] - center_x2 = bboxes2[:, 0] - center_y2 = bboxes2[:, 1] - - inter_l = torch.max(center_x1 - w1 / 2,center_x2 - w2 / 2) - inter_r = torch.min(center_x1 + w1 / 2,center_x2 + w2 / 2) - inter_t = torch.max(center_y1 - h1 / 2,center_y2 - h2 / 2) - inter_b = torch.min(center_y1 + h1 / 2,center_y2 + h2 / 2) - inter_area = torch.clamp((inter_r - inter_l),min=0) * torch.clamp((inter_b - inter_t),min=0) - - c_l = torch.min(center_x1 - w1 / 2,center_x2 - w2 / 2) - c_r = torch.max(center_x1 + w1 / 2,center_x2 + w2 / 2) - c_t = torch.min(center_y1 - h1 / 2,center_y2 - h2 / 2) - c_b = torch.max(center_y1 + h1 / 2,center_y2 + h2 / 2) - - inter_diag = (center_x2 - center_x1)**2 + (center_y2 - center_y1)**2 - c_diag = torch.clamp((c_r - c_l),min=0)**2 + torch.clamp((c_b - c_t),min=0)**2 - - union = area1+area2-inter_area - u = (inter_diag) / c_diag - iou = inter_area / union - v = (4 / (math.pi ** 2)) * torch.pow((torch.atan(w2 / h2) - torch.atan(w1 / h1)), 2) - with torch.no_grad(): - S = (iou>0.5).float() - alpha= S*v/(1-iou+v) - cious = iou - u - alpha * v - cious = torch.clamp(cious,min=-1.0,max = 1.0) - if exchange: - cious = cious.T - return 1-cious - -def diou(bboxes1, bboxes2): - bboxes1 = torch.sigmoid(bboxes1) - bboxes2 = torch.sigmoid(bboxes2) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - cious = torch.zeros((rows, cols)) - if 
rows * cols == 0: - return cious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - cious = torch.zeros((cols, rows)) - exchange = True - w1 = torch.exp(bboxes1[:, 2]) - h1 = torch.exp(bboxes1[:, 3]) - w2 = torch.exp(bboxes2[:, 2]) - h2 = torch.exp(bboxes2[:, 3]) - area1 = w1 * h1 - area2 = w2 * h2 - center_x1 = bboxes1[:, 0] - center_y1 = bboxes1[:, 1] - center_x2 = bboxes2[:, 0] - center_y2 = bboxes2[:, 1] - - inter_l = torch.max(center_x1 - w1 / 2,center_x2 - w2 / 2) - inter_r = torch.min(center_x1 + w1 / 2,center_x2 + w2 / 2) - inter_t = torch.max(center_y1 - h1 / 2,center_y2 - h2 / 2) - inter_b = torch.min(center_y1 + h1 / 2,center_y2 + h2 / 2) - inter_area = torch.clamp((inter_r - inter_l),min=0) * torch.clamp((inter_b - inter_t),min=0) - - c_l = torch.min(center_x1 - w1 / 2,center_x2 - w2 / 2) - c_r = torch.max(center_x1 + w1 / 2,center_x2 + w2 / 2) - c_t = torch.min(center_y1 - h1 / 2,center_y2 - h2 / 2) - c_b = torch.max(center_y1 + h1 / 2,center_y2 + h2 / 2) - - inter_diag = (center_x2 - center_x1)**2 + (center_y2 - center_y1)**2 - c_diag = torch.clamp((c_r - c_l),min=0)**2 + torch.clamp((c_b - c_t),min=0)**2 - - union = area1+area2-inter_area - u = (inter_diag) / c_diag - iou = inter_area / union - dious = iou - u - dious = torch.clamp(dious,min=-1.0,max = 1.0) - if exchange: - dious = dious.T - return 1-dious - - -if __name__ == "__main__": - x = torch.rand(10, 4) - y = torch.rand(10,4) - import ipdb;ipdb.set_trace() - cxy = ciou(x, y) - dxy = diou(x, y) - print(cxy.shape, dxy.shape) - import ipdb; ipdb.set_trace() \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/CRACK LabelJoy 7.0.0.611 Server [Multilingual.md b/spaces/rorallitri/biomedical-language-models/logs/CRACK LabelJoy 7.0.0.611 Server [Multilingual.md deleted file mode 100644 index db1a797c04e6905525bf42b48407011f8e73560a..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/CRACK LabelJoy 7.0.0.611 Server [Multilingual.md +++ /dev/null @@ -1,71 +0,0 @@ -
    -

    How to Crack LabelJoy 7.0.0.611 Server [Multilingual]

    -

    LabelJoy is powerful, easy-to-use software that allows you to create and print labels, envelopes, badges, barcodes, and more. It has a user-friendly interface that lets you design and customize your labels with various elements, such as texts, images, logos, QR codes, etc. LabelJoy also supports connecting to external data sources, such as Excel, Access, Outlook, SQL Server, and more.

    -

    CRACK LabelJoy 7.0.0.611 Server [Multilingual]


    DOWNLOAD ✦✦✦ https://tinurll.com/2uzoiM



    -

    However, LabelJoy is not free software. You need to purchase a license to use it without limitations or watermarks. The license price depends on the edition you choose: Light, Basic, Full, or Server. The Server edition is the most expensive and advanced one, as it allows you to install LabelJoy on multiple computers and share the same license key.

    -

    Some people may want to crack LabelJoy 7.0.0.611 Server [Multilingual] to get the full version of the software without paying for it. Cracking LabelJoy 7.0.0.611 Server [Multilingual] means modifying some files and registry entries that are related to the activation and licensing of the software. The goal is to bypass the verification and validation mechanisms that check if the software is genuine and authorized.

    -

    However, cracking LabelJoy 7.0.0.611 Server [Multilingual] is not a good idea. It can expose you to many risks and problems that can outweigh any potential benefits. In this article, we will explain why you should avoid cracking LabelJoy 7.0.0.611 Server [Multilingual], and what are some better alternatives to get the full version of the software legally and safely.

    -

    Why You Should Avoid Cracking LabelJoy 7.0.0.611 Server [Multilingual]

    -

    Cracking LabelJoy 7.0.0.611 Server [Multilingual] is not only unethical, but also dangerous. Here are some of the reasons why you should avoid cracking LabelJoy 7.0.0.611 Server [Multilingual]:

    -

    -
      -
    • It violates LabelJoy's terms of service and copyright laws. You could face legal consequences if you are caught using pirated software.
    • -
    • It may contain viruses, malware, spyware, or ransomware that can harm your computer and compromise your personal data and privacy.
    • -
    • It may not work properly or at all. You could experience errors, crashes, glitches, or compatibility issues with your system or other software.
    • -
    • It may not receive updates or support from LabelJoy. You could miss out on new features, bug fixes, security patches, or customer service.
    • -
    • It may damage your reputation and credibility as a professional or enthusiast who uses labels for various purposes.
    • -
    -

    What are Some Better Alternatives to Cracking LabelJoy 7.0.0.611 Server [Multilingual]

    -

    Instead of cracking LabelJoy 7.0.0.611 Server [Multilingual], there are some better alternatives to get the full version of the software legally and safely. Here are some of them:

    -
      -
    • Purchase a license from LabelJoy's official website or a trusted reseller. This is the best and most reliable way to get access to all the features and benefits of LabelJoy 7.0.0.611 Server [Multilingual]. You can choose between a one-time payment or a subscription plan that suits your budget and needs.
    • -
    • Use a free trial from LabelJoy's official website or a trusted reseller. This is a great way to test out LabelJoy 7.0.0.611 Server [Multilingual] before buying it. You can use the software for free for 15 days with no limitations or obligations.
    • -
    • Use free alternatives from reputable sources. There are some free software products that can perform similar functions as LabelJoy 7.0.0.611 Server [Multilingual]. For example, you can use Avery Design & Print instead of LabelJoy for creating and printing labels with various templates and designs.
    • -
    -

    Conclusion

    -

    In conclusion, cracking LabelJoy 7.0.0.611 Server [Multilingual] is not worth it. It can expose you to many risks and problems that can outweigh any potential benefits. Instead of cracking LabelJoy 7.0.0.611 Server [Multilingual], you should consider some better alternatives to get the full version of the software legally and safely.

    -

    LabelJoy 7.0.0.611 Server [Multilingual] is powerful, easy-to-use software that allows you to create and print labels, envelopes, badges, barcodes, and more with professional tools and features.

    -

    How to Create and Print Labels with LabelJoy 7.0.0.611 Server [Multilingual]

    -

    Once you have obtained the full version of LabelJoy 7.0.0.611 Server [Multilingual] legally and safely, you can start creating and printing labels with it. The process is not very difficult, but you need to follow some steps carefully. Here are the steps to create and print labels with LabelJoy 7.0.0.611 Server [Multilingual]:

    -
      -
    1. Launch LabelJoy 7.0.0.611 Server [Multilingual] from your desktop or start menu.
    2. Select the type of label you want to create from the New Label dialog box. You can choose from various categories, such as Standard, Custom, A4/A5, Envelopes, Badges, etc.
    3. Select the layout of your label from the Layout dialog box. You can choose from various templates, or create your own custom layout.
    4. Design your label using the Label Editor window. You can add various elements to your label, such as texts, images, logos, QR codes, barcodes, etc. You can also customize the properties of each element, such as font, color, size, alignment, rotation, etc.
    5. Connect your label to an external data source if you want to print multiple labels with different data. You can connect to various types of data sources, such as Excel, Access, Outlook, SQL Server, etc.
    6. Preview your label using the Print Preview window. You can check how your label will look when printed, and make any adjustments if needed.
    7. Print your label using the Print dialog box. You can select your printer settings, such as paper size, orientation, quality, copies, etc.
    -

    Congratulations! You have successfully created and printed labels with LabelJoy 7.0.0.611 Server [Multilingual]. You can now use your labels for various purposes.
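    If you keep your label data in a spreadsheet before connecting it in the "external data source" step above, it can help to sanity-check that file first. The sketch below is only an illustration and is not part of LabelJoy itself: the file name labels.xlsx and its column names are hypothetical, and LabelJoy reads the spreadsheet directly through its own data-source dialog rather than through Python.

```python
# Hypothetical pre-flight check of a spreadsheet used as a label data source.
# File name and column names are examples only; adjust them to your data.
import pandas as pd

df = pd.read_excel("labels.xlsx")  # reading .xlsx requires the openpyxl package

# One row per label; each column maps to a text or barcode field on the label.
print(df.columns.tolist())
print(f"{len(df)} labels will be generated, one per row.")

# Flag rows that would produce an empty barcode before sending the job to print.
missing = df[df["SKU"].isna()]
if not missing.empty:
    print("Rows without a SKU value:", missing.index.tolist())
```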

    -

    How to Crack LabelJoy 7.0.0.611 Server [Multilingual]

    -

    As we have explained before, cracking LabelJoy 7.0.0.611 Server [Multilingual] is not a good idea. It can expose you to many risks and problems that can outweigh any potential benefits. However, if you are still curious about how to crack LabelJoy 7.0.0.611 Server [Multilingual], we will give you a brief overview of the process. Please note that we do not endorse or recommend cracking LabelJoy 7.0.0.611 Server [Multilingual], and we are not responsible for any consequences that may arise from doing so.

    -

    The process of cracking LabelJoy 7.0.0.611 Server [Multilingual] involves modifying some files and registry entries that are related to the activation and licensing of the software. The goal is to bypass the verification and validation mechanisms that check if the software is genuine and authorized. To do this, you need to use some tools and programs that are designed to crack LabelJoy 7.0.0.611 Server [Multilingual]. These tools and programs are usually distributed on the internet by hackers or crackers who claim to have cracked LabelJoy 7.0.0.611 Server [Multilingual].

    -

    However, these tools and programs are not reliable or trustworthy. They may contain viruses, malware, spyware, or ransomware that can harm your computer and compromise your personal data and privacy. They may also not work properly or at all. You may experience errors, crashes, glitches, or compatibility issues with your system or other software. You may also not receive updates or support from LabelJoy. You may also face legal consequences if you are caught using pirated software.

    -

    Therefore, cracking LabelJoy 7.0.0.611 Server [Multilingual] is not worth it. It can expose you to many risks and problems that can outweigh any potential benefits. Instead of cracking LabelJoy 7.0.0.611 Server [Multilingual], you should consider some better alternatives to get the full version of the software legally and safely.

    -

    How to Use LabelJoy 7.0.0.611 Server [Multilingual]

    -

    Once you have purchased a license for LabelJoy 7.0.0.611 Server [Multilingual], you can start using it to create and print labels, envelopes, badges, barcodes, and more. The process is not very difficult, but you need to follow some steps carefully. Here are the steps to use LabelJoy 7.0.0.611 Server [Multilingual]:

    -
      -
    1. Launch LabelJoy 7.0.0.611 Server [Multilingual] from your desktop or start menu.
    2. Select the type of label you want to create from the New Label dialog box. You can choose from various categories, such as Standard, Custom, A4/A5, Envelopes, Badges, etc.
    3. Select the layout of your label from the Layout dialog box. You can choose from various templates or create your own custom layout.
    4. Design your label using the Label Editor window. You can add various elements to your label, such as texts, images, logos, QR codes, barcodes, etc. You can also customize the properties of each element, such as font, color, size, alignment, rotation, etc.
    5. Connect your label to an external data source if you want to print multiple labels with different data. You can connect to various types of data sources, such as Excel, Access, Outlook, SQL Server, etc.
    6. Preview your label using the Print Preview window. You can check how your label will look when printed and make any adjustments if needed.
    7. Print your label using the Print dialog box. You can select your printer settings, such as paper size, orientation, quality, copies, etc.
    -

    Congratulations! You have successfully used LabelJoy 7.0.0.611 Server [Multilingual] to create and print labels. You can now use your labels for various purposes.

    -

    How to Get Help and Support for LabelJoy 7.0.0.611 Server [Multilingual]

    -

    If you have any questions or issues regarding LabelJoy 7.0.0.611 Server [Multilingual], you can get help and support from LabelJoy's official website or a trusted reseller's website. Here are some of the ways you can get help and support for LabelJoy 7.0.0.611 Server [Multilingual]:

    -
      -
    • The Help menu of the software. You can access it by clicking on Help > LabelJoy Help or pressing F1 on your keyboard. You will find a comprehensive guide on how to use the software with tutorials, tips, troubleshooting, and more.
    • -
    • The Learn panel of the software. You can access it by clicking on Window > Learn or pressing Shift+F1 on your keyboard. You will find a series of interactive tutorials that will teach you the basics of the software in a step-by-step manner.
    • -
    • The FAQ section of the website. You can access it by clicking here. You will find answers to some of the most frequently asked questions about LabelJoy 7.0.0.611 Server [Multilingual].
    • -
    • The Contact Us section of the website. You can access it by clicking here. You will find various ways to contact LabelJoy's customer service team by email, phone, or chat.
    • -
    • The Forum section of the website. You can access it by clicking here. You will find a community of users and experts who can help you with your questions and issues regarding LabelJoy 7.0.0.611 Server [Multilingual].
    • -
    -

    Through these channels, you will be able to get help and support for LabelJoy 7.0.0.611 Server [Multilingual] whenever you need it.

    -

    Conclusion

    -

    In this article, we have discussed how to crack LabelJoy 7.0.0.611 Server [Multilingual], why you should avoid it, and what are some better alternatives to get the full version of the software legally and safely. We have also given you some tips and resources on how to use LabelJoy 7.0.0.611 Server [Multilingual] effectively and efficiently. We hope that this article has been helpful and informative for you.

    -

    LabelJoy 7.0.0.611 Server [Multilingual] is powerful, easy-to-use software that allows you to create and print labels, envelopes, badges, barcodes, and more with professional tools and features. However, cracking LabelJoy 7.0.0.611 Server [Multilingual] is not a good idea. It can expose you to many risks and problems that can outweigh any potential benefits. Instead of cracking LabelJoy 7.0.0.611 Server [Multilingual], you should consider some better alternatives to get the full version of the software legally and safely. By doing so, you can enjoy the software without any worries or regrets.

    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download and Enjoy Windows Media Player 12 Windows XP Without Validation.md b/spaces/rorallitri/biomedical-language-models/logs/Download and Enjoy Windows Media Player 12 Windows XP Without Validation.md deleted file mode 100644 index 516abb67e785b78d49725b9c3747de29290519c7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download and Enjoy Windows Media Player 12 Windows XP Without Validation.md +++ /dev/null @@ -1,6 +0,0 @@ -

    windows media player 12 windows xp free download without validation


    Download https://tinurll.com/2uzlVD



    -
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Free Huawei Modem Tool v3 3 A Powerful Tool to Reset Unlock Counter and Deactivate Autorun CD.md b/spaces/rorallitri/biomedical-language-models/logs/Free Huawei Modem Tool v3 3 A Powerful Tool to Reset Unlock Counter and Deactivate Autorun CD.md deleted file mode 100644 index 4aec8d95d8fb633322a5df1d0fa962131aaf417b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Free Huawei Modem Tool v3 3 A Powerful Tool to Reset Unlock Counter and Deactivate Autorun CD.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    With this router database we want to give users a simple tool that allows an instant search for routers and a fast way to find more information and the related downloads. We hope you like it - feel free to give us feedback and suggestions.

    -

    Each modem appears as a USB device to the host, in the same way as on standard PC boards and laptops, which allows existing code to be reused for regular operation. Control and monitoring of the microcontrollers is handled via the simtrace2 tool, and general power cycling of the modems is possible through the USB hub control functionality.
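    As a minimal sketch of what "existing code for regular operation" can look like on the host side, the snippet below opens the serial port that such a modem exposes over USB and sends two basic AT commands. It assumes the pyserial package and a hypothetical device path (/dev/ttyUSB0); the actual port name depends on the host and on how the modem enumerates.

```python
# Minimal sketch: query a USB modem over its AT-command serial port with pyserial.
# /dev/ttyUSB0 is an assumed device path; replace it with the port your modem exposes.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as port:
    port.write(b"AT\r\n")        # basic liveness check; a healthy modem answers "OK"
    print(port.read(64))

    port.write(b"AT+CGMI\r\n")   # standard command: report the manufacturer string
    print(port.read(128))
```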

    -

    Free Huawei Modem Tool v3 3


    Download File https://tinurll.com/2uzlzm



    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Grandeureditordungeondefendersdownloadpc[2].md b/spaces/rorallitri/biomedical-language-models/logs/Grandeureditordungeondefendersdownloadpc[2].md deleted file mode 100644 index 343dbfac599d72bd8c3cafec039c78bafcaab344..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Grandeureditordungeondefendersdownloadpc[2].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Luv Shuv Tey Chicken Khurana 2 free download in hindi


    Download https://tinurll.com/2uzot7



    -
    -
    -

    diff --git a/spaces/rstallman/AI-Contract-Sheet/app.py b/spaces/rstallman/AI-Contract-Sheet/app.py deleted file mode 100644 index c6a27cdcddd1964fab565631644e00e3af215aeb..0000000000000000000000000000000000000000 --- a/spaces/rstallman/AI-Contract-Sheet/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import openai -import gradio -import pandas as pd -from datetime import datetime -import gspread -from google.oauth2.service_account import Credentials -import requests -import json - -openai.api_key = "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" - -# Global variables -records = [] -credentials = Credentials.from_service_account_file("credentials.json", scopes=["https://www.googleapis.com/auth/spreadsheets"]) -client = gspread.authorize(credentials) -sheet = client.open_by_url("https://docs.google.com/spreadsheets/d/1aZibKvwrvOB-xx_PSp2YFyuaycHyVkJZW_unC21VUbA/edit?usp=sharing").sheet1 - -def get_user_ip(): - try: - response = requests.get("https://api.ipify.org?format=json") - data = json.loads(response.text) - return data["ip"] - except: - return None - -def ContractDraftGPT(passcode, user_input, user_name, user_email, is_fintech_startup, region, profession): - if not (user_input and user_name and user_email and is_fintech_startup and region and profession): - return "Please fill in all the input fields." - - ip_address = get_user_ip() - - # Check if passcode is required based on IP address usage count - if ip_address and any(record["IP Address"] == ip_address for record in records): - usage_count = sum(record["IP Address"] == ip_address for record in records) - if usage_count > 3 and not passcode: - return "A passcode is required for subsequent uses. Email contact@westminster.ai to request a passcode." - - messages = [] - - if not user_name: - return "Please enter your name." - - user_message = f"{user_input} [USER_IDENTITY: {user_name}]" - messages.append({"role": "user", "content": user_message}) - messages.append({"role": "system", "content": "You are a professional and experienced UK Lawyer who is drafting a legal document, a contract for your client based on his requirements. Make sure to mention and point precise legal rules, Acts of Parliament (please insert which section of which article of which law, be precise when you refer to Acts of Parliament), case law, and any pieces of secondary legislation. 
UK legislation."}) - - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages - ) - - ContractGPT_reply = response["choices"][0]["message"]["content"].strip() - messages.append({"role": "assistant", "content": ContractGPT_reply}) - - timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") - - record = { - "Passcode": passcode, - "Timestamp": timestamp, - "User Input": user_input, - "User Identity": user_name, - "User Email": user_email, - "IP Address": ip_address, - "Region": region, - "Profession": profession, - "Fintech": "Yes" if is_fintech_startup == "Yes" else "No", - "Contract Draft": ContractGPT_reply - } - records.append(record) - - sheet_data = pd.DataFrame(records) - rows_to_append = sheet_data.iloc[len(records) - 1:][["Passcode", "Timestamp", "User Input", "User Identity", "User Email", "IP Address", "Region", "Profession", "Fintech", "Contract Draft"]].values.tolist() - - if len(records) == 1: - header = ["Passcode", "Timestamp", "User Input", "User Identity", "User Email", "IP Address", "Region", "Profession", "Fintech", "Contract Draft"] - sheet.insert_row(header, 1) - - sheet.append_rows(rows_to_append, value_input_option='USER_ENTERED') - - return ContractGPT_reply - -def launch_interface(): - inputs = [ - gradio.inputs.Textbox(label="Organisation's Passcode (Optional)", placeholder="Enter your organisation's passcode"), - gradio.inputs.Textbox(label="Your Contract Draft Request", placeholder="Provide details for our AI lawyer to draft your contract..."), - gradio.inputs.Textbox(label="Your Name", placeholder="Enter your name"), - gradio.inputs.Textbox(label="Your Email", placeholder="Enter your email"), - gradio.inputs.Radio(label="Are you a fintech startup?", choices=["Yes", "No"]), - gradio.inputs.Dropdown(label="Select your region:", choices=["England", "Scotland", "Wales", "Northern Ireland"]), - gradio.inputs.Textbox(label="Profession", placeholder="Enter your profession") - ] - outputs = gradio.outputs.Textbox(label="Contract Draft") - - def validate_passcode(passcode, user_input, user_name, user_email, is_fintech_startup, region, profession): - valid_passcodes = { - "organization1": "risebybarclays", - "organization2": "launchlabrocks", - "organization3": "fintechalliance", - "organization4": "cisi-fintech", - "organization5": "city-bayes-alumni", - "organization6": "bar-council", - "organization7": "vcinnovations", - "organization8": "remi-slama", - "organization9": "dalton-latymer", - "organization10": "barrister", - "organization11": "r-muttukrishnan", - "organization12": "zhero" - } - - if not passcode: - return "Please provide a passcode. Email contact@westminster.ai to request a passcode." - - passcode = passcode.lower() # Convert the passcode to lowercase for case-insensitive comparison - - if passcode not in valid_passcodes.values(): - return "Incorrect passcode. Access denied. Email contact@westminster.ai to request a passcode." 
- - return ContractDraftGPT(passcode, user_input, user_name, user_email, is_fintech_startup, region, profession) - - interface = gradio.Interface(fn=validate_passcode, inputs=inputs, outputs=outputs, title="", description="") - interface.launch() - -if __name__ == "__main__": - launch_interface() diff --git a/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Adobe Presenter 9 Full Crack.md b/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Adobe Presenter 9 Full Crack.md deleted file mode 100644 index dcfdaca9f0b1ae61ee903d0a303e57b03b938482..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Adobe Presenter 9 Full Crack.md +++ /dev/null @@ -1,22 +0,0 @@ - -

    How to Create Interactive Presentations with Adobe Presenter 9

    -

    Adobe Presenter 9 is a software that helps you convert your PowerPoint slides into interactive content using out-of-the-box assets and quizzes. You can also record and edit video presentations, and publish them to various platforms such as HTML5, YouTube, Vimeo, or Adobe Connect.

    -

    In this article, we will show you how to use Adobe Presenter 9 to create engaging presentations that will captivate your audience and enhance your learning outcomes.

    -

    download adobe presenter 9 full crack


    DOWNLOAD ✶✶✶ https://gohhs.com/2uEznb



    -

    Step 1: Install Adobe Presenter 9

    -

    To use Adobe Presenter 9, you need to have PowerPoint 2010 or 2013 installed on your Windows computer. You can download Adobe Presenter 9 from the Adobe website [^1^]. You will need your Adobe Presenter serial number to complete the installation. If you don't have one, you can sign up for a free trial or purchase a subscription.

    -

    Step 2: Create or Open a PowerPoint Presentation

    -

    Once you have installed Adobe Presenter 9, you will see a new tab called Adobe Presenter on your PowerPoint ribbon. Click on it to access the Adobe Presenter features. You can create a new presentation from scratch, or open an existing one that you want to enhance with Adobe Presenter.

    -

    Step 3: Add Assets and Quizzes

    -

    Adobe Presenter 9 offers a variety of assets and quizzes that you can insert into your slides to make them more interactive and engaging. For example, you can add characters, scenarios, interactions, games, simulations, videos, audio, animations, and more. You can also create quizzes with different question types, such as multiple choice, true/false, matching, fill-in-the-blank, etc. You can customize the appearance and behavior of these elements using the Adobe Presenter properties panel.

    -

    Step 4: Record and Edit Video Presentations

    -

    If you want to add a personal touch to your presentation, you can record a video of yourself or your screen using the Adobe Presenter Video Creator tool. You can access it from the Adobe Presenter tab by clicking on Video. You can choose to record from your webcam, your screen, or both. You can also import existing videos from your computer or online sources. After recording, you can edit your video using the simplified four-button interface that lets you trim, cut, pan and zoom, adjust audio levels, add transitions and effects, etc.

    -

    Step 5: Publish and Share Your Presentation

    -

    When you are done creating your presentation, you can publish it to various formats and platforms using the Adobe Presenter Publish tool. You can access it from the Adobe Presenter tab by clicking on Publish. You can choose to publish your presentation as HTML5 for viewing on any device, as SWF for viewing on Flash-enabled browsers, as PDF for offline viewing and printing, as MP4 for uploading to video-sharing sites like YouTube or Vimeo, or as ZIP for uploading to learning management systems (LMS) like Moodle or Blackboard. You can also publish your presentation directly to Adobe Connect if you have an account.

    -

    After publishing your presentation, you can share it with your audience via email, social media, or embed code. You can also track and analyze the user progress and quiz results using the Adobe Presenter Reports tool.

    -

    -

    Conclusion

    -

    Adobe Presenter 9 is a powerful software that helps you create interactive presentations that will impress your audience and improve your learning outcomes. You can easily convert your PowerPoint slides into engaging content using out-of-the-box assets and quizzes. You can also record and edit video presentations using the simple and intuitive interface. You can publish and share your presentations to various formats and platforms with ease. You can also monitor and evaluate the user performance and feedback using the reporting tool.

    -

    If you want to learn more about Adobe Presenter 9, you can visit the Adobe website [^2^] or check out the What's new in Adobe Presenter 9

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Vector Clock Pro V2.20 Serial Key ((FULL)).md b/spaces/scedlatioru/img-to-music/example/Vector Clock Pro V2.20 Serial Key ((FULL)).md deleted file mode 100644 index 39e0d85d56a284891115f2c8a840836a726f34ac..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Vector Clock Pro V2.20 Serial Key ((FULL)).md +++ /dev/null @@ -1,12 +0,0 @@ -

    Vector Clock Pro v2.20 Serial Key


    Download File - https://gohhs.com/2uEApo



    - -Ax88x72a Windows XP Driver Vector Clock Pro v2.20 Mio Moov M300 Serial Key Update Cards Best bitcoin cloud mining companies ziddi aashiq bhojpuri movie . .. Download drivers For all models of Windows 7, 8, 1.. .. Mio Moov 300 Update cards Best companies for. . -Mio Moov 200 for Windows 7, Vista, XP, 2000 .. -Download from. .. Mio Moov 300 Pack cards The best companies for. -Mio Moov 200 for Windows XP, 7, Vista, 2000 .. -Update software, drivers and maps for Mio Moov. -Mio moov 200 maps update for windows 7 .. Download from the official site. -Software, driver and map update for Mio Moov. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/scikit-learn/blog-example/README.md b/spaces/scikit-learn/blog-example/README.md deleted file mode 100644 index c68bc1c2db851625c1c3c380148f0992ee995b86..0000000000000000000000000000000000000000 --- a/spaces/scikit-learn/blog-example/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Skops Blog Example -emoji: 📊 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sdeeas/ChuanhuChatGPT/modules/presets.py b/spaces/sdeeas/ChuanhuChatGPT/modules/presets.py deleted file mode 100644 index fe1938a80f81d29a010e72d796b8edc02cea4f9e..0000000000000000000000000000000000000000 --- a/spaces/sdeeas/ChuanhuChatGPT/modules/presets.py +++ /dev/null @@ -1,233 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." -API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # 错误信息的标准前缀 -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # 连接超时 -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # 读取超时 -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # 代理错误 -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL 错误 -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key 长度不足 51 位 -NO_INPUT_MSG = i18n("请输入对话内容。") # 未输入对话内容 -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # 本地运行的模型返回的账单信息 - -TIMEOUT_STREAMING = 60 # 流式对话时的超时时间 -TIMEOUT_ALL = 200 # 非流式对话时的超时时间 -ENABLE_STREAMING_OPTION = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("川虎Chat 🚀") - -CHUANHU_DESCRIPTION = i18n("由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发
    访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本") - -FOOTER = """
    {versions}
    """ - -APPEARANCE_SWITCHER = """ -
    -"""+ i18n("切换亮暗色主题") + """ - -
    -""" - -SUMMARIZE_PROMPT = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", - "xmchat", - "yuanai-1.0-base_10B", - "yuanai-1.0-translate", - "yuanai-1.0-dialog", - "yuanai-1.0-rhythm_poems", -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-qe", - "StableLM", - "MOSS", - "llama-7b-hf", - "llama-13b-hf", - "llama-30b-hf", - "llama-65b-hf", -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-0301": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#EBFAF2", - c100="#CFF3E1", - c200="#A8EAC8", - c300="#77DEA9", - c400="#3FD086", - c500="#02C160", - c600="#06AE56", - c700="#05974E", - c800="#057F45", - c900="#04673D", - c950="#2E5541", - name="small_and_beautiful", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f6f7f8", - # c100="#f3f4f6", - c100="#F2F2F2", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - # c900="#272727", - c900="#2B2B2B", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - # button_primary_background_fill="*primary_500", - button_primary_background_fill_dark="*primary_600", - # button_primary_background_fill_hover="*primary_400", - # button_primary_border_color="*primary_500", - button_primary_border_color_dark="*primary_600", - button_primary_text_color="wihte", - button_primary_text_color_dark="white", - button_secondary_background_fill="*neutral_100", - button_secondary_background_fill_hover="*neutral_50", - button_secondary_background_fill_dark="*neutral_900", - button_secondary_text_color="*neutral_800", - button_secondary_text_color_dark="white", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - # block_title_text_color="*primary_500", - block_title_background_fill_dark="*primary_900", - block_label_background_fill_dark="*primary_900", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py deleted file mode 100644 index 78e539250075d7fed2f349d05e3317dfe2c96804..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - 
self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/shikunl/prismer/prismer/demo_vis.py b/spaces/shikunl/prismer/prismer/demo_vis.py deleted file mode 100644 index 06341dba6fc47c51e505336c1f584d61293b426e..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/demo_vis.py +++ /dev/null @@ -1,161 +0,0 @@ -import glob -import os -import json -import torch -import random -import matplotlib.pyplot as plt -import numpy as np - -from utils import create_ade20k_label_colormap - -obj_label_map = torch.load('dataset/detection_features.pt')['labels'] -coco_label_map = torch.load('dataset/coco_features.pt')['labels'] -ade_color = create_ade20k_label_colormap() - -file_path = 'helpers/images' -expert_path = 'helpers/labels' -plt.ioff() - - -def get_label_path(file_name, expert_name, with_suffix=False): - file_suffix = '.png' if not with_suffix else '_.png' - label_name = ''.join(file_name.split('.')[:-1] + [file_suffix]) - label_path = os.path.join(expert_path, expert_name, label_name) - return label_path - - -def depth_prettify(file_name): - label_path = get_label_path(file_name, 'depth') - save_path = get_label_path(file_name, 'depth', True) - depth = plt.imread(label_path) - plt.imsave(save_path, depth, cmap='rainbow') - - -def obj_detection_prettify(file_name): - label_path = get_label_path(file_name, 'obj_detection') - save_path = get_label_path(file_name, 'obj_detection', True) - - rgb = plt.imread(file_name) - obj_labels = 
plt.imread(label_path) - obj_labels_dict = json.load(open(label_path.replace('.png', '.json'))) - - plt.imshow(rgb) - - num_objs = np.unique(obj_labels)[:-1].max() - plt.imshow(obj_labels, cmap='terrain', vmax=num_objs + 1 / 255., alpha=0.5) - - for i in np.unique(obj_labels)[:-1]: - obj_idx_all = np.where(obj_labels == i) - obj_idx = random.randint(0, len(obj_idx_all[0])) - x, y = obj_idx_all[1][obj_idx], obj_idx_all[0][obj_idx] - obj_name = obj_label_map[obj_labels_dict[str(int(i * 255))]] - plt.text(x, y, obj_name, c='white', horizontalalignment='center', verticalalignment='center') - - plt.axis('off') - plt.savefig(save_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -def seg_prettify(file_name): - label_path = get_label_path(file_name, 'seg_coco') - save_path = get_label_path(file_name, 'seg_coco', True) - - rgb = plt.imread(file_name) - seg_labels = plt.imread(label_path) - - plt.imshow(rgb) - - seg_map = np.zeros(list(seg_labels.shape) + [3], dtype=np.int16) - for i in np.unique(seg_labels): - seg_map[seg_labels == i] = ade_color[int(i * 255)] - - plt.imshow(seg_map, alpha=0.5) - - for i in np.unique(seg_labels): - obj_idx_all = np.where(seg_labels == i) - obj_idx = random.randint(0, len(obj_idx_all[0])) - x, y = obj_idx_all[1][obj_idx], obj_idx_all[0][obj_idx] - obj_name = coco_label_map[int(i * 255)] - plt.text(x, y, obj_name, c='white', horizontalalignment='center', verticalalignment='center') - - plt.axis('off') - plt.savefig(save_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -def ocr_detection_prettify(file_name): - label_path = get_label_path(file_name, 'ocr_detection') - save_path = get_label_path(file_name, 'ocr_detection', True) - - if os.path.exists(label_path): - rgb = plt.imread(file_name) - ocr_labels = plt.imread(label_path) - ocr_labels_dict = torch.load(label_path.replace('.png', '.pt')) - - plt.imshow(rgb) - plt.imshow((1 - ocr_labels) < 1, cmap='gray', alpha=0.8) - - for i in np.unique(ocr_labels)[:-1]: - text_idx_all = np.where(ocr_labels == i) - x, y = text_idx_all[1].mean(), text_idx_all[0].mean() - text = ocr_labels_dict[int(i * 255)]['text'] - plt.text(x, y, text, c='white', horizontalalignment='center', verticalalignment='center') - - plt.axis('off') - plt.savefig(save_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - else: - rgb = plt.imread(file_name) - ocr_labels = np.ones_like(rgb, dtype=np.float32()) - - plt.imshow(rgb) - plt.imshow(ocr_labels, cmap='gray', alpha=0.8) - - x, y = rgb.shape[1] / 2, rgb.shape[0] / 2 - plt.text(x, y, 'No text detected', c='black', horizontalalignment='center', verticalalignment='center') - - plt.axis('off') - plt.savefig(save_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -im_list = glob.glob(file_path + '/*.jpg') + glob.glob(file_path + '/*.png') + glob.glob(file_path + '/*.jpeg') - -# prettify labels first: -for i in range(len(im_list)): - depth_prettify(im_list[i]) - seg_prettify(im_list[i]) - ocr_detection_prettify(im_list[i]) - obj_detection_prettify(im_list[i]) - -pretty = {'depth': True, 'normal': False, 'edge': False, - 'obj_detection': True, 'ocr_detection': True, 'seg_coco': True} - -# plot expert labels -for im_path in im_list: - fig, axs = plt.subplots(1, 7, figsize=(20, 4)) - rgb = plt.imread(im_path) - axs[0].imshow(rgb) - axs[0].axis('off') - axs[0].set_title('RGB') - - for j in range(6): - label_name = list(pretty.keys())[j] - label_path = get_label_path(im_path, label_name, 
with_suffix=pretty[label_name]) - label = plt.imread(label_path) - if label_name != 'edge': - axs[j + 1].imshow(label) - else: - axs[j + 1].imshow(label, cmap='gray') - - axs[j + 1].axis('off') - axs[j + 1].set_title(label_name) - - caption_path = ''.join(im_path.split('.')[:-1] + ['.txt']) - with open(caption_path) as f: - caption = f.readlines()[0] - - plt.suptitle(caption) - plt.tight_layout() - -plt.show() diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/8 Ball Pool Mod APK Terbaru Garis Panjang dan Anti-Ban Tanpa Root.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/8 Ball Pool Mod APK Terbaru Garis Panjang dan Anti-Ban Tanpa Root.md deleted file mode 100644 index 267949137bb44f93bc45fe28deb882929688ca59..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/8 Ball Pool Mod APK Terbaru Garis Panjang dan Anti-Ban Tanpa Root.md +++ /dev/null @@ -1,142 +0,0 @@ -
    -

    Download 8 Ball Pool Mod APK Garis Panjang Tanpa Root

    -

    If you are a fan of pool games, you might have heard of 8 Ball Pool, one of the most popular and addictive pool games on Android. But did you know that you can download a modified version of the game that gives you some extra features and advantages, such as a longer guideline and unlimited coins? In this article, we will tell you what 8 Ball Pool is, what a mod APK is, how to download 8 Ball Pool mod APK garis panjang tanpa root, and what some alternatives to it are.

    -

    download 8 ball pool mod apk garis panjang tanpa root


    Download Filehttps://ssurll.com/2uNQ2b



    -

    What is 8 Ball Pool?

    -

    8 Ball Pool is a pool billiard game that is played on a table with six pockets, cue sticks, and sixteen balls (a cue ball and fifteen object balls). The object balls include seven solid-colored balls numbered 1 through 7, seven striped balls numbered 9 through 15, and the black 8 ball. The game can be played in single or multiplayer modes, online or offline, with different rooms and tables to choose from. The game also has a level system, a leaderboard, and a shop where you can customize your cue and table.

    -

    Features of 8 Ball Pool

    -

    Some of the features of 8 Ball Pool are:

    -
      -
    • You can challenge your friends or other players from around the world in PvP mode.
    • -
    • You can play in different tournaments and win exclusive prizes.
    • -
    • You can earn coins by winning matches and use them to buy new items in the shop.
    • -
    • You can improve your skills and rank up in the level system.
    • -
    • You can enjoy realistic graphics and physics in the game.
    • -
    -

    What is mod APK?

    -

    Mod APK is a modified version of an original APK (Android Package Kit), which is the file format used to distribute and install applications on Android devices. Mod APKs are created by hackers or developers who want to change or add some features to the original app. For example, mod APKs can unlock premium features, remove ads, increase coins or gems, make the game easier or harder, etc.

    -
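    Because an APK is really just a ZIP archive with a fixed layout, you can peek inside one before you install it. The short Python sketch below is only an illustration: the file name is a placeholder for whatever APK you happen to have, and the script simply lists the archive's contents with the standard zipfile module. A normal app package will show entries such as AndroidManifest.xml, one or more classes.dex files, resources.arsc, and a META-INF folder that holds the signature.

```python
import zipfile

# Placeholder path: point this at any APK file you want to inspect.
apk_path = "example.apk"

# An APK is a ZIP archive, so Python's standard library can open it directly.
with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist():
        # Expect entries like AndroidManifest.xml, classes.dex,
        # resources.arsc, and META-INF/ (the app's signing information).
        print(name)
```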

    Benefits and risks of mod APK

    -

    Some of the benefits of using mod APKs are:

    -
      -
    • You can access features that are not available in the original app.
    • -
    • You can save money by not having to pay for premium features or in-app purchases.
    • -
    • You can have more fun and enjoyment by playing the game with more options and possibilities.
    • -
    -

    However, there are also some risks involved in using mod APKs, such as:

    -

    How to make a long-line cheat for 8 ball pool with lulubox
    -Download 8 ball pool mod apk unlimited coins and cash 2023
    -Latest 8 ball pool long-line cheat app without root
    -Download 8 ball pool mod apk anti-ban and bounce line
    -How to install 8 ball pool mod apk with a long line on Android
    -Download 8 ball pool mod apk latest version 2023 free
    -Tips and tricks for playing 8 ball pool with a long line
    -Download 8 ball pool mod apk offline and online mode
    -How to get lots of coins and money in 8 ball pool mod apk
    -Download 8 ball pool mod apk unlimited cues and tables
    -Review of the 8 ball pool mod apk long line without root game
    -Download 8 ball pool mod apk with the most complete mega mod menu
    -How to deal with a banned 8 ball pool mod apk account
    -Download 8 ball pool mod apk max level and pro rank
    -How to play 8 ball pool mod apk with friends online
    -Download 8 ball pool mod apk no password and no verification
    -Features and advantages of 8 ball pool mod apk long line without root
    -Download 8 ball pool mod apk supporting all Android devices
    -How to update 8 ball pool mod apk long line without root
    -Download 8 ball pool mod apk hack all achievements unlocked
    -How to download and install 8 ball pool mod apk from opraentertainment.id[^3^]
    -Download 8 ball pool mod apk unlimited spins and scratches
    -How to use the lulubox app for the 8 ball pool long-line cheat[^1^] [^2^]
    -Download 8 ball pool mod apk unlimited chat pack and emojis
    -Advantages and disadvantages of playing 8 ball pool mod apk long line without root

    -
      -
    • You may violate the terms and conditions of the original app developer and get banned from using the app.
    • -
    • You may expose your device to malware or viruses that can harm your data or privacy.
    • -
    • You may lose your progress or account if the mod APK is not compatible with the original app or updates.
    • -
    • You may miss out on the updates and bug fixes that the original app developer provides.
    • -
    -

    How to download 8 Ball Pool mod APK garis panjang tanpa root?

    -

    If you want to download 8 Ball Pool mod APK garis panjang tanpa root, which means a longer guideline without rooting your device, you can use one of the following methods:

    -

    Using Lulubox app

    -

    Lulubox is an app that allows you to apply plugins to various games, including 8 Ball Pool. One of the plugins is the 8 Ball Pool long line plugin, which gives you a longer guideline to aim better. To use Lulubox, you need to follow these steps:

    -
      -
    1. Download and install Lulubox from its official website or a trusted source.
    2. -
    3. Open Lulubox and grant it the necessary permissions.
    4. -
    5. Find 8 Ball Pool in the list of games and tap on it.
    6. -
    7. Select the 8 Ball Pool long line plugin and enable it.
    8. -
    9. Tap on the launch button to start the game with the plugin applied.
    10. -
    -

    Note that you need to have the original 8 Ball Pool app installed on your device to use Lulubox. Also, you may need to update the plugin regularly to keep it working.

    -

    Using Game Guardian app

    -

    Game Guardian is an app that allows you to modify various aspects of games, such as speed, coins, gems, health, etc. You can also use it to get a longer guideline in 8 Ball Pool. To use Game Guardian, you need to follow these steps:

    -
      -
    1. Download and install Game Guardian from its official website or a trusted source.
    2. -
    3. Open Game Guardian and grant it the necessary permissions.
    4. -
    5. Open 8 Ball Pool and start a match.
    6. -
    7. Switch to Game Guardian and tap on the search icon.
    8. -
    9. Select the type of value as "Float" and enter "0.02" in the search box.
    10. -
    11. Tap on the search button and wait for the results.
    12. -
    13. Select all the results and change their value to "1000".
    14. -
    15. Switch back to 8 Ball Pool and enjoy the longer guideline.
    16. -
    -

    Note that you may need to repeat this process every time you start a new match. Also, you may need to root your device to use Game Guardian.

    -

    Using SAI app

    -

    SAI (Split APKs Installer) is an app that allows you to install split APKs (APKs that consist of multiple files) on your device. You can use it to install 8 Ball Pool mod APK garis panjang tanpa root, which is a split APK that contains the modified version of the game. To use SAI, you need to follow these steps:

    -
      -
    1. Download and install SAI from its official website or a trusted source.
    2. -
    3. Download 8 Ball Pool mod APK garis panjang tanpa root from a trusted source.
    4. -
    5. Open SAI and grant it the necessary permissions.
    6. -
    7. Tap on the "Install APKs" button and select the 8 Ball Pool mod APK file.
    8. -
    9. Tap on the "Select" button and wait for the installation to complete.
    10. -
    11. Open 8 Ball Pool mod APK and enjoy the longer guideline and unlimited coins.
    12. -
    -

    Note that you need to uninstall the original 8 Ball Pool app before installing the mod APK. Also, you may not be able to play online or update the game with the mod APK.

    -

    Alternatives to 8 Ball Pool mod APK

    -

    If you don't want to use 8 Ball Pool mod APK garis panjang tanpa root, or if you encounter any problems with it, you can try some alternatives that can also give you a better gaming experience. Here are some of them:

    -

    8 Ball Pool beta version

    -

    The beta version of 8 Ball Pool is an official version of the game that is used for testing new features and improvements before they are released to the public. You can join the beta program by following these steps:

    -
      -
    1. Go to [the Google Play Store page of 8 Ball Pool].
    2. -
    3. Scroll down and find the "Join the beta" section.
    4. -
    5. Tap on the "Join" button and wait for a few minutes.
    6. -
    7. You will see an update option for 8 Ball Pool. Tap on it and install the beta version of the game.
    8. -
    -

    The beta version of 8 Ball Pool may give you access to some features that are not available in the regular version, such as new cues, tables, modes, etc. However, keep in mind that the beta version may also have some bugs or errors that can affect your gameplay.

    -

    Other pool games

    -

    If you want to try some other pool games besides 8 Ball Pool, you can find many options on Google Play Store or other sources. Some of them are:

    | Name | Description |
    | --- | --- |
    | Pool Stars 3D | A 3D pool game that offers realistic graphics, physics, and sounds. You can play in different modes, such as 8 ball, 9 ball, snooker, etc. You can also customize your cue, table, and avatar. |
    | Billiards City | A modern arcade-style pool game that has simple and intuitive controls. You can play in different levels and challenges, and enjoy the stunning city views. You can also unlock new cues and tables as you progress. |
    | Pool Break 3D | A versatile pool game that supports various types of billiards and snooker games. You can play solo or online with other players. You can also adjust the difficulty, speed, and view of the game. |
    -

    Conclusion

    -

    In this article, we have discussed what 8 Ball Pool is, what a mod APK is, how to download 8 Ball Pool mod APK garis panjang tanpa root, and what some alternatives to it are. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    Summary of the article

    -

    Here are the main points of the article:

    -
      -
    • 8 Ball Pool is a popular and addictive pool game on Android.
    • -
    • Mod APK is a modified version of an original app that gives some extra features or advantages.
    • -
    • You can download 8 Ball Pool mod APK garis panjang tanpa root by using Lulubox, Game Guardian, or SAI apps.
    • -
    • You can also try 8 Ball Pool beta version or other pool games as alternatives to 8 Ball Pool mod APK.
    • -
    -

    FAQs

    -

    Here are some frequently asked questions about 8 Ball Pool mod APK garis panjang tanpa root:

    -
      -
    1. Q: Is 8 Ball Pool mod APK safe to use?
    2. -
    3. A: It depends on the source and the quality of the mod APK. Some mod APKs may be safe and harmless, while others may contain malware or viruses that can damage your device or data. Therefore, you should always download mod APKs from trusted sources and scan them with antivirus software before installing them.
    4. -
    5. Q: Is 8 Ball Pool mod APK legal to use?
    6. -
    7. A: It depends on the laws and regulations of your country or region. Some countries or regions may allow the use of mod APKs for personal or educational purposes, while others may prohibit or restrict them for violating the intellectual property rights of the original app developers. Therefore, you should always check the legal status of mod APKs in your area before using them.
    8. -
    9. Q: Can I play online with 8 Ball Pool mod APK?
    10. -
    11. A: It depends on the type and the compatibility of the mod APK. Some mod APKs may allow you to play online with other players who have the same mod APK, while others may prevent you from playing online or ban you from using the app. Therefore, you should always read the description and the reviews of the mod APK before downloading it.
    12. -
    13. Q: How can I update 8 Ball Pool mod APK?
    14. -
    15. A: It depends on the availability and the compatibility of the updates. Some mod APKs may provide updates regularly or automatically, while others may not have any updates at all. Also, some updates may be compatible with the mod APK, while others may not work or cause errors. Therefore, you should always check the update information and the compatibility of the mod APK before updating it.
    16. -
    17. Q: How can I uninstall 8 Ball Pool mod APK?
    18. -
    19. A: You can uninstall 8 Ball Pool mod APK by following these steps:
    20. -
        -
      • Go to your device settings and find the apps section.
      • -
      • Find 8 Ball Pool mod APK in the list of apps and tap on it.
      • -
      • Tap on the uninstall button and confirm your action.
      • -
      -

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AMP How to Monitor and Control Your Game Servers Remotely.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AMP How to Monitor and Control Your Game Servers Remotely.md deleted file mode 100644 index 0ce48878018e4eee7da3f4514bd98ef8995383e6..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/AMP How to Monitor and Control Your Game Servers Remotely.md +++ /dev/null @@ -1,75 +0,0 @@ -
    -

    How to Download and Enjoy AMP 3 Music Player

    -

    If you are looking for a simple yet powerful music player for your Windows or Android device, you might want to check out AMP 3. AMP 3 is a free and lightweight music player that supports various audio and video formats, offers a video/audio cutter feature, and allows you to customize its interface according to your preferences. In this article, we will show you how to download and enjoy AMP 3 music player on your Windows or Android device.

    -

    What is AMP 3?

    -

    AMP 3 is a music player that was originally developed as a successor of AIMP, a popular audio player for Windows. However, AMP 3 is not just a clone of AIMP, but a completely new and independent project that aims to provide a better user experience and more features. AMP 3 is available for both Windows and Android platforms, and it is compatible with most audio and video formats, such as MP3, MP4, MKV, WEBM, OGG, FLAC, WAV, etc.

    -

    amp 3 download


    DOWNLOAD >>>>> https://ssurll.com/2uNYja



    -

    Features and Benefits of AMP 3

    -

    AMP 3 is not just a simple music player, but a versatile tool that can enhance your music listening experience. Here are some of the features and benefits of AMP 3 that make it stand out from other music players:

    -

    Supports Multiple Formats

    -

    AMP 3 can play almost any audio or video format that you throw at it, without requiring any additional codecs or plugins. You can also convert your files to different formats using the built-in converter. This means that you can enjoy your music and videos without worrying about compatibility issues or quality loss.

    -

    amp 3 download free
    -amp 3 download for windows 10
    -amp 3 download music player
    -amp 3 download cubecoders
    -amp 3 download winamp
    -amp 3 download aimp
    -amp 3 download software
    -amp 3 download full version
    -amp 3 download pc
    -amp 3 download mac
    -amp 3 download android
    -amp 3 download apk
    -amp 3 download online
    -amp 3 download converter
    -amp 3 download youtube
    -amp 3 download songs
    -amp 3 download audio
    -amp 3 download video
    -amp 3 download mp4
    -amp 3 download mp3
    -amp 3 download wav
    -amp 3 download flac
    -amp 3 download ogg
    -amp 3 download wma
    -amp 3 download aac
    -amp 3 download m4a
    -amp 3 download playlist
    -amp 3 download album
    -amp 3 download artist
    -amp 3 download genre
    -amp 3 download lyrics
    -amp 3 download cover art
    -amp 3 download metadata
    -amp 3 download editor
    -amp 3 download tagger
    -amp 3 download splitter
    -amp 3 download joiner
    -amp 3 download cutter
    -amp 3 download trimmer
    -amp 3 download recorder
    -amp 3 download mixer
    -amp 3 download equalizer
    -amp 3 download visualizer
    -amp 3 download effects
    -amp 3 download plugins
    -amp 3 download skins
    -amp 3 download themes
    -amp 3 download portable
    -amp 3 download offline installer

    -

    Offers Video/Audio Cutter

    -

    One of the most unique features of AMP 3 is its video/audio cutter function. This allows you to cut any part of your video or audio file and save it as a separate file. You can use this feature to create your own ringtones, trim unwanted parts, or extract audio from video. The video/audio cutter is easy to use and supports various output formats.

    -

    Provides Customizable Interface

    -

    AMP 3 lets you customize its interface according to your taste and needs. You can choose from different skins, themes, colors, fonts, icons, and layouts. You can also adjust the transparency, size, position, and behavior of the main window and the playlist. You can even create your own skins using the skin editor.

    -

    How to Download AMP 3 for Windows

    -

    If you want to download AMP 3 for your Windows PC or laptop, you can follow these simple steps:

    -

    Step 1: Visit the Official Website

    -

    The first step is to visit the official website of AMP 3 at https://cubecoders.com/AMPInstall. Here you will find all the information about the product, such as its features, screenshots, system requirements, changelog, etc.

    -

    Step 2: Choose the Right Version

    -

    The next step is to choose the right version of AMP 3 for your Windows system. There are two versions available: Windows Desktop and Windows Server. The Windows Desktop version is suitable for personal use on your PC or laptop, while the Windows Server version is designed for hosting servers or running multiple instances of AMP 3. You can download either version by clicking on the Download button.

    -

    Step 3: Run the Installer and Follow the Instructions

    You can contact the developers of AMP 3 by visiting their website at https://cubecoders.com/AMPInstall and filling out the contact form. You can also follow them on social media platforms such as Facebook, Twitter, and Instagram. They are always happy to hear from their users and answer any questions or feedback.

    -
  14. How can I support AMP 3?
  15. -

    You can support AMP 3 by sharing it with your friends and family, leaving a positive review on Google Play Store or other platforms, and donating to the developers via PayPal or Patreon. Your support will help them improve the product and add more features.

    -

-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover Diverse Music Genres in Magic Tiles 3 - Download for Free.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover Diverse Music Genres in Magic Tiles 3 - Download for Free.md deleted file mode 100644 index 1119764b6a0858dc79655616b122152931632d29..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover Diverse Music Genres in Magic Tiles 3 - Download for Free.md +++ /dev/null @@ -1,134 +0,0 @@ - -

Download Magic Tiles 3 for Free and Enjoy the Best Music Game Ever

-

Do you love music and games? Do you want to play your favorite songs in a fun and exciting way? If you answered yes, then you should download Magic Tiles 3 for free and experience the best music game ever.

-

download magic tiles 3 for free


Downloadhttps://ssurll.com/2uNS1L



-

Magic Tiles 3 is a popular game that combines music, rhythm, and action. It is developed by Amanotes, the number one music games publisher in the world, with over one billion downloads. In this game, you can play thousands of songs from various genres and artists, such as pop, rock, classical, EDM, country, and more. You can also challenge other players online and show off your skills.

-

In this article, we will tell you everything you need to know about Magic Tiles 3, including how to play it, what features it has, what benefits it offers, and how to download it for free. Let's get started!

-

How to Play Magic Tiles 3

-

The gameplay of Magic Tiles 3 is simple but addictive. You just need to follow these basic rules:

-
    -
  • Tap, hold, and swipe the black tiles that appear on the screen.
  • -
  • Avoid the white tiles or you will lose.
  • -
  • Endless mode: Expect to speed up with each song level.
  • -
-

That's it! You can play any song you want and enjoy the music. But be careful, because the game can get very fast and challenging as you progress. You need to have good reflexes and coordination to keep up with the rhythm.

-

The Features of Magic Tiles 3

-

Magic Tiles 3 is not just a simple music game. It has many features that make it more fun and engaging. Here are some of them:

-

How to download magic tiles 3 for free on PC
-Magic tiles 3 free download apk
-Magic tiles 3 online play without downloading
-Download magic tiles 3 mod apk unlimited money
-Magic tiles 3 songs list download free
-Magic tiles 3 game download for android
-Magic tiles 3 app store free download
-Download magic tiles 3 offline mode
-Magic tiles 3 hack version download free
-Magic tiles 3 piano game free download
-Download magic tiles 3 latest update
-Magic tiles 3 vip subscription free download
-Download magic tiles 3 for windows 10
-Magic tiles 3 cheats and tips free download
-Magic tiles 3 no ads free download
-Download magic tiles 3 for macbook
-Magic tiles 3 best songs free download
-Download magic tiles 3 from google play
-Magic tiles 3 reviews and ratings free download
-Download magic tiles 3 for chromebook
-Magic tiles 3 country music free download
-Download magic tiles 3 for ios
-Magic tiles 3 tutorial and guide free download
-Download magic tiles 3 for laptop
-Magic tiles 3 custom songs free download
-Download magic tiles 3 from amazon appstore
-Magic tiles 3 challenges and rewards free download
-Download magic tiles 3 for pc bluestacks
-Magic tiles 3 themes and backgrounds free download
-Download magic tiles 3 for tablet
-Magic tiles 3 support and contact free download
-Download magic tiles 3 from official website
-Magic tiles 3 features and benefits free download
-Download magic tiles 3 for pc nox player
-Magic tiles 3 alternatives and competitors free download
-Download magic tiles 3 from microsoft store
-Magic tiles 3 FAQs and answers free download
-Download magic tiles 3 for pc memu play
-Magic tiles 3 news and updates free download
-Download magic tiles 3 for smart tv
-Magic tiles 3 privacy policy and terms of service free download
-Download magic tiles 3 from apkpure
-Magic tiles 3 testimonials and feedback free download
-Download magic tiles 3 for pc ldplayer
-Magic tiles 3 coupons and discounts free download
-Download magic tiles 3 from uptodown
-Magic tiles 3 referral program and invite codes free download

-

The Pool of Songs

-

One of the best things about Magic Tiles 3 is that it has a huge pool of songs that you can choose from. There are over 45,000 songs waiting for you to conquer. You can find songs from different genres, such as pop, rock, classical, EDM, country, etc. You can also find songs from famous artists, such as Ed Sheeran, Taylor Swift, Justin Bieber, BTS, etc. You will never run out of options or get bored with this game.

-

The Battle Mode

-

If you want to spice things up a bit, you can try the battle mode. This is where you can connect with countless players worldwide and compete with them. You can also invite your friends or family members to join you. The battle mode is a great way to test your skills, have fun, and make new friends.

-

The VIP Features

-

If you want to enjoy the game to the fullest, you can become a VIP member. This will give you access to many exclusive features, such as:

-
    -
  • No ads: You can play the game without any interruptions or distractions.
  • -
  • Unlimited songs: You can unlock and play all the songs in the game, including the latest and hottest ones.
  • -
  • Free revives: You can revive yourself for free if you make a mistake or lose a game.
  • -
  • Special gifts: You can receive special gifts and rewards every day.
  • -
  • And more: You can discover more VIP features by becoming a member.
  • -
-

To become a VIP member, you just need to pay a small fee per month or per year. You can also try it for free for three days before deciding. Trust us, it's worth it!

-

The Benefits of Playing Magic Tiles 3

-

Playing Magic Tiles 3 is not only fun, but also beneficial. Here are some of the benefits that you can get from playing this game:

-

The Music Benefits

-

Playing Magic Tiles 3 can help you improve your musical skills, such as your sense of rhythm, pitch, and melody. You can also learn new songs and genres that you might not be familiar with. Playing music can also boost your creativity and imagination, as well as your memory and concentration.

-

The Mental Benefits

-

Playing Magic Tiles 3 can also help you relax and relieve stress. Music has a soothing and calming effect on the mind and body. It can also improve your mood and emotions. Playing games can also stimulate your brain and keep it active and healthy. It can also enhance your problem-solving and decision-making skills.

-

The Social Benefits

-

Playing Magic Tiles 3 can also help you socialize and communicate with other people who share your passion for music and games. You can chat with them, exchange tips and feedback, and even make friends. You can also play with your family and friends and have a great time together.

-

How to Download Magic Tiles 3 for Free

-

Now that you know how awesome Magic Tiles 3 is, you might be wondering how to download it for free. Well, it's very easy. You just need to follow these steps:

-

Download from Google Play

-

If you have an Android device, you can download Magic Tiles 3 from Google Play. Here's how:

-
    -
  1. Open Google Play on your device.
  2. -
  3. Search for "Magic Tiles 3" in the search bar.
  4. -
  5. Select the game from the results and tap "Install".
  6. -
  7. Wait for the game to download and install on your device.
  8. -
  9. Enjoy playing Magic Tiles 3!
  10. -
-

You can also use this link to download the game directly: Magic Tiles 3 on Google Play.

-

Download from App Store

-

If you have an iOS device, you can download Magic Tiles 3 from App Store. Here's how:

-
    -
  1. Open App Store on your device.
  2. -
  3. Search for "Magic Tiles 3" in the search bar.
  4. -
  5. Select the game from the results and tap "Get".
  6. -
  7. Wait for the game to download and install on your device.
  8. -
  9. Enjoy playing Magic Tiles 3!
  10. -
-

You can also use this link to download the game directly: Magic Tiles 3 on App Store.

-

Download from BestGames.com

-

If you don't have an Android or iOS device, or if you prefer to play online on your PC or mobile browser, you can download Magic Tiles 3 from BestGames.com. Here's how:

-
    -
  1. Open your browser and go to BestGames.com.
  2. -
  3. Search for "Magic Tiles 3" in the search bar or browse the categories.
  4. -
  5. Select the game from the results and click "Play Now".
  6. -
  7. Wait for the game to load on your browser.
  8. -
  9. Enjoy playing Magic Tiles 3!
  10. -
-

Conclusion

-

Magic Tiles 3 is one of the best music games that you can play for free. It has amazing features, such as a huge pool of songs, a thrilling battle mode, a rewarding VIP membership, and more. It also offers many benefits, such as improving your musical skills, relaxing your mind, and socializing with others. You can download it for free from Google Play, App Store, or BestGames.com and play it on any device you want. What are you waiting for? Download Magic Tiles 3 now and enjoy the best music game ever!

-

FAQs

-

Here are some of the frequently asked questions and answers about Magic Tiles 3:

-
    -
  1. Q: How can I get more coins in Magic Tiles 3?
    -A: You can get more coins by playing more songs, completing daily quests, watching ads, or buying them with real money.
  2. -
  3. Q: How can I unlock more songs in Magic Tiles 3?
    -A: You can unlock more songs by leveling up, buying them with coins, or becoming a VIP member.
  4. -
  5. Q: How can I change the theme or the background of Magic Tiles 3?
    -A: You can change the theme or the background by going to the settings menu and choosing your preferred option.
  6. -
  7. Q: How can I contact the support team of Magic Tiles 3?
    -A: You can contact the support team by going to the settings menu and tapping on the "Help" button.
  8. -
  9. Q: How can I rate or review Magic Tiles 3?
    -A: You can rate or review Magic Tiles 3 by going to Google Play or App Store and leaving your feedback.
  10. -

-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Latest Stick War Legacy Mod APK 1.10.28 with Unlimited Money and Gems.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Latest Stick War Legacy Mod APK 1.10.28 with Unlimited Money and Gems.md deleted file mode 100644 index 3e9e05890f634b5423da958d5524db67037e6522..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Latest Stick War Legacy Mod APK 1.10.28 with Unlimited Money and Gems.md +++ /dev/null @@ -1,95 +0,0 @@ - -

Download Stick War Legacy Mod APK 1.10.28 - Unlimited Gems and More

-

If you are a fan of stick figure games, you might have heard of Stick War Legacy, one of the most popular and highest rated web games of all time. In this game, you have to lead your stick army to conquer other nations and become the ultimate ruler of Inamorta. But what if you want to enjoy the game without any limitations or restrictions? Well, you can do that by downloading Stick War Legacy mod apk, a modified version of the game that gives you unlimited gems, diamonds, skins, weapons, and more. In this article, we will tell you everything you need to know about Stick War Legacy mod apk, including its features, benefits, and how to download and install it on your device.

-

What is Stick War Legacy?

-

Stick War Legacy is a strategy game developed by Max Games Studios, where you have to control your stick army in various formations and battles. You can play as each unit, or command them as a leader. You can also mine gold, build units, learn different skills, and destroy the enemy statue to win. The game has several modes, such as Classic Campaign, Endless Deads, Tournament Mode, Missions Mode, and more. Each mode offers different challenges and rewards for you to enjoy.

-

download stick war legacy mod apk 1.10.28


Download Filehttps://ssurll.com/2uNUhp



-

The game also has amazing graphics, animations, sound effects, and music that make it more immersive and fun. You can customize your stick army with different skins and weapons that have their own perks and abilities. You can also unlock achievements and leaderboards to show off your skills and progress.

-

Why download Stick War Legacy mod apk?

-

While Stick War Legacy is a free game to play, it also has some in-app purchases that require real money. For example, you need gems and diamonds to buy skins and weapons, or to speed up your upgrades and research. You also have to watch ads to get some extra rewards or bonuses.

-

If you don't want to spend money or time on these things, you can download Stick War Legacy mod apk instead. This is a modified version of the game that gives you unlimited gems, diamonds, skins, weapons, and more. You can also enjoy the game without any ads or interruptions.

-

Here are some of the benefits of using Stick War Legacy mod apk:

-

Unlimited gems

-

Gems are the premium currency in Stick War Legacy that can be used to buy skins and weapons, or to speed up your upgrades and research. With unlimited gems, you can get any skin or weapon you want without waiting or spending money. You can also upgrade your units and skills faster and easier.

-

Unlimited diamonds

-

Diamonds are another currency in Stick War Legacy that can be used to buy special items or bonuses in the game. With unlimited diamonds, you can get extra gold, mana, health, damage, speed, and more. You can also use diamonds to revive your units or skip levels.

-

download stick war legacy mod apk unlimited gems
-download stick war legacy mod apk 999 army
-download stick war legacy mod apk latest version
-download stick war legacy mod apk happymod
-download stick war legacy mod apk android 1
-download stick war legacy mod apk 2023.2.85
-download stick war legacy mod apk free shopping
-download stick war legacy mod apk revdl
-download stick war legacy mod apk no root
-download stick war legacy mod apk offline
-download stick war legacy mod apk unlimited everything
-download stick war legacy mod apk unlimited money and gems
-download stick war legacy mod apk for pc
-download stick war legacy mod apk unlimited health
-download stick war legacy mod apk all skins unlocked
-download stick war legacy mod apk unlimited upgrade points
-download stick war legacy mod apk mega mod
-download stick war legacy mod apk new update
-download stick war legacy mod apk unlimited gold and diamonds
-download stick war legacy mod apk full unlocked
-download stick war legacy mod apk hack version
-download stick war legacy mod apk cheat menu
-download stick war legacy mod apk unlimited troops
-download stick war legacy mod apk god mode
-download stick war legacy mod apk rexdl
-download stick war legacy mod apk all weapons unlocked
-download stick war legacy mod apk unlimited mana
-download stick war legacy mod apk pure
-download stick war legacy mod apk with obb file
-download stick war legacy mod apk high damage
-download stick war legacy mod apk unlimited coins and gems
-download stick war legacy mod apk android oyun club
-download stick war legacy mod apk all characters unlocked
-download stick war legacy mod apk no ads
-download stick war legacy mod apk unlimited zombie mode
-download stick war legacy mod apk all levels unlocked
-download stick war legacy mod apk unlimited energy and gems
-download stick war legacy mod apk one hit kill
-download stick war legacy mod apk all modes unlocked
-download stick war legacy mod apk max level

-

Unlocked skins and weapons

-

Skins and weapons are cosmetic items that change the appearance and performance of your stick army. There are many skins and weapons available in Stick War Legacy, but some of them are locked behind gems or diamonds. With unlocked skins and weapons, you can access all of them for free and choose the ones that suit your style and strategy.

-

No ads

-

Ads are annoying and distracting when you are playing a game. They can also slow down your device or consume your data. With no ads in Stick War Legacy mod apk, you can play the game without any interruptions or distractions. You can also save your battery and data usage.

-

How to download and install Stick War Legacy mod apk?

-

If you are interested in downloading and installing Stick War Legacy mod apk, you can follow these simple steps:

-

Requirements

-

Before you download and install Stick War Legacy mod apk, you need to make sure that your device meets these requirements:

-
    -
  • Your device must have Android 4.4 or higher version.
  • -
  • Your device must have at least 100 MB of free storage space.
  • -
  • Your device must allow the installation of apps from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources.
  • -
-

Download link

-

After you have checked the requirements, you can download the Stick War Legacy mod apk file from this link: . This is a safe and secure link that will give you the latest version of the modded game. You can also scan the QR code below to download the file directly to your device.

-QR code for Stick War Legacy mod apk download link -

Installation process

-

Once you have downloaded the Stick War Legacy mod apk file, you can install it by following these steps:

-
    -
  1. Locate the downloaded file in your device's file manager or downloads folder.
  2. -
  3. Tap on the file and select Install.
  4. -
  5. Wait for the installation to complete.
  6. -
  7. Launch the game and enjoy the modded features.
  8. -
-

Conclusion

-

Stick War Legacy is a fun and addictive strategy game that lets you control your stick army and conquer other nations. However, if you want to enjoy the game without any limitations or restrictions, you can download Stick War Legacy mod apk, a modified version of the game that gives you unlimited gems, diamonds, skins, weapons, and more. You can also play the game without any ads or interruptions. To download and install Stick War Legacy mod apk, you just need to follow the simple steps we have provided in this article. So, what are you waiting for? Download Stick War Legacy mod apk now and become the ultimate ruler of Inamorta!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about Stick War Legacy mod apk:

-

Is Stick War Legacy mod apk safe to use?

-

Yes, Stick War Legacy mod apk is safe to use. It does not contain any viruses, malware, or spyware that can harm your device or data. It also does not require any root access or permissions that can compromise your security or privacy.

-

Is Stick War Legacy mod apk compatible with my device?

-

Stick War Legacy mod apk is compatible with most Android devices that have Android 4.4 or higher version. However, some devices may not support some of the modded features or may experience some glitches or errors. If you encounter any problems while using the modded game, you can try to clear the cache, reinstall the game, or contact the developer for support.

-

Can I play Stick War Legacy online with other players?

-

No, Stick War Legacy mod apk does not support online multiplayer mode. You can only play the game offline with your own stick army. However, you can still compete with other players on the leaderboards and achievements by logging in with your Google Play account.

-

Can I update Stick War Legacy mod apk to the latest version?

-

Yes, you can update Stick War Legacy mod apk to the latest version by downloading and installing the new modded file from the same link we have provided in this article. However, you may lose your progress and data if you update the game without backing it up first. You can use a backup app or a cloud service to save your data before updating.

-

Can I use Stick War Legacy mod apk with other mods or cheats?

-

No, we do not recommend using Stick War Legacy mod apk with other mods or cheats. This may cause conflicts, errors, or crashes that can ruin your gaming experience. You should only use one mod or cheat at a time to avoid any problems.

-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience Smash Hit VR on Gear VR A Surreal Journey of Sound and Music.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience Smash Hit VR on Gear VR A Surreal Journey of Sound and Music.md deleted file mode 100644 index 2fee1cfe0d90b46b27777e02da639f7139d44557..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience Smash Hit VR on Gear VR A Surreal Journey of Sound and Music.md +++ /dev/null @@ -1,147 +0,0 @@ - -

Smash Hit Gear VR APK: How to Download, Install and Play

-

If you are looking for a fun and immersive VR game that will test your reflexes and coordination, you might want to try Smash Hit. Smash Hit is a physics-based action game that lets you smash glass objects with metal balls as you travel through an otherworldly dimension. The game features stunning graphics, realistic sound effects, and a mesmerizing soundtrack that adapts to your performance.

-

smash hit gear vr apk


Download ►►►►► https://ssurll.com/2uNS9K



-

Smash Hit was originally released as a mobile game for iOS and Android devices in 2014. It was later adapted for Samsung Gear VR, a virtual reality headset that works with compatible Samsung smartphones. Gear VR allows you to experience Smash Hit in a whole new way, with 360-degree views and head tracking.

-

    To play Smash Hit on Gear VR, you need to download and install the APK file of the game. An APK file is a package that contains all the files and data needed to run an Android app. You can get APK files from various sources online, but you need to be careful about their safety and legality. In this article, we will show you how to download and install Smash Hit Gear VR APK on your device, how to play the game in VR mode, and what some of the pros and cons of the game are.

-
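    One practical safety habit: if the site you download from publishes a checksum for its files, verify the download before you install it. The Python sketch below is only an illustration; the file name and the expected SHA-256 value are placeholders that you would replace with your own download and the value the site actually publishes.

```python
import hashlib

# Placeholders: use your downloaded file and the checksum listed by the site.
apk_path = "SmashHit-GearVR.apk"
expected_sha256 = "paste-the-published-sha256-value-here"

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Read in chunks so large files do not have to fit in memory at once.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

print("computed:", digest.hexdigest())
print("matches :", digest.hexdigest() == expected_sha256.lower())
```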

How to Download and Install Smash Hit Gear VR APK

-

There are different ways to download and install Smash Hit Gear VR APK on your device. You can use your browser, a file manager app, or an APK installer app. Here are the steps for each method:

-

Using Your Browser

-
    -
  1. Go to your device settings and tap Apps & Notifications (or Apps in older versions of Android).
  2. -
  3. Tap the three dots in the upper-right corner.
  4. -
  5. Tap Special access.
  6. -
  7. Tap Install unknown apps.
  8. -
  9. Tap Chrome (or whichever web browser you use).
  10. -
  11. Move Allow from this source to the On position.
  12. -
  13. Go to a website that offers Smash Hit Gear VR APK file. You can use APK Mirror, a reputable site that hosts APK files for various Android apps and games. You can find Smash Hit Gear VR APK file here: .
  14. -
  15. Tap the Download button and wait for the file to be downloaded.
  16. -
  17. Once it's downloaded, open your Downloads folder and tap the APK file.
  18. -
  19. Tap Install at the bottom of the installer window.
  20. -
-

Using a File Manager App

-
    -
  1. If you don't have a file manager app on your device, download one from Google Play. You can use Cx File Explorer or File Manager.
  2. -
  3. Download Smash Hit Gear VR APK file from a website using your browser, as explained above.
  4. -
  5. Open your file manager app and locate the APK file in your Downloads folder.
  6. -
  7. Tap the APK file and tap Install.
  8. -
-

Using an APK Installer App

-
    -
  1. If you don't have an APK installer app on your device, download one from Google Play. You can use SAI (Split APKs Installer) or APK Installer.
  2. -
  3. Download Smash Hit Gear VR APK file from a website using your browser, as explained above.
  4. -
  5. Open your APK installer app and tap Install APKs.
  6. -
  7. Select the APK file from your Downloads folder and tap Select.
  8. -
  9. Tap Install at the bottom of the screen.
  10. -
-

    Note: Some websites may offer the Smash Hit Gear VR APK file in a compressed format, such as ZIP or RAR. In that case, you need to extract the APK file using a file extractor app, such as ZArchiver or RAR. You can download these apps from Google Play and follow the instructions on how to extract files.

-

How to Play Smash Hit Gear VR

-

Once you have installed Smash Hit Gear VR APK on your device, you can launch the game from your app drawer or home screen. You will need to connect your Gear VR headset and controller to your device before you start the game. Here are some tips and tricks on how to play Smash Hit Gear VR:

-

Using the Controller

-

The controller is the main way to interact with the game. You can use it to aim and throw balls, pause the game, and switch between modes. Here are the functions of each button:

-

smash hit vr game for gear vr
-smash hit oculus gear vr download
-smash hit virtual reality experience on gear vr
-smash hit apk mod for gear vr
-smash hit gear vr review
-smash hit gear vr gameplay
-smash hit gear vr controller
-smash hit gear vr cracked apk
-smash hit gear vr sideload
-smash hit gear vr free download
-smash hit gear vr tips and tricks
-smash hit gear vr best score
-smash hit gear vr walkthrough
-smash hit gear vr cheats
-smash hit gear vr levels
-smash hit gear vr multiplayer
-smash hit gear vr online
-smash hit gear vr update
-smash hit gear vr requirements
-smash hit gear vr compatible devices
-smash hit gear vr soundtracks
-smash hit gear vr graphics settings
-smash hit gear vr how to play
-smash hit gear vr reddit
-smash hit gear vr youtube
-smash hit premium apk for gear vr
-smash hit unlimited balls apk for gear vr
-smash hit full version apk for gear vr
-smash hit latest version apk for gear vr
-smash hit old version apk for gear vr
-smash hit 1.1.0 apk for gear vr
-smash hit 1.4.0 apk for gear vr
-smash hit 2.0 apk for gear vr
-smash hit mod apk unlimited everything for gear vr
-smash hit mod apk all levels unlocked for gear vr
-smash hit mod apk no ads for gear vr
-how to install smash hit on gear vr
-how to get smash hit on gear vr
-how to run smash hit on gear vr
-how to update smash hit on gear vr
-how to uninstall smash hit on gear vr
-how to hack smash hit on gear vr
-how to reset smash hit on gear vr
-how to backup smash hit on gear vr
-how to transfer smash hit on gear vr
-is smash hit worth it on gear vr
-is smash hit free on gear vr
-is smash hit compatible with oculus quest 2
-is there a sequel to smash hit on gear vr
-what is the highest level in smash hit on gear vr

-
    -
  • The touchpad is used to aim and throw balls. You can swipe left or right to change the direction of your throw, and tap to release the ball. You can also hold the touchpad to charge up a power shot, which will throw multiple balls at once.
  • -
  • The trigger is used to pause the game. You can press it to bring up the pause menu, where you can resume, restart, or quit the game.
  • -
  • The back button is used to switch between modes. You can press it to toggle between VR mode and normal mode. In VR mode, you can look around with your head movement and enjoy the 360-degree view. In normal mode, you can play the game as if it was a mobile game, using only the touchpad and the trigger.
  • -
  • The home button is used to exit the game. You can press it to return to the Oculus Home screen.
  • -
-

Using the Headset

-

The headset is used to enhance your immersion and enjoyment of the game. You can use it to adjust the focus, volume, and brightness of the game. Here are some tips on how to use the headset:

-
    -
  • The focus wheel is used to adjust the clarity of the game. You can rotate it left or right until you find the optimal focus for your eyesight.
  • -
  • The volume buttons are used to adjust the sound level of the game. You can press them up or down until you find the desired volume for your ears.
  • -
  • The brightness slider is used to adjust the brightness of the game. You can slide it left or right until you find the preferred brightness for your eyes.
  • -
-

Review of Smash Hit Gear VR

-

Smash Hit Gear VR is a great game for anyone who loves smashing things and experiencing VR. The game has many positive aspects, but also some negative ones. Here are some of the pros and cons of Smash Hit Gear VR:

-

Pros

-
    -
  • The game has amazing graphics and sound effects that create a realistic and immersive environment.
  • -
  • The game has a simple and intuitive gameplay that anyone can enjoy.
  • -
  • The game has a relaxing and hypnotic soundtrack that adapts to your performance and mood.
  • -
  • The game has 11 different levels with different themes and challenges that keep you entertained.
  • -
  • The game has a VR mode that lets you experience the game in a whole new way, with 360-degree views and head tracking.
  • -
-

Cons

-
    -
  • The game can be repetitive and monotonous after a while, as there is not much variety in the gameplay.
  • -
  • The game can be frustrating and difficult at times, as you have limited balls and lives, and some obstacles are hard to avoid.
  • -
  • The game can be nauseating and uncomfortable for some people, especially in VR mode, as there is a lot of motion and speed involved.
  • -
  • The game can be expensive and inaccessible for some people, as you need a compatible Samsung smartphone, a Gear VR headset, and a controller to play it.
  • -
  • The game can be risky and illegal for some people, as downloading APK files from unknown sources can expose your device to malware and violate intellectual property rights.
  • -
-

Conclusion

-

Smash Hit Gear VR is a fun and immersive VR game that lets you smash glass objects with metal balls as you travel through an otherworldly dimension. The game features stunning graphics, realistic sound effects, and a mesmerizing soundtrack that adapts to your performance. The game also has a VR mode that lets you experience the game in a whole new way, with 360-degree views and head tracking.

-

To play Smash Hit on Gear VR, you need to download and install the APK file of the game from a website that offers it. You can use different methods to do so, such as using your browser, a file manager app, or an APK installer app. However, you need to be careful about the safety and legality of the APK files you download, as they can expose your device to malware and violate intellectual property rights.

-

FAQs

-

Here are some of the frequently asked questions about Smash Hit Gear VR APK:

-

Q: Is Smash Hit Gear VR APK safe to download and install?

-

A: It depends on the source of the APK file. Some websites may offer APK files that are infected with malware or contain unwanted ads. To avoid these risks, you should only download APK files from reputable and trusted sites, such as APK Mirror. You should also scan the APK file with an antivirus app before installing it.

-

Q: Is Smash Hit Gear VR APK legal to download and install?

-

A: It depends on the country and the developer of the game. Some countries may have laws that prohibit downloading and installing APK files from unknown sources, as they may infringe on the intellectual property rights of the developers. To avoid legal issues, you should check the laws of your country and the terms and conditions of the game before downloading and installing APK files.

-

Q: How much space does Smash Hit Gear VR APK take on my device?

-

A: The size of Smash Hit Gear VR APK file is about 79 MB. However, you may need more space on your device to store the game data and cache. You should have at least 200 MB of free space on your device to play Smash Hit Gear VR smoothly.

-
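    If you want to check the free space yourself on a device where you can run Python (for example, through a terminal or scripting app), the standard shutil module can report it. This is only a rough sketch: the path is a placeholder and the 200 MB figure simply repeats the estimate from the answer above.

```python
import shutil

REQUIRED_MB = 200  # rough figure quoted in the answer above

# "." is a placeholder path; point it at the storage volume you plan to use.
free_mb = shutil.disk_usage(".").free / (1024 * 1024)
print(f"Free space: {free_mb:.0f} MB")
print("Enough for the game and its data:", free_mb >= REQUIRED_MB)
```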

Q: How do I update Smash Hit Gear VR APK?

-

A: To update Smash Hit Gear VR APK, you need to download and install the latest version of the APK file from a website that offers it. You can follow the same steps as described above for downloading and installing Smash Hit Gear VR APK. You may need to uninstall the previous version of the game before installing the new one.

-

Q: What are some alternatives to Smash Hit Gear VR?

-

A: If you like Smash Hit Gear VR, you may also like some other VR games that are available for Samsung Gear VR. Some of them are:

-
    -
  • End Space: A space combat simulator that lets you pilot a fighter ship and engage in battles with enemy forces.
  • -
  • Lands End: A puzzle adventure game that lets you explore a mysterious world and unlock its secrets.
  • -
  • Drop Dead: A zombie shooter game that lets you fight off hordes of undead creatures with various weapons.
  • -
  • Coaster Combat: A roller coaster game that lets you ride thrilling tracks and shoot at targets along the way.
  • -
  • Thumper: A rhythm game that lets you control a space beetle and blast through obstacles with music.
  • -

-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Ready for an Amazing Adventure with Beach Buggy Racing MOD APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Ready for an Amazing Adventure with Beach Buggy Racing MOD APK.md deleted file mode 100644 index 10be6cac46510114f5a0079f55efb595364c0932..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get Ready for an Amazing Adventure with Beach Buggy Racing MOD APK.md +++ /dev/null @@ -1,90 +0,0 @@ -
-

Download Game Beach Buggy Racing Mod Apk: A Fun and Exciting Racing Game for Android

-

If you are looking for a fun and exciting racing game for your Android device, you should definitely check out Beach Buggy Racing. This is a game that will keep you entertained for hours with its colorful graphics, fast-paced gameplay, and diverse characters and tracks. In this article, we will tell you everything you need to know about Beach Buggy Racing, including its features, how to download and install it, and some tips and tricks to play it. So, buckle up and get ready for some beach buggy racing!

-

Features of Beach Buggy Racing Mod Apk: What makes this game different from other racing games

-

Beach Buggy Racing is not your typical racing game. It has some unique features that make it stand out from the crowd. Here are some of them:

-

download game beach buggy racing mod apk


Download File ✒ ✒ ✒ https://ssurll.com/2uO0Y2



-
    -
  • Customizable vehicles: You can choose from over 25 different vehicles, each with its own personality and performance. You can also upgrade them with power-ups, paint jobs, and stickers.
  • -
  • Varied tracks: You can race on over 15 different tracks, each with its own challenges and surprises. You can explore beaches, jungles, volcanoes, swamps, and more.
  • -
  • Crazy power-ups: You can use over 40 different power-ups to boost your speed, attack your opponents, or defend yourself. You can throw fireballs, dodge rockets, drop mines, create tornadoes, and more.
  • -
  • Fun characters: You can choose from over 12 different characters, each with their own special abilities and voice lines. You can race as a pirate, a ninja, a zombie, a robot, and more.
  • -
  • Multiplayer mode: You can play with up to 4 friends on the same device or online. You can also compete with players from around the world in the leaderboards.
  • -
-

How to Download and Install Beach Buggy Racing Mod Apk: A step-by-step guide to get the game on your device

-

If you want to download and install Beach Buggy Racing Mod Apk on your Android device, you need to follow these simple steps:

-
    -
  1. Click on the link below to download the mod apk file of Beach Buggy Racing.
  2. -
  3. Once the download is complete, go to your device settings and enable the installation of apps from unknown sources.
  4. -
  5. Locate the mod apk file in your device storage and tap on it to install it.
  6. -
  7. Wait for the installation process to finish and then launch the game.
  8. -
  9. Enjoy playing Beach Buggy Racing Mod Apk with unlimited money and gems!
  10. -
-

Tips and Tricks to Play Beach Buggy Racing Mod Apk: How to master the game and win every race

-

Now that you have downloaded and installed Beach Buggy Racing Mod Apk on your device, you might be wondering how to play it like a pro. Here are some tips and tricks that will help you master the game and win every race:

-
    -
  • Choose the right vehicle: Different vehicles have different strengths and weaknesses. You should choose the one that suits your playstyle and the track you are racing on. For example, if you want speed, you should go for the Lambini or the Rocket Car. If you want handling, you should go for the Dune Runner or the Sand Truck.
  • -
  • Use the power-ups wisely: Power-ups can give you an edge over your opponents or ruin their chances of winning. You should use them at the right time and place. For example, if you want to overtake someone, you should use the Turbo or the Nitro. If you want to slow them down, you should use the Oil Slick or the Spiked Tires.
  • -
  • Collect coins and gems: Coins and gems are the currency of the game. You can use them to buy new vehicles, upgrade them, or unlock new tracks and characters. You can collect them by racing, completing missions, or watching ads. You can also get unlimited coins and gems by downloading the mod apk version of the game.
  • -
  • Drift and jump: Drifting and jumping are two skills that can help you improve your performance and score. Drifting allows you to turn corners faster and fill up your boost meter. Jumping allows you to avoid obstacles and collect power-ups in mid-air. To drift, you need to tap and hold the brake button while turning. To jump, you need to tap the brake button while going over a ramp or a bump.
  • -
  • Unlock and use special abilities: Each character in the game has a special ability that can give them an advantage in the race. You can unlock them by winning races with that character or by spending gems. You can use them by tapping the star button when it is full. For example, Rez has the ability to hack other vehicles and make them lose control. McSkelly has the ability to summon a skeleton army that blocks the road.
  • -
-

Conclusion: A summary of the main points and a call to action

-

Beach Buggy Racing is a fun and exciting racing game for Android that you should definitely try. It has many features that make it different from other racing games, such as customizable vehicles, varied tracks, crazy power-ups, fun characters, and multiplayer mode. You can download and install Beach Buggy Racing Mod Apk on your device by following our step-by-step guide. You can also use our tips and tricks to play the game like a pro and win every race. So, what are you waiting for? Download Beach Buggy Racing Mod Apk now and enjoy the thrill of beach buggy racing!

-

FAQs: Five common questions and answers about Beach Buggy Racing Mod Apk

-

Here are some of the most frequently asked questions about Beach Buggy Racing Mod Apk:

-
    -
  1. What is Beach Buggy Racing Mod Apk? -

    Beach Buggy Racing Mod Apk is a modified version of Beach Buggy Racing that gives you unlimited money and gems, unlocks all vehicles, tracks, and characters, and removes ads.

  2. -
  3. Is Beach Buggy Racing Mod Apk safe to download and install? -

    Yes, Beach Buggy Racing Mod Apk is safe to download and install on your device. It does not contain any viruses or malware that can harm your device or data.

  4. -
  5. How do I update Beach Buggy Racing Mod Apk? -

    To update Beach Buggy Racing Mod Apk, you need to download the latest version of the mod apk file from the link below and install it over the existing one. You do not need to uninstall the previous version.

  6. -
  7. Can I play Beach Buggy Racing Mod Apk offline? -

    Yes, you can play Beach Buggy Racing Mod Apk offline without an internet connection. However, some features such as multiplayer mode, leaderboards, and achievements may not work properly.

  8. -
  9. Can I play Beach Buggy Racing Mod Apk on PC? -

    Yes, you can play Beach Buggy Racing Mod Apk on PC using an Android emulator such as Bluestacks or NoxPlayer. You need to download and install the emulator on your PC and then install Beach Buggy Racing Mod Apk on it.

  10. -
- : https://www.apkmody.io/games/beach-buggy-racing-mod-apk.html

-
-
\ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh b/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh deleted file mode 100644 index 04b97b5fe5123af3170523dfde0ae008a78b2428..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_base_cluener # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_cluener/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_base - -TASK=cluener - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CLUENER/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.txt \ - --valid_data dev.char.txt \ - --test_data dev.char.txt \ - --train_batchsize 32 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name cluener \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bio \ - --middle_prefix I- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/skf15963/summary/fengshen/utils/utils.py b/spaces/skf15963/summary/fengshen/utils/utils.py deleted file mode 100644 index a03fb0b3326f8f6dce069649197f6b219edab90c..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/utils/utils.py +++ /dev/null @@ -1,74 +0,0 @@ -# coding=utf-8 -import jieba -import torch - - -def jieba_tokenize(str): - return jieba.lcut(str) - - -_UCODE_RANGES = ( 
- ("\u3400", "\u4db5"), # CJK Unified Ideographs Extension A, release 3.0 - ("\u4e00", "\u9fa5"), # CJK Unified Ideographs, release 1.1 - ("\u9fa6", "\u9fbb"), # CJK Unified Ideographs, release 4.1 - ("\uf900", "\ufa2d"), # CJK Compatibility Ideographs, release 1.1 - ("\ufa30", "\ufa6a"), # CJK Compatibility Ideographs, release 3.2 - ("\ufa70", "\ufad9"), # CJK Compatibility Ideographs, release 4.1 - ("\u20000", "\u2a6d6"), # (UTF16) CJK Unified Ideographs Extension B, release 3.1 - ("\u2f800", "\u2fa1d"), # (UTF16) CJK Compatibility Supplement, release 3.1 - ("\uff00", "\uffef"), # Full width ASCII, full width of English punctuation, - # half width Katakana, half wide half width kana, Korean alphabet - ("\u2e80", "\u2eff"), # CJK Radicals Supplement - ("\u3000", "\u303f"), # CJK punctuation mark - ("\u31c0", "\u31ef"), # CJK stroke - ("\u2f00", "\u2fdf"), # Kangxi Radicals - ("\u2ff0", "\u2fff"), # Chinese character structure - ("\u3100", "\u312f"), # Phonetic symbols - ("\u31a0", "\u31bf"), # Phonetic symbols (Taiwanese and Hakka expansion) - ("\ufe10", "\ufe1f"), - ("\ufe30", "\ufe4f"), - ("\u2600", "\u26ff"), - ("\u2700", "\u27bf"), - ("\u3200", "\u32ff"), - ("\u3300", "\u33ff"), -) - - -def is_chinese_char(uchar): - for start, end in _UCODE_RANGES: - if start <= uchar <= end: - return True - return False - - -def chinese_char_tokenize(line): - line = line.strip() - line_in_chars = "" - - for char in line: - if is_chinese_char(char): - line_in_chars += " " - line_in_chars += char - line_in_chars += " " - else: - line_in_chars += char - - return line_in_chars - -# s = '中国的首都是哪里?1,2,3d回答' -# print(chinese_char_tokenize(s)) - - -def report_memory(name): - """Simple GPU memory report.""" - mega_bytes = 1024.0 * 1024.0 - string = name + ' memory (MB)' - string += ' | allocated: {}'.format( - torch.cuda.memory_allocated() / mega_bytes) - string += ' | max allocated: {}'.format( - torch.cuda.max_memory_allocated() / mega_bytes) - string += ' | reserved: {}'.format( - torch.cuda.memory_reserved() / mega_bytes) - string += ' | max reserved: {}'.format( - torch.cuda.max_memory_reserved() / mega_bytes) - print(string) diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/losses/losses.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/losses/losses.py deleted file mode 100644 index 1bcf272cfb756d99451a3005567ea4d4c9059067..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/losses/losses.py +++ /dev/null @@ -1,455 +0,0 @@ -import math -import lpips -import torch -from torch import autograd as autograd -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.archs.vgg_arch import VGGFeatureExtractor -from basicsr.utils.registry import LOSS_REGISTRY -from .loss_util import weighted_loss - -_reduction_modes = ['none', 'mean', 'sum'] - - -@weighted_loss -def l1_loss(pred, target): - return F.l1_loss(pred, target, reduction='none') - - -@weighted_loss -def mse_loss(pred, target): - return F.mse_loss(pred, target, reduction='none') - - -@weighted_loss -def charbonnier_loss(pred, target, eps=1e-12): - return torch.sqrt((pred - target)**2 + eps) - - -@LOSS_REGISTRY.register() -class L1Loss(nn.Module): - """L1 (mean absolute error, MAE) loss. - - Args: - loss_weight (float): Loss weight for L1 loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. 
- """ - - def __init__(self, loss_weight=1.0, reduction='mean'): - super(L1Loss, self).__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - - def forward(self, pred, target, weight=None, **kwargs): - """ - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. - weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * l1_loss(pred, target, weight, reduction=self.reduction) - - -@LOSS_REGISTRY.register() -class MSELoss(nn.Module): - """MSE (L2) loss. - - Args: - loss_weight (float): Loss weight for MSE loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. - """ - - def __init__(self, loss_weight=1.0, reduction='mean'): - super(MSELoss, self).__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - - def forward(self, pred, target, weight=None, **kwargs): - """ - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. - weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * mse_loss(pred, target, weight, reduction=self.reduction) - - -@LOSS_REGISTRY.register() -class CharbonnierLoss(nn.Module): - """Charbonnier loss (one variant of Robust L1Loss, a differentiable - variant of L1Loss). - - Described in "Deep Laplacian Pyramid Networks for Fast and Accurate - Super-Resolution". - - Args: - loss_weight (float): Loss weight for L1 loss. Default: 1.0. - reduction (str): Specifies the reduction to apply to the output. - Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'. - eps (float): A value used to control the curvature near zero. - Default: 1e-12. - """ - - def __init__(self, loss_weight=1.0, reduction='mean', eps=1e-12): - super(CharbonnierLoss, self).__init__() - if reduction not in ['none', 'mean', 'sum']: - raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}') - - self.loss_weight = loss_weight - self.reduction = reduction - self.eps = eps - - def forward(self, pred, target, weight=None, **kwargs): - """ - Args: - pred (Tensor): of shape (N, C, H, W). Predicted tensor. - target (Tensor): of shape (N, C, H, W). Ground truth tensor. - weight (Tensor, optional): of shape (N, C, H, W). Element-wise - weights. Default: None. - """ - return self.loss_weight * charbonnier_loss(pred, target, weight, eps=self.eps, reduction=self.reduction) - - -@LOSS_REGISTRY.register() -class WeightedTVLoss(L1Loss): - """Weighted TV loss. - - Args: - loss_weight (float): Loss weight. Default: 1.0. 
- """ - - def __init__(self, loss_weight=1.0): - super(WeightedTVLoss, self).__init__(loss_weight=loss_weight) - - def forward(self, pred, weight=None): - y_diff = super(WeightedTVLoss, self).forward(pred[:, :, :-1, :], pred[:, :, 1:, :], weight=weight[:, :, :-1, :]) - x_diff = super(WeightedTVLoss, self).forward(pred[:, :, :, :-1], pred[:, :, :, 1:], weight=weight[:, :, :, :-1]) - - loss = x_diff + y_diff - - return loss - - -@LOSS_REGISTRY.register() -class PerceptualLoss(nn.Module): - """Perceptual loss with commonly used style loss. - - Args: - layer_weights (dict): The weight for each layer of vgg feature. - Here is an example: {'conv5_4': 1.}, which means the conv5_4 - feature layer (before relu5_4) will be extracted with weight - 1.0 in calculting losses. - vgg_type (str): The type of vgg network used as feature extractor. - Default: 'vgg19'. - use_input_norm (bool): If True, normalize the input image in vgg. - Default: True. - range_norm (bool): If True, norm images with range [-1, 1] to [0, 1]. - Default: False. - perceptual_weight (float): If `perceptual_weight > 0`, the perceptual - loss will be calculated and the loss will multiplied by the - weight. Default: 1.0. - style_weight (float): If `style_weight > 0`, the style loss will be - calculated and the loss will multiplied by the weight. - Default: 0. - criterion (str): Criterion used for perceptual loss. Default: 'l1'. - """ - - def __init__(self, - layer_weights, - vgg_type='vgg19', - use_input_norm=True, - range_norm=False, - perceptual_weight=1.0, - style_weight=0., - criterion='l1'): - super(PerceptualLoss, self).__init__() - self.perceptual_weight = perceptual_weight - self.style_weight = style_weight - self.layer_weights = layer_weights - self.vgg = VGGFeatureExtractor( - layer_name_list=list(layer_weights.keys()), - vgg_type=vgg_type, - use_input_norm=use_input_norm, - range_norm=range_norm) - - self.criterion_type = criterion - if self.criterion_type == 'l1': - self.criterion = torch.nn.L1Loss() - elif self.criterion_type == 'l2': - self.criterion = torch.nn.L2loss() - elif self.criterion_type == 'mse': - self.criterion = torch.nn.MSELoss(reduction='mean') - elif self.criterion_type == 'fro': - self.criterion = None - else: - raise NotImplementedError(f'{criterion} criterion has not been supported.') - - def forward(self, x, gt): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - gt (Tensor): Ground-truth tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. 
- """ - # extract vgg features - x_features = self.vgg(x) - gt_features = self.vgg(gt.detach()) - - # calculate perceptual loss - if self.perceptual_weight > 0: - percep_loss = 0 - for k in x_features.keys(): - if self.criterion_type == 'fro': - percep_loss += torch.norm(x_features[k] - gt_features[k], p='fro') * self.layer_weights[k] - else: - percep_loss += self.criterion(x_features[k], gt_features[k]) * self.layer_weights[k] - percep_loss *= self.perceptual_weight - else: - percep_loss = None - - # calculate style loss - if self.style_weight > 0: - style_loss = 0 - for k in x_features.keys(): - if self.criterion_type == 'fro': - style_loss += torch.norm( - self._gram_mat(x_features[k]) - self._gram_mat(gt_features[k]), p='fro') * self.layer_weights[k] - else: - style_loss += self.criterion(self._gram_mat(x_features[k]), self._gram_mat( - gt_features[k])) * self.layer_weights[k] - style_loss *= self.style_weight - else: - style_loss = None - - return percep_loss, style_loss - - def _gram_mat(self, x): - """Calculate Gram matrix. - - Args: - x (torch.Tensor): Tensor with shape of (n, c, h, w). - - Returns: - torch.Tensor: Gram matrix. - """ - n, c, h, w = x.size() - features = x.view(n, c, w * h) - features_t = features.transpose(1, 2) - gram = features.bmm(features_t) / (c * h * w) - return gram - - -@LOSS_REGISTRY.register() -class LPIPSLoss(nn.Module): - def __init__(self, - loss_weight=1.0, - use_input_norm=True, - range_norm=False,): - super(LPIPSLoss, self).__init__() - self.perceptual = lpips.LPIPS(net="vgg", spatial=False).eval() - self.loss_weight = loss_weight - self.use_input_norm = use_input_norm - self.range_norm = range_norm - - if self.use_input_norm: - # the mean is for image with range [0, 1] - self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - # the std is for image with range [0, 1] - self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def forward(self, pred, target): - if self.range_norm: - pred = (pred + 1) / 2 - target = (target + 1) / 2 - if self.use_input_norm: - pred = (pred - self.mean) / self.std - target = (target - self.mean) / self.std - lpips_loss = self.perceptual(target.contiguous(), pred.contiguous()) - return self.loss_weight * lpips_loss.mean() - - -@LOSS_REGISTRY.register() -class GANLoss(nn.Module): - """Define GAN loss. - - Args: - gan_type (str): Support 'vanilla', 'lsgan', 'wgan', 'hinge'. - real_label_val (float): The value for real label. Default: 1.0. - fake_label_val (float): The value for fake label. Default: 0.0. - loss_weight (float): Loss weight. Default: 1.0. - Note that loss_weight is only for generators; and it is always 1.0 - for discriminators. - """ - - def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0): - super(GANLoss, self).__init__() - self.gan_type = gan_type - self.loss_weight = loss_weight - self.real_label_val = real_label_val - self.fake_label_val = fake_label_val - - if self.gan_type == 'vanilla': - self.loss = nn.BCEWithLogitsLoss() - elif self.gan_type == 'lsgan': - self.loss = nn.MSELoss() - elif self.gan_type == 'wgan': - self.loss = self._wgan_loss - elif self.gan_type == 'wgan_softplus': - self.loss = self._wgan_softplus_loss - elif self.gan_type == 'hinge': - self.loss = nn.ReLU() - else: - raise NotImplementedError(f'GAN type {self.gan_type} is not implemented.') - - def _wgan_loss(self, input, target): - """wgan loss. - - Args: - input (Tensor): Input tensor. - target (bool): Target label. 
- - Returns: - Tensor: wgan loss. - """ - return -input.mean() if target else input.mean() - - def _wgan_softplus_loss(self, input, target): - """wgan loss with soft plus. softplus is a smooth approximation to the - ReLU function. - - In StyleGAN2, it is called: - Logistic loss for discriminator; - Non-saturating loss for generator. - - Args: - input (Tensor): Input tensor. - target (bool): Target label. - - Returns: - Tensor: wgan loss. - """ - return F.softplus(-input).mean() if target else F.softplus(input).mean() - - def get_target_label(self, input, target_is_real): - """Get target label. - - Args: - input (Tensor): Input tensor. - target_is_real (bool): Whether the target is real or fake. - - Returns: - (bool | Tensor): Target tensor. Return bool for wgan, otherwise, - return Tensor. - """ - - if self.gan_type in ['wgan', 'wgan_softplus']: - return target_is_real - target_val = (self.real_label_val if target_is_real else self.fake_label_val) - return input.new_ones(input.size()) * target_val - - def forward(self, input, target_is_real, is_disc=False): - """ - Args: - input (Tensor): The input for the loss module, i.e., the network - prediction. - target_is_real (bool): Whether the targe is real or fake. - is_disc (bool): Whether the loss for discriminators or not. - Default: False. - - Returns: - Tensor: GAN loss value. - """ - if self.gan_type == 'hinge': - if is_disc: # for discriminators in hinge-gan - input = -input if target_is_real else input - loss = self.loss(1 + input).mean() - else: # for generators in hinge-gan - loss = -input.mean() - else: # other gan types - target_label = self.get_target_label(input, target_is_real) - loss = self.loss(input, target_label) - - # loss_weight is always 1.0 for discriminators - return loss if is_disc else loss * self.loss_weight - - -def r1_penalty(real_pred, real_img): - """R1 regularization for discriminator. The core idea is to - penalize the gradient on real data alone: when the - generator distribution produces the true data distribution - and the discriminator is equal to 0 on the data manifold, the - gradient penalty ensures that the discriminator cannot create - a non-zero gradient orthogonal to the data manifold without - suffering a loss in the GAN game. - - Ref: - Eq. 9 in Which training methods for GANs do actually converge. - """ - grad_real = autograd.grad(outputs=real_pred.sum(), inputs=real_img, create_graph=True)[0] - grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean() - return grad_penalty - - -def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01): - noise = torch.randn_like(fake_img) / math.sqrt(fake_img.shape[2] * fake_img.shape[3]) - grad = autograd.grad(outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True)[0] - path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1)) - - path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length) - - path_penalty = (path_lengths - path_mean).pow(2).mean() - - return path_penalty, path_lengths.detach().mean(), path_mean.detach() - - -def gradient_penalty_loss(discriminator, real_data, fake_data, weight=None): - """Calculate gradient penalty for wgan-gp. - - Args: - discriminator (nn.Module): Network for the discriminator. - real_data (Tensor): Real input data. - fake_data (Tensor): Fake input data. - weight (Tensor): Weight tensor. Default: None. - - Returns: - Tensor: A tensor for gradient penalty. 
- """ - - batch_size = real_data.size(0) - alpha = real_data.new_tensor(torch.rand(batch_size, 1, 1, 1)) - - # interpolate between real_data and fake_data - interpolates = alpha * real_data + (1. - alpha) * fake_data - interpolates = autograd.Variable(interpolates, requires_grad=True) - - disc_interpolates = discriminator(interpolates) - gradients = autograd.grad( - outputs=disc_interpolates, - inputs=interpolates, - grad_outputs=torch.ones_like(disc_interpolates), - create_graph=True, - retain_graph=True, - only_inputs=True)[0] - - if weight is not None: - gradients = gradients * weight - - gradients_penalty = ((gradients.norm(2, dim=1) - 1)**2).mean() - if weight is not None: - gradients_penalty /= torch.mean(weight) - - return gradients_penalty diff --git a/spaces/springml111/T5_Paraphrase_demo/app.py b/spaces/springml111/T5_Paraphrase_demo/app.py deleted file mode 100644 index 92725b0b4f58e6b18ace7861cbc8f8aaca343567..0000000000000000000000000000000000000000 --- a/spaces/springml111/T5_Paraphrase_demo/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import torch -from transformers import (T5ForConditionalGeneration,T5Tokenizer) -import gradio as gr - -best_model_path = "springml111/T5_Paraphrase_model" -model = T5ForConditionalGeneration.from_pretrained(best_model_path) -tokenizer = T5Tokenizer.from_pretrained("springml111/T5_Paraphrase_model") - -def tokenize_data(text): - # Tokenize the review body - input_ = str(text) + ' ' - max_len = 64 - # tokenize inputs - tokenized_inputs = tokenizer(input_, padding='max_length', truncation=True, max_length=max_len, return_attention_mask=True, return_tensors='pt') - - inputs={"input_ids": tokenized_inputs['input_ids'], - "attention_mask": tokenized_inputs['attention_mask']} - return inputs - -def generate_answers(text): - inputs = tokenize_data(text) - results= model.generate(input_ids= inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=True, - max_length=64, - top_k=120, - top_p=0.98, - early_stopping=True, - num_return_sequences=1) - answer = tokenizer.decode(results[0], skip_special_tokens=True) - return answer - -iface = gr.Interface(fn=generate_answers, inputs=[gr.inputs.Textbox(lines=5)],outputs=["text"]) -iface.launch() \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Championship Manager 99 00 Download For Mac.md b/spaces/stomexserde/gpt4-ui/Examples/Championship Manager 99 00 Download For Mac.md deleted file mode 100644 index f67bfa579c0978511df777b2c897ffc24ca02a4b..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Championship Manager 99 00 Download For Mac.md +++ /dev/null @@ -1,20 +0,0 @@ - -

How to Play Championship Manager Season 99/00 on Mac

-

Championship Manager Season 99/00 is a classic football management game that lets you take charge of your favorite team and lead them to glory. You can choose from 16 different leagues, scout players, negotiate contracts, set tactics, and more. But how can you play this game on a Mac computer?

-

One way to play Championship Manager Season 99/00 on Mac is to use a Windows emulator such as Wine or Parallels Desktop. These programs allow you to run Windows applications on your Mac without installing Windows. However, they may require some technical skills and configuration to work properly. You also need to have a copy of the game's installation files or CD-ROM.
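
As a rough illustration of the Wine route, the Python sketch below launches the game's installer inside its own Wine prefix. The installer path and prefix name are placeholders, and it assumes Wine is already installed on your Mac; you still need your own copy of the game's installation files.

```python
# Hypothetical sketch: run the game's Windows installer under Wine from Python.
import os
import subprocess

env = dict(os.environ)
# Keep the game in its own Wine prefix so it does not interfere with other apps.
env["WINEPREFIX"] = os.path.expanduser("~/.wine-cm9900")

# Placeholder path to wherever your own copy of the installer lives.
INSTALLER = os.path.expanduser("~/Downloads/cm9900_setup.exe")

subprocess.run(["wine", INSTALLER], env=env, check=True)
```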

-

Championship manager 99 00 download for mac


Download Zip --->>> https://urlgoal.com/2uI8c5



-

Another way to play Championship Manager Season 99/00 on Mac is to use a Wine-based wrapper tool such as PlayOnMac or Porting Kit. These free tools provide pre-configured setups that handle the Windows compatibility layer for you, so there is far less manual configuration, although you may still need your own copy of the game's installation files. However, they may not be compatible with every Mac model or macOS version.

-

Whichever method you choose, you can enjoy one of the best football management games ever made on your Mac. Championship Manager Season 99/00 is a game that will challenge your strategic skills and test your passion for the beautiful game.

Once you have Championship Manager Season 99/00 running on your Mac, here are some tips and tricks to help you get started.

-

-
    -
  • Choose a team that suits your style and budget. You can start with a big club and aim for trophies, or a small club and try to avoid relegation. You can also create your own custom team and league.
  • -
  • Use the in-game editor to modify the game's data. You can change player names, attributes, contracts, injuries, transfers, and more. You can also download fan-made updates and patches to keep the game up to date.
  • -
  • Experiment with different formations and tactics. You can choose from a variety of preset options or create your own custom ones. You can also assign specific roles and instructions to each player.
  • -
  • Keep an eye on your finances and morale. You have to balance your income and expenses, as well as your players' happiness and loyalty. You can also interact with the media, the board, and the fans.
  • -
  • Save your game frequently and make backups. The game may crash or freeze at times, so it's always good to have a backup file in case something goes wrong. You can also use multiple save slots to try different scenarios.
  • -
-

Championship Manager Season 99/00 is a game that will keep you hooked for hours. Whether you want to relive the glory days of football or create your own alternative history, this game will let you do it. Have fun!

To conclude, Championship Manager Season 99/00 is a game that every football fan should try. It is a game that combines realism, depth, and fun in a way that few games can match. It is a game that will make you feel like a real manager, with all the joys and frustrations that come with it. It is a game that will give you hours of entertainment and satisfaction.

-

If you want to play Championship Manager Season 99/00 on Mac, you have two options: use a Windows emulator or a web-based service. Both methods have their advantages and disadvantages, so you can choose the one that works best for you. Either way, you will be able to enjoy one of the best football management games ever made on your Mac.

-

So what are you waiting for? Download Championship Manager Season 99/00 today and start your managerial career. You won't regret it!

-
-
\ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Check Point Install And Upgrade R77.md b/spaces/stomexserde/gpt4-ui/Examples/Check Point Install And Upgrade R77.md deleted file mode 100644 index a4bdbafb988871c5c1d2255f06c74a08bc2dea26..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Check Point Install And Upgrade R77.md +++ /dev/null @@ -1,37 +0,0 @@ - -

How to Install and Upgrade Check Point R77 Software on Your Security Gateway

-

Check Point R77 is a software release that provides advanced security features and performance enhancements for your security gateway. It also includes support for new hardware models and operating systems. If you want to install or upgrade to Check Point R77 on your security gateway, you need to follow these steps:

-
    -
  1. Check the compatibility and prerequisites of your software product line, hardware type, hardware model, operating system, and version. You can use the Upgrade/Download Wizard [^1^] to find the best upgrade path for your system.
  2. -
  3. Back up your current configuration and take a snapshot of your system. This will help you restore the system if something goes wrong during the upgrade. You can follow the best practices documented here: sk108902 [^2^].
  4. -
  5. Download the Check Point R77 installation package and upgrade file from the Upgrade/Download Wizard [^1^] or from the R77.20.80 for Small and Medium Business Appliances [^3^] page if you have a 600/1100 appliance.
  6. -
  7. Log in to the firewall's web interface and perform a clean install using the Check Point R77 major version "fresh install and upgrade" file from CPUSE (Check Point Update Service Engine). Note that a clean install erases your current configuration before installing the new software on your system.
  8. -
  9. After the reboot, complete the First Time Configuration Wizard with your standard settings. You may have to enter or confirm the following: hostname, IP address, subnet mask, default gateway, DNS server, administrator password, SIC (Secure Internal Communication) key, license, etc.
  10. -
  11. Connect to SmartConsole and verify that your security gateway is running Check Point R77 software. You can also check the status of your security policies, network objects, logs, reports, etc.
  12. -
  13. If you have a cluster or a standalone gateway with integrated management, you need to repeat these steps for each member of the cluster or for the management server.
  14. -
-

Congratulations! You have successfully installed or upgraded to Check Point R77 software on your security gateway. For more information, you can refer to the Check Point R77 Installation and Upgrade Guide.

-

Check Point Install and Upgrade R77


DOWNLOAD ★★★★★ https://urlgoal.com/2uI7xP



- -

Why Upgrade to Check Point R77?

-

Check Point R77 is a software release that offers many benefits for your security gateway. Some of the main advantages of upgrading to Check Point R77 are:

-
    -
  • Improved performance and scalability: Check Point R77 supports new hardware models and operating systems that can handle more traffic and connections. It also optimizes the use of CPU and memory resources and enhances the load balancing and failover mechanisms.
  • -
  • Enhanced security features: Check Point R77 introduces new security capabilities and updates existing ones. For example, it supports IPv6, SSL inspection, Threat Emulation, Application Control, URL Filtering, Identity Awareness, DLP, IPS, Anti-Bot, Anti-Virus, and more.
  • -
  • Simplified management and administration: Check Point R77 simplifies the configuration and deployment of your security policies and network objects. It also provides better visibility and control over your security events and logs. You can use SmartConsole, SmartDashboard, SmartView Monitor, SmartView Tracker, SmartEvent, SmartReporter, SmartProvisioning, SmartUpdate, and more.
  • -
-

By upgrading to Check Point R77, you can ensure that your security gateway is up to date with the latest technology and best practices. You can also enjoy the new features and improvements that Check Point R77 offers.

- -

How to Prepare for the Upgrade to Check Point R77?

-

Before you start the upgrade process to Check Point R77, you need to prepare your system and environment. Here are some steps that you should take before upgrading:

-

-
    -
  • Check the compatibility and prerequisites of your software product line, hardware type, hardware model, operating system, and version. You can use the Upgrade/Download Wizard to find the best upgrade path for your system.
  • -
  • Review the release notes and documentation of Check Point R77. You can find them here: sk101042.
  • -
  • Plan your upgrade strategy and schedule. You need to consider the impact of the upgrade on your network availability and performance. You also need to coordinate with your stakeholders and users.
  • -
  • Back up your current configuration and take a snapshot of your system. This will help you restore the system if something goes wrong during the upgrade. You can follow the best practices documented here: sk108902.
  • -
  • Download the Check Point R77 installation package and upgrade file from the Upgrade/Download Wizard or from the R77.20.80 for Small and Medium Business Appliances page if you have a 600/1100 appliance.
  • -
-

By following these steps, you can prepare your system and environment for a smooth and successful upgrade to Check Point R77.

-
-
\ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Elcomsoft Internet Password Breaker Full Version __TOP__.md b/spaces/stomexserde/gpt4-ui/Examples/Elcomsoft Internet Password Breaker Full Version __TOP__.md deleted file mode 100644 index 2968abd34e3028c7a8904c75d33b02cba79d2e52..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Elcomsoft Internet Password Breaker Full Version __TOP__.md +++ /dev/null @@ -1,24 +0,0 @@ -
-

How to Use Elcomsoft Internet Password Breaker to Recover Your Online Passwords

-

Have you ever forgotten your passwords for your online accounts, email clients or web browsers? If so, you are not alone. Many people use the same or similar passwords for different websites and services, which makes it hard to remember them all. And sometimes, even if you use a password manager, you may lose access to it or forget its master password.

-

elcomsoft internet password breaker full version


DOWNLOAD ✏ ✏ ✏ https://urlgoal.com/2uI7qN



-

Fortunately, there is a solution: Elcomsoft Internet Password Breaker. This powerful tool can instantly extract passwords, stored forms and autocomplete information from popular web browsers and email clients. It can also help you build a custom dictionary to speed up password recovery attacks performed with other tools.

-

In this article, we will show you how to use Elcomsoft Internet Password Breaker to recover your online passwords in a few simple steps.

-

Step 1: Download and Install Elcomsoft Internet Password Breaker

-

The first step is to download and install Elcomsoft Internet Password Breaker on your Windows PC. You can get the free trial version from the official website[^3^] or buy the full version for €149[^2^]. The installation process is straightforward and takes only a few minutes.

-

Step 2: Launch Elcomsoft Internet Password Breaker and Select the Target Application

-

Once you have installed Elcomsoft Internet Password Breaker, launch it and you will see the main window with a list of supported applications. You can choose from web browsers (such as Google Chrome, Microsoft Edge, Mozilla Firefox, etc.), email clients (such as Microsoft Outlook, Thunderbird, etc.) or online services (such as Facebook, Twitter, Gmail, etc.).

-

-

Select the application that you want to recover passwords from and click on the "Start" button. Elcomsoft Internet Password Breaker will automatically detect the installed version of the application and locate all available user identities and accounts.

-

Step 3: View and Export the Recovered Passwords

-

After scanning the target application, Elcomsoft Internet Password Breaker will display all the recovered passwords in a table. You can view individual passwords by clicking on them or export everything into a text file by clicking on the "Export" button. You can also copy the passwords to the clipboard or save them as a wordlist for future use.

-

If you want to explore more details about the recovered passwords, such as their source URL, creation date, last used date, etc., you can use the built-in password explorer by clicking on the "Explore" button. You can also filter the passwords by various criteria, such as length, complexity, character set, etc.

-

Step 4: Build a Custom Dictionary and Perform Advanced Password Recovery Attacks

-

One of the most useful features of Elcomsoft Internet Password Breaker is that it can help you build a custom dictionary based on the recovered passwords. This can significantly improve your chances of cracking encryption passwords with other tools, such as Elcomsoft Distributed Password Recovery[^3^].

-

To build a custom dictionary, click on the "Dictionary" button and select the option "Create dictionary from current list". You can then specify the name and location of the dictionary file and click on "OK". Elcomsoft Internet Password Breaker will create a wordlist containing all the unique passwords from the current list.

-

You can then use this wordlist as input for other password recovery tools that support dictionary attacks. You can also apply simple mutations to the wordlist, such as appending digits or symbols to the end of each password, to increase its coverage. According to various studies[^2^], using a filtered wordlist produced by Elcomsoft Internet Password Breaker can solve up to 70% of cases in a matter of minutes.
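
The kind of simple mutations mentioned above are easy to script yourself. The sketch below is not part of the Elcomsoft tool; it just shows one way to expand an exported wordlist by appending a few common digits and symbols to every entry. The input file name and the suffix list are assumptions for illustration.

```python
# Hypothetical sketch: expand an exported wordlist with simple suffix mutations.
SUFFIXES = ["", "1", "123", "2023", "!", "@"]

# Read the exported passwords and drop duplicates and blank lines.
with open("passwords.txt", encoding="utf-8") as src:
    base_words = {line.strip() for line in src if line.strip()}

# Write every base word with every suffix appended.
with open("mutated_wordlist.txt", "w", encoding="utf-8") as dst:
    for word in sorted(base_words):
        for suffix in SUFFIXES:
            dst.write(word + suffix + "\n")
```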

-

Conclusion

-

Elcomsoft Internet Password Breaker is a handy tool that can help you recover your online passwords in a fast and easy way. It supports a wide range of web browsers and email clients and can extract passwords, stored forms and autocomplete information from them. It can also help you build a custom dictionary to perform advanced password recovery attacks with other tools.

-

If you want to try Elcomsoft Internet Password Breaker, you can download the free trial version from the official website and see for yourself how quickly it can recover your online passwords.

-
-
\ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/segment/train.py b/spaces/stratussox/yolov5_inference/segment/train.py deleted file mode 100644 index f067918e7c3c60bf2406cb17abf4f3093e597fb5..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/segment/train.py +++ /dev/null @@ -1,662 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Train a YOLOv5 segment model on a segment dataset -Models and datasets download automatically from the latest YOLOv5 release. - -Usage - Single-GPU training: - $ python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 # from pretrained (recommended) - $ python segment/train.py --data coco128-seg.yaml --weights '' --cfg yolov5s-seg.yaml --img 640 # from scratch - -Usage - Multi-GPU DDP training: - $ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3 - -Models: https://github.com/ultralytics/yolov5/tree/master/models -Datasets: https://github.com/ultralytics/yolov5/tree/master/data -Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data -""" - -import argparse -import math -import os -import random -import sys -import time -from copy import deepcopy -from datetime import datetime -from pathlib import Path - -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -import yaml -from torch.optim import lr_scheduler -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -import segment.val as validate # for end-of-epoch mAP -from models.experimental import attempt_load -from models.yolo import SegmentationModel -from utils.autoanchor import check_anchors -from utils.autobatch import check_train_batch_size -from utils.callbacks import Callbacks -from utils.downloads import attempt_download, is_url -from utils.general import (LOGGER, check_amp, check_dataset, check_file, check_git_status, check_img_size, - check_requirements, check_suffix, check_yaml, colorstr, get_latest_run, increment_path, - init_seeds, intersect_dicts, labels_to_class_weights, labels_to_image_weights, one_cycle, - print_args, print_mutation, strip_optimizer, yaml_save) -from utils.loggers import GenericLogger -from utils.plots import plot_evolve, plot_labels -from utils.segment.dataloaders import create_dataloader -from utils.segment.loss import ComputeLoss -from utils.segment.metrics import KEYS, fitness -from utils.segment.plots import plot_images_and_masks, plot_results_with_masks -from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer, - smart_resume, torch_distributed_zero_first) - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) - - -def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictionary - save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, mask_ratio = \ - Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \ - opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze, opt.mask_ratio - # callbacks.run('on_pretrain_routine_start') - 
- # Directories - w = save_dir / 'weights' # weights dir - (w.parent if evolve else w).mkdir(parents=True, exist_ok=True) # make dir - last, best = w / 'last.pt', w / 'best.pt' - - # Hyperparameters - if isinstance(hyp, str): - with open(hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items())) - opt.hyp = hyp.copy() # for saving hyps to checkpoints - - # Save run settings - if not evolve: - yaml_save(save_dir / 'hyp.yaml', hyp) - yaml_save(save_dir / 'opt.yaml', vars(opt)) - - # Loggers - data_dict = None - if RANK in {-1, 0}: - logger = GenericLogger(opt=opt, console_logger=LOGGER) - - # Config - plots = not evolve and not opt.noplots # create plots - overlap = not opt.no_overlap - cuda = device.type != 'cpu' - init_seeds(opt.seed + 1 + RANK, deterministic=True) - with torch_distributed_zero_first(LOCAL_RANK): - data_dict = data_dict or check_dataset(data) # check if None - train_path, val_path = data_dict['train'], data_dict['val'] - nc = 1 if single_cls else int(data_dict['nc']) # number of classes - names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names - is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset - - # Model - check_suffix(weights, '.pt') # check weights - pretrained = weights.endswith('.pt') - if pretrained: - with torch_distributed_zero_first(LOCAL_RANK): - weights = attempt_download(weights) # download if not found locally - ckpt = torch.load(weights, map_location='cpu') # load checkpoint to CPU to avoid CUDA memory leak - model = SegmentationModel(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) - exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect - model.load_state_dict(csd, strict=False) # load - LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report - else: - model = SegmentationModel(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - amp = check_amp(model) # check AMP - - # Freeze - freeze = [f'model.{x}.' 
for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze - for k, v in model.named_parameters(): - v.requires_grad = True # train all layers - # v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results) - if any(x in k for x in freeze): - LOGGER.info(f'freezing {k}') - v.requires_grad = False - - # Image size - gs = max(int(model.stride.max()), 32) # grid size (max stride) - imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple - - # Batch size - if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size - batch_size = check_train_batch_size(model, imgsz, amp) - logger.update_params({"batch_size": batch_size}) - # loggers.on_params_update({"batch_size": batch_size}) - - # Optimizer - nbs = 64 # nominal batch size - accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing - hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay - optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay']) - - # Scheduler - if opt.cos_lr: - lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] - else: - lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs) - - # EMA - ema = ModelEMA(model) if RANK in {-1, 0} else None - - # Resume - best_fitness, start_epoch = 0.0, 0 - if pretrained: - if resume: - best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) - del ckpt, csd - - # DP mode - if cuda and RANK == -1 and torch.cuda.device_count() > 1: - LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' - 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') - model = torch.nn.DataParallel(model) - - # SyncBatchNorm - if opt.sync_bn and cuda and RANK != -1: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) - LOGGER.info('Using SyncBatchNorm()') - - # Trainloader - train_loader, dataset = create_dataloader( - train_path, - imgsz, - batch_size // WORLD_SIZE, - gs, - single_cls, - hyp=hyp, - augment=True, - cache=None if opt.cache == 'val' else opt.cache, - rect=opt.rect, - rank=LOCAL_RANK, - workers=workers, - image_weights=opt.image_weights, - quad=opt.quad, - prefix=colorstr('train: '), - shuffle=True, - mask_downsample_ratio=mask_ratio, - overlap_mask=overlap, - ) - labels = np.concatenate(dataset.labels, 0) - mlc = int(labels[:, 0].max()) # max label class - assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. 
Possible class labels are 0-{nc - 1}' - - # Process 0 - if RANK in {-1, 0}: - val_loader = create_dataloader(val_path, - imgsz, - batch_size // WORLD_SIZE * 2, - gs, - single_cls, - hyp=hyp, - cache=None if noval else opt.cache, - rect=True, - rank=-1, - workers=workers * 2, - pad=0.5, - mask_downsample_ratio=mask_ratio, - overlap_mask=overlap, - prefix=colorstr('val: '))[0] - - if not resume: - if not opt.noautoanchor: - check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor - model.half().float() # pre-reduce anchor precision - - if plots: - plot_labels(labels, names, save_dir) - # callbacks.run('on_pretrain_routine_end', labels, names) - - # DDP mode - if cuda and RANK != -1: - model = smart_DDP(model) - - # Model attributes - nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps) - hyp['box'] *= 3 / nl # scale to layers - hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers - hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers - hyp['label_smoothing'] = opt.label_smoothing - model.nc = nc # attach number of classes to model - model.hyp = hyp # attach hyperparameters to model - model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights - model.names = names - - # Start training - t0 = time.time() - nb = len(train_loader) # number of batches - nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations) - # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training - last_opt_step = -1 - maps = np.zeros(nc) # mAP per class - results = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) - scheduler.last_epoch = start_epoch - 1 # do not move - scaler = torch.cuda.amp.GradScaler(enabled=amp) - stopper, stop = EarlyStopping(patience=opt.patience), False - compute_loss = ComputeLoss(model, overlap=overlap) # init loss class - # callbacks.run('on_train_start') - LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' - f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n' - f"Logging results to {colorstr('bold', save_dir)}\n" - f'Starting training for {epochs} epochs...') - for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ - # callbacks.run('on_train_epoch_start') - model.train() - - # Update image weights (optional, single-GPU only) - if opt.image_weights: - cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights - iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights - dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx - - # Update mosaic border (optional) - # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) - # dataset.mosaic_border = [b - imgsz, -b] # height, width borders - - mloss = torch.zeros(4, device=device) # mean losses - if RANK != -1: - train_loader.sampler.set_epoch(epoch) - pbar = enumerate(train_loader) - LOGGER.info(('\n' + '%11s' * 8) % - ('Epoch', 'GPU_mem', 'box_loss', 'seg_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size')) - if RANK in {-1, 0}: - pbar = tqdm(pbar, total=nb, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - optimizer.zero_grad() - for i, (imgs, targets, paths, _, masks) in pbar: # batch ------------------------------------------------------ - # callbacks.run('on_train_batch_start') - ni = i + nb * epoch # 
number integrated batches (since train start) - imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0 - - # Warmup - if ni <= nw: - xi = [0, nw] # x interp - # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) - accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round()) - for j, x in enumerate(optimizer.param_groups): - # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 - x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)]) - if 'momentum' in x: - x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) - - # Multi-scale - if opt.multi_scale: - sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size - sf = sz / max(imgs.shape[2:]) # scale factor - if sf != 1: - ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) - imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) - - # Forward - with torch.cuda.amp.autocast(amp): - pred = model(imgs) # forward - loss, loss_items = compute_loss(pred, targets.to(device), masks=masks.to(device).float()) - if RANK != -1: - loss *= WORLD_SIZE # gradient averaged between devices in DDP mode - if opt.quad: - loss *= 4. - - # Backward - scaler.scale(loss).backward() - - # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html - if ni - last_opt_step >= accumulate: - scaler.unscale_(optimizer) # unscale gradients - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients - scaler.step(optimizer) # optimizer.step - scaler.update() - optimizer.zero_grad() - if ema: - ema.update(model) - last_opt_step = ni - - # Log - if RANK in {-1, 0}: - mloss = (mloss * i + loss_items) / (i + 1) # update mean losses - mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) - pbar.set_description(('%11s' * 2 + '%11.4g' * 6) % - (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) - # callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths) - # if callbacks.stop_training: - # return - - # Mosaic plots - if plots: - if ni < 3: - plot_images_and_masks(imgs, targets, masks, paths, save_dir / f"train_batch{ni}.jpg") - if ni == 10: - files = sorted(save_dir.glob('train*.jpg')) - logger.log_images(files, "Mosaics", epoch) - # end batch ------------------------------------------------------------------------------------------------ - - # Scheduler - lr = [x['lr'] for x in optimizer.param_groups] # for loggers - scheduler.step() - - if RANK in {-1, 0}: - # mAP - # callbacks.run('on_train_epoch_end', epoch=epoch) - ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) - final_epoch = (epoch + 1 == epochs) or stopper.possible_stop - if not noval or final_epoch: # Calculate mAP - results, maps, _ = validate.run(data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - half=amp, - model=ema.ema, - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - plots=False, - callbacks=callbacks, - compute_loss=compute_loss, - mask_downsample_ratio=mask_ratio, - overlap=overlap) - - # Update best mAP - fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] - stop = stopper(epoch=epoch, fitness=fi) # early stop check - if fi > best_fitness: - best_fitness = fi - log_vals = list(mloss) + list(results) + lr - # 
callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi) - # Log val metrics and media - metrics_dict = dict(zip(KEYS, log_vals)) - logger.log_metrics(metrics_dict, epoch) - - # Save model - if (not nosave) or (final_epoch and not evolve): # if save - ckpt = { - 'epoch': epoch, - 'best_fitness': best_fitness, - 'model': deepcopy(de_parallel(model)).half(), - 'ema': deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': optimizer.state_dict(), - 'opt': vars(opt), - 'date': datetime.now().isoformat()} - - # Save last, best and delete - torch.save(ckpt, last) - if best_fitness == fi: - torch.save(ckpt, best) - if opt.save_period > 0 and epoch % opt.save_period == 0: - torch.save(ckpt, w / f'epoch{epoch}.pt') - logger.log_model(w / f'epoch{epoch}.pt') - del ckpt - # callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) - - # EarlyStopping - if RANK != -1: # if DDP training - broadcast_list = [stop if RANK == 0 else None] - dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks - if RANK != 0: - stop = broadcast_list[0] - if stop: - break # must break all DDP ranks - - # end epoch ---------------------------------------------------------------------------------------------------- - # end training ----------------------------------------------------------------------------------------------------- - if RANK in {-1, 0}: - LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') - for f in last, best: - if f.exists(): - strip_optimizer(f) # strip optimizers - if f is best: - LOGGER.info(f'\nValidating {f}...') - results, _, _ = validate.run( - data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - model=attempt_load(f, device).half(), - iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - save_json=is_coco, - verbose=True, - plots=plots, - callbacks=callbacks, - compute_loss=compute_loss, - mask_downsample_ratio=mask_ratio, - overlap=overlap) # val best model with plots - if is_coco: - # callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) - metrics_dict = dict(zip(KEYS, list(mloss) + list(results) + lr)) - logger.log_metrics(metrics_dict, epoch) - - # callbacks.run('on_train_end', last, best, epoch, results) - # on train end callback using genericLogger - logger.log_metrics(dict(zip(KEYS[4:16], results)), epochs) - if not opt.evolve: - logger.log_model(best, epoch) - if plots: - plot_results_with_masks(file=save_dir / 'results.csv') # save results.png - files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] - files = [(save_dir / f) for f in files if (save_dir / f).exists()] # filter - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}") - logger.log_images(files, "Results", epoch + 1) - logger.log_images(sorted(save_dir.glob('val*.jpg')), "Validation", epoch + 1) - torch.cuda.empty_cache() - return results - - -def parse_opt(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s-seg.pt', help='initial weights path') - parser.add_argument('--cfg', type=str, default='', help='model.yaml path') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128-seg.yaml', help='dataset.yaml path') - parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') - 
parser.add_argument('--epochs', type=int, default=100, help='total training epochs') - parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') - parser.add_argument('--rect', action='store_true', help='rectangular training') - parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - parser.add_argument('--noval', action='store_true', help='only validate final epoch') - parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') - parser.add_argument('--noplots', action='store_true', help='save no plot files') - parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') - parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') - parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') - parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') - parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--project', default=ROOT / 'runs/train-seg', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') - parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') - parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') - parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') - parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') - - # Instance Segmentation Args - parser.add_argument('--mask-ratio', type=int, default=4, help='Downsample the truth masks to saving memory') - parser.add_argument('--no-overlap', action='store_true', help='Overlap masks train faster at slightly less mAP') - - # Weights & Biases arguments - # parser.add_argument('--entity', default=None, help='W&B: Entity') - # parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option') - # parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box 
image logging interval') - # parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use') - - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def main(opt, callbacks=Callbacks()): - # Checks - if RANK in {-1, 0}: - print_args(vars(opt)) - check_git_status() - check_requirements() - - # Resume - if opt.resume and not opt.evolve: # resume from specified or most recent last.pt - last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) - opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml - opt_data = opt.data # original dataset - if opt_yaml.is_file(): - with open(opt_yaml, errors='ignore') as f: - d = yaml.safe_load(f) - else: - d = torch.load(last, map_location='cpu')['opt'] - opt = argparse.Namespace(**d) # replace - opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate - if is_url(opt_data): - opt.data = check_file(opt_data) # avoid HUB resume auth timeout - else: - opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ - check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks - assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' - if opt.evolve: - if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve - opt.project = str(ROOT / 'runs/evolve') - opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume - if opt.name == 'cfg': - opt.name = Path(opt.cfg).stem # use model.yaml as name - opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) - - # DDP mode - device = select_device(opt.device, batch_size=opt.batch_size) - if LOCAL_RANK != -1: - msg = 'is not compatible with YOLOv5 Multi-GPU DDP training' - assert not opt.image_weights, f'--image-weights {msg}' - assert not opt.evolve, f'--evolve {msg}' - assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' - assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' - assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' - torch.cuda.set_device(LOCAL_RANK) - device = torch.device('cuda', LOCAL_RANK) - dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") - - # Train - if not opt.evolve: - train(opt.hyp, opt, device, callbacks) - - # Evolve hyperparameters (optional) - else: - # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) - meta = { - 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) - 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) - 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 - 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay - 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) - 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum - 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr - 'box': (1, 0.02, 0.2), # box loss gain - 'cls': (1, 0.2, 4.0), # cls loss gain - 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight - 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) - 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight - 'iou_t': (0, 0.1, 0.7), # IoU training threshold - 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold - 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) - 'fl_gamma': (0, 0.0, 2.0), # 
focal loss gamma (efficientDet default gamma=1.5) - 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) - 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) - 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) - 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) - 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) - 'scale': (1, 0.0, 0.9), # image scale (+/- gain) - 'shear': (1, 0.0, 10.0), # image shear (+/- deg) - 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 - 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) - 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) - 'mosaic': (1, 0.0, 1.0), # image mixup (probability) - 'mixup': (1, 0.0, 1.0), # image mixup (probability) - 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) - - with open(opt.hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - if 'anchors' not in hyp: # anchors commented in hyp.yaml - hyp['anchors'] = 3 - if opt.noautoanchor: - del hyp['anchors'], meta['anchors'] - opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch - # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices - evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' - if opt.bucket: - os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}') # download evolve.csv if exists - - for _ in range(opt.evolve): # generations to evolve - if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate - # Select parent(s) - parent = 'single' # parent selection method: 'single' or 'weighted' - x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) - n = min(5, len(x)) # number of previous results to consider - x = x[np.argsort(-fitness(x))][:n] # top n mutations - w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) - if parent == 'single' or len(x) == 1: - # x = x[random.randint(0, n - 1)] # random selection - x = x[random.choices(range(n), weights=w)[0]] # weighted selection - elif parent == 'weighted': - x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination - - # Mutate - mp, s = 0.8, 0.2 # mutation probability, sigma - npr = np.random - npr.seed(int(time.time())) - g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 - ng = len(meta) - v = np.ones(ng) - while all(v == 1): # mutate until a change occurs (prevent duplicates) - v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) - for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) - hyp[k] = float(x[i + 7] * v[i]) # mutate - - # Constrain to limits - for k, v in meta.items(): - hyp[k] = max(hyp[k], v[1]) # lower limit - hyp[k] = min(hyp[k], v[2]) # upper limit - hyp[k] = round(hyp[k], 5) # significant digits - - # Train mutation - results = train(hyp.copy(), opt, device, callbacks) - callbacks = Callbacks() - # Write mutation results - print_mutation(KEYS, results, hyp.copy(), save_dir, opt.bucket) - - # Plot results - plot_evolve(evolve_csv) - LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' - f"Results saved to {colorstr('bold', save_dir)}\n" - f'Usage example: $ python train.py --hyp {evolve_yaml}') - - -def run(**kwargs): - # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') - opt = parse_opt(True) - for k, v in kwargs.items(): - setattr(opt, k, v) - main(opt) - return opt - - -if __name__ == "__main__": - opt = 
parse_opt() - main(opt) diff --git a/spaces/sub314xxl/MusicGen-Continuation/CHANGELOG.md b/spaces/sub314xxl/MusicGen-Continuation/CHANGELOG.md deleted file mode 100644 index 51dc21c8566b75e2a6ef3a10e5778d4ada917531..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/CHANGELOG.md +++ /dev/null @@ -1,18 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.2a] - TBD - -Improved demo, fixed top p (thanks @jnordberg). - -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. diff --git a/spaces/sub314xxl/MusicGen/tests/quantization/test_vq.py b/spaces/sub314xxl/MusicGen/tests/quantization/test_vq.py deleted file mode 100644 index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/tests/quantization/test_vq.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.quantization.vq import ResidualVectorQuantizer - - -class TestResidualVectorQuantizer: - - def test_rvq(self): - x = torch.randn(1, 16, 2048) - vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8) - res = vq(x, 1.) - assert res.x.shape == torch.Size([1, 16, 2048]) diff --git a/spaces/subhajitmaji/MusicGen/tests/data/test_audio_utils.py b/spaces/subhajitmaji/MusicGen/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. 
- sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Parks And Recreation 720p Season 1l.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Parks And Recreation 720p Season 1l.md deleted file mode 100644 index a80c3796bb7c804ef431abf892e88fbfc6f4a535..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Parks And Recreation 720p Season 1l.md +++ /dev/null @@ -1,6 +0,0 @@ -

Parks And Recreation 720p Season 1l


Download Zip 🔗 https://cinurl.com/2uEXrv



-
-Parks and Recreation is an American political satire mockumentary sitcom television series ... Tom quits his city hall job to form an entertainment company called Entertainment 720 with his friend, Jean-Ralphio. The business cannot maintain ... 4d29de3e1b
-
-
-

diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/conv_module.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/conv_module.py deleted file mode 100644 index e60e7e62245071c77b652093fddebff3948d7c3e..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/conv_module.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch.nn as nn - -from annotator.uniformer.mmcv.utils import _BatchNorm, _InstanceNorm -from ..utils import constant_init, kaiming_init -from .activation import build_activation_layer -from .conv import build_conv_layer -from .norm import build_norm_layer -from .padding import build_padding_layer -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class ConvModule(nn.Module): - """A conv block that bundles conv/norm/activation layers. - - This block simplifies the usage of convolution layers, which are commonly - used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU). - It is based upon three build methods: `build_conv_layer()`, - `build_norm_layer()` and `build_activation_layer()`. - - Besides, we add some additional features in this module. - 1. Automatically set `bias` of the conv layer. - 2. Spectral norm is supported. - 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only - supports zero and circular padding, and we add "reflect" padding mode. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. - groups (int): Number of blocked connections from input channels to - output channels. Same as that in ``nn._ConvNd``. - bias (bool | str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise - False. Default: "auto". - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU'). - inplace (bool): Whether to use inplace mode for activation. - Default: True. - with_spectral_norm (bool): Whether use spectral norm in conv module. - Default: False. - padding_mode (str): If the `padding_mode` has not been supported by - current `Conv2d` in PyTorch, we will use our own padding layer - instead. Currently, we support ['zeros', 'circular'] with official - implementation and ['reflect'] with our own implementation. - Default: 'zeros'. - order (tuple[str]): The order of conv/norm/activation layers. It is a - sequence of "conv", "norm" and "act". Common examples are - ("conv", "norm", "act") and ("act", "conv", "norm"). - Default: ('conv', 'norm', 'act'). 
- """ - - _abbr_ = 'conv_block' - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias='auto', - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - inplace=True, - with_spectral_norm=False, - padding_mode='zeros', - order=('conv', 'norm', 'act')): - super(ConvModule, self).__init__() - assert conv_cfg is None or isinstance(conv_cfg, dict) - assert norm_cfg is None or isinstance(norm_cfg, dict) - assert act_cfg is None or isinstance(act_cfg, dict) - official_padding_mode = ['zeros', 'circular'] - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.inplace = inplace - self.with_spectral_norm = with_spectral_norm - self.with_explicit_padding = padding_mode not in official_padding_mode - self.order = order - assert isinstance(self.order, tuple) and len(self.order) == 3 - assert set(order) == set(['conv', 'norm', 'act']) - - self.with_norm = norm_cfg is not None - self.with_activation = act_cfg is not None - # if the conv layer is before a norm layer, bias is unnecessary. - if bias == 'auto': - bias = not self.with_norm - self.with_bias = bias - - if self.with_explicit_padding: - pad_cfg = dict(type=padding_mode) - self.padding_layer = build_padding_layer(pad_cfg, padding) - - # reset padding to 0 for conv module - conv_padding = 0 if self.with_explicit_padding else padding - # build convolution layer - self.conv = build_conv_layer( - conv_cfg, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=conv_padding, - dilation=dilation, - groups=groups, - bias=bias) - # export the attributes of self.conv to a higher level for convenience - self.in_channels = self.conv.in_channels - self.out_channels = self.conv.out_channels - self.kernel_size = self.conv.kernel_size - self.stride = self.conv.stride - self.padding = padding - self.dilation = self.conv.dilation - self.transposed = self.conv.transposed - self.output_padding = self.conv.output_padding - self.groups = self.conv.groups - - if self.with_spectral_norm: - self.conv = nn.utils.spectral_norm(self.conv) - - # build normalization layers - if self.with_norm: - # norm layer is after conv layer - if order.index('norm') > order.index('conv'): - norm_channels = out_channels - else: - norm_channels = in_channels - self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels) - self.add_module(self.norm_name, norm) - if self.with_bias: - if isinstance(norm, (_BatchNorm, _InstanceNorm)): - warnings.warn( - 'Unnecessary conv bias before batch/instance norm') - else: - self.norm_name = None - - # build activation layer - if self.with_activation: - act_cfg_ = act_cfg.copy() - # nn.Tanh has no 'inplace' argument - if act_cfg_['type'] not in [ - 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish' - ]: - act_cfg_.setdefault('inplace', inplace) - self.activate = build_activation_layer(act_cfg_) - - # Use msra init by default - self.init_weights() - - @property - def norm(self): - if self.norm_name: - return getattr(self, self.norm_name) - else: - return None - - def init_weights(self): - # 1. It is mainly for customized conv layers with their own - # initialization manners by calling their own ``init_weights()``, - # and we do not want ConvModule to override the initialization. - # 2. For customized conv layers without their own initialization - # manners (that is, they don't have their own ``init_weights()``) - # and PyTorch's conv layers, they will be initialized by - # this method with default ``kaiming_init``. 
- # Note: For PyTorch's conv layers, they will be overwritten by our - # initialization implementation using default ``kaiming_init``. - if not hasattr(self.conv, 'init_weights'): - if self.with_activation and self.act_cfg['type'] == 'LeakyReLU': - nonlinearity = 'leaky_relu' - a = self.act_cfg.get('negative_slope', 0.01) - else: - nonlinearity = 'relu' - a = 0 - kaiming_init(self.conv, a=a, nonlinearity=nonlinearity) - if self.with_norm: - constant_init(self.norm, 1, bias=0) - - def forward(self, x, activate=True, norm=True): - for layer in self.order: - if layer == 'conv': - if self.with_explicit_padding: - x = self.padding_layer(x) - x = self.conv(x) - elif layer == 'norm' and norm and self.with_norm: - x = self.norm(x) - elif layer == 'act' and activate and self.with_activation: - x = self.activate(x) - return x diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/custom.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/custom.py deleted file mode 100644 index d8eb2a709cc7a3a68fc6a1e3a1ad98faef4c5b7b..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/custom.py +++ /dev/null @@ -1,400 +0,0 @@ -import os -import os.path as osp -from collections import OrderedDict -from functools import reduce - -import annotator.uniformer.mmcv as mmcv -import numpy as np -from annotator.uniformer.mmcv.utils import print_log -from prettytable import PrettyTable -from torch.utils.data import Dataset - -from annotator.uniformer.mmseg.core import eval_metrics -from annotator.uniformer.mmseg.utils import get_root_logger -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for semantic segmentation. An example of file structure - is as followed. - - .. code-block:: none - - ├── data - │ ├── my_dataset - │ │ ├── img_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{img_suffix} - │ │ │ │ ├── yyy{img_suffix} - │ │ │ │ ├── zzz{img_suffix} - │ │ │ ├── val - │ │ ├── ann_dir - │ │ │ ├── train - │ │ │ │ ├── xxx{seg_map_suffix} - │ │ │ │ ├── yyy{seg_map_suffix} - │ │ │ │ ├── zzz{seg_map_suffix} - │ │ │ ├── val - - The img/gt_semantic_seg pair of CustomDataset should be of the same - except suffix. A valid img/gt_semantic_seg filename pair should be like - ``xxx{img_suffix}`` and ``xxx{seg_map_suffix}`` (extension is also included - in the suffix). If split is given, then ``xxx`` is specified in txt file. - Otherwise, all files in ``img_dir/``and ``ann_dir`` will be loaded. - Please refer to ``docs/tutorials/new_dataset.md`` for more details. - - - Args: - pipeline (list[dict]): Processing pipeline - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. Default: '.jpg' - ann_dir (str, optional): Path to annotation directory. Default: None - seg_map_suffix (str): Suffix of segmentation maps. Default: '.png' - split (str, optional): Split txt file. If split is specified, only - file with suffix in the splits will be loaded. Otherwise, all - images in img_dir/ann_dir will be loaded. Default: None - data_root (str, optional): Data root for img_dir/ann_dir. Default: - None. - test_mode (bool): If test_mode=True, gt wouldn't be loaded. - ignore_index (int): The label index to be ignored. Default: 255 - reduce_zero_label (bool): Whether to mark label zero as ignored. - Default: False - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. 
Default: None. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, and - self.PALETTE is None, random palette will be generated. - Default: None - """ - - CLASSES = None - - PALETTE = None - - def __init__(self, - pipeline, - img_dir, - img_suffix='.jpg', - ann_dir=None, - seg_map_suffix='.png', - split=None, - data_root=None, - test_mode=False, - ignore_index=255, - reduce_zero_label=False, - classes=None, - palette=None): - self.pipeline = Compose(pipeline) - self.img_dir = img_dir - self.img_suffix = img_suffix - self.ann_dir = ann_dir - self.seg_map_suffix = seg_map_suffix - self.split = split - self.data_root = data_root - self.test_mode = test_mode - self.ignore_index = ignore_index - self.reduce_zero_label = reduce_zero_label - self.label_map = None - self.CLASSES, self.PALETTE = self.get_classes_and_palette( - classes, palette) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.img_dir): - self.img_dir = osp.join(self.data_root, self.img_dir) - if not (self.ann_dir is None or osp.isabs(self.ann_dir)): - self.ann_dir = osp.join(self.data_root, self.ann_dir) - if not (self.split is None or osp.isabs(self.split)): - self.split = osp.join(self.data_root, self.split) - - # load annotations - self.img_infos = self.load_annotations(self.img_dir, self.img_suffix, - self.ann_dir, - self.seg_map_suffix, self.split) - - def __len__(self): - """Total number of samples of data.""" - return len(self.img_infos) - - def load_annotations(self, img_dir, img_suffix, ann_dir, seg_map_suffix, - split): - """Load annotation from directory. - - Args: - img_dir (str): Path to image directory - img_suffix (str): Suffix of images. - ann_dir (str|None): Path to annotation directory. - seg_map_suffix (str|None): Suffix of segmentation maps. - split (str|None): Split txt file. If split is specified, only file - with suffix in the splits will be loaded. Otherwise, all images - in img_dir/ann_dir will be loaded. Default: None - - Returns: - list[dict]: All image info of dataset. - """ - - img_infos = [] - if split is not None: - with open(split) as f: - for line in f: - img_name = line.strip() - img_info = dict(filename=img_name + img_suffix) - if ann_dir is not None: - seg_map = img_name + seg_map_suffix - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - else: - for img in mmcv.scandir(img_dir, img_suffix, recursive=True): - img_info = dict(filename=img) - if ann_dir is not None: - seg_map = img.replace(img_suffix, seg_map_suffix) - img_info['ann'] = dict(seg_map=seg_map) - img_infos.append(img_info) - - print_log(f'Loaded {len(img_infos)} images', logger=get_root_logger()) - return img_infos - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.img_infos[idx]['ann'] - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['seg_fields'] = [] - results['img_prefix'] = self.img_dir - results['seg_prefix'] = self.ann_dir - if self.custom_classes: - results['label_map'] = self.label_map - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set - False). 
- """ - - if self.test_mode: - return self.prepare_test_img(idx) - else: - return self.prepare_train_img(idx) - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys - introduced by pipeline. - """ - - img_info = self.img_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by - pipeline. - """ - - img_info = self.img_infos[idx] - results = dict(img_info=img_info) - self.pre_pipeline(results) - return self.pipeline(results) - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def get_gt_seg_maps(self, efficient_test=False): - """Get ground truth segmentation maps for evaluation.""" - gt_seg_maps = [] - for img_info in self.img_infos: - seg_map = osp.join(self.ann_dir, img_info['ann']['seg_map']) - if efficient_test: - gt_seg_map = seg_map - else: - gt_seg_map = mmcv.imread( - seg_map, flag='unchanged', backend='pillow') - gt_seg_maps.append(gt_seg_map) - return gt_seg_maps - - def get_classes_and_palette(self, classes=None, palette=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - palette (Sequence[Sequence[int]]] | np.ndarray | None): - The palette of segmentation map. If None is given, random - palette will be generated. Default: None - """ - if classes is None: - self.custom_classes = False - return self.CLASSES, self.PALETTE - - self.custom_classes = True - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - if self.CLASSES: - if not set(classes).issubset(self.CLASSES): - raise ValueError('classes is not a subset of CLASSES.') - - # dictionary, its keys are the old label ids and its values - # are the new label ids. - # used for changing pixel labels in load_annotations. - self.label_map = {} - for i, c in enumerate(self.CLASSES): - if c not in class_names: - self.label_map[i] = -1 - else: - self.label_map[i] = classes.index(c) - - palette = self.get_palette_for_custom_classes(class_names, palette) - - return class_names, palette - - def get_palette_for_custom_classes(self, class_names, palette=None): - - if self.label_map is not None: - # return subset of palette - palette = [] - for old_id, new_id in sorted( - self.label_map.items(), key=lambda x: x[1]): - if new_id != -1: - palette.append(self.PALETTE[old_id]) - palette = type(self.PALETTE)(palette) - - elif palette is None: - if self.PALETTE is None: - palette = np.random.randint(0, 255, size=(len(class_names), 3)) - else: - palette = self.PALETTE - - return palette - - def evaluate(self, - results, - metric='mIoU', - logger=None, - efficient_test=False, - **kwargs): - """Evaluate the dataset. 
- - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. 'mIoU', - 'mDice' and 'mFscore' are supported. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str, float]: Default metrics. - """ - - if isinstance(metric, str): - metric = [metric] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metric).issubset(set(allowed_metrics)): - raise KeyError('metric {} is not supported'.format(metric)) - eval_results = {} - gt_seg_maps = self.get_gt_seg_maps(efficient_test) - if self.CLASSES is None: - num_classes = len( - reduce(np.union1d, [np.unique(_) for _ in gt_seg_maps])) - else: - num_classes = len(self.CLASSES) - ret_metrics = eval_metrics( - results, - gt_seg_maps, - num_classes, - self.ignore_index, - metric, - label_map=self.label_map, - reduce_zero_label=self.reduce_zero_label) - - if self.CLASSES is None: - class_names = tuple(range(num_classes)) - else: - class_names = self.CLASSES - - # summary table - ret_metrics_summary = OrderedDict({ - ret_metric: np.round(np.nanmean(ret_metric_value) * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - - # each class table - ret_metrics.pop('aAcc', None) - ret_metrics_class = OrderedDict({ - ret_metric: np.round(ret_metric_value * 100, 2) - for ret_metric, ret_metric_value in ret_metrics.items() - }) - ret_metrics_class.update({'Class': class_names}) - ret_metrics_class.move_to_end('Class', last=False) - - # for logger - class_table_data = PrettyTable() - for key, val in ret_metrics_class.items(): - class_table_data.add_column(key, val) - - summary_table_data = PrettyTable() - for key, val in ret_metrics_summary.items(): - if key == 'aAcc': - summary_table_data.add_column(key, [val]) - else: - summary_table_data.add_column('m' + key, [val]) - - print_log('per class results:', logger) - print_log('\n' + class_table_data.get_string(), logger=logger) - print_log('Summary:', logger) - print_log('\n' + summary_table_data.get_string(), logger=logger) - - # each metric dict - for key, value in ret_metrics_summary.items(): - if key == 'aAcc': - eval_results[key] = value / 100.0 - else: - eval_results['m' + key] = value / 100.0 - - ret_metrics_class.pop('Class', None) - for key, value in ret_metrics_class.items(): - eval_results.update({ - key + '.' + str(name): value[idx] / 100.0 - for idx, name in enumerate(class_names) - }) - - if mmcv.is_list_of(results, str): - for file_name in results: - os.remove(file_name) - return eval_results diff --git a/spaces/tcapelle/calculadora_impuestos/info.md b/spaces/tcapelle/calculadora_impuestos/info.md deleted file mode 100644 index 97be215fa4ce4b25d84d3c8655be737f6c2a38e9..0000000000000000000000000000000000000000 --- a/spaces/tcapelle/calculadora_impuestos/info.md +++ /dev/null @@ -1,3 +0,0 @@ -Ingresa tu renta y presiona enter! - -Los datos de tramos actuales usados los puedes encontrar [aquí](https://www.sii.cl/valores_y_fechas/impuesto_2da_categoria/impuesto2022.htm) y los de la reforma [acá](https://chocale.cl/2022/07/reforma-tributaria-gobierno-claves-proyecto-impuestos/) \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 HD Mod.md b/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 HD Mod.md deleted file mode 100644 index 2e74f1e2d9299bbd29ce6785b6f94a071cf4d7a9..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 HD Mod.md +++ /dev/null @@ -1,48 +0,0 @@ - -

CS 1.6 HD Mod: How to Upgrade Your Counter-Strike Experience

-

If you are a fan of Counter-Strike 1.6, you might be wondering how to make your game look better and more realistic. After all, CS 1.6 is a classic game that was released in 2003, and it shows its age in terms of graphics and sounds. Fortunately, there is a way to enhance your CS 1.6 experience with a mod called CS 1.6 HD Mod.

-

CS 1.6 HD Mod


Download File ===> https://bytlly.com/2uGjX0



-

CS 1.6 HD Mod is a mod that replaces the original models, textures, sounds, and maps of CS 1.6 with high quality ones. It also adds some realistic features such as blood splatter, muzzle flash, and weapon menus. The mod aims to make CS 1.6 look and feel like a modern game, while preserving its gameplay and balance.

-

What Does CS 1.6 HD Mod Include?

-

CS 1.6 HD Mod includes a lot of content that improves the visual and audio aspects of CS 1.6. Here are some of the main features of the mod:

-
    -
  • HD Skins: The mod replaces the default skins of the weapons and the players with high definition ones. The skins are more detailed and realistic, and they fit the theme of each map and team.
  • -
                            • HD Textures: The mod replaces the default textures of the walls, floors, objects, and skyboxes with high resolution ones. The textures are sharper and more colorful, and they create a more immersive environment.
                            • 
  • -
                            • HD Sounds: The mod replaces the default sounds of the weapons, footsteps, explosions, and voices with high quality ones. The sounds are clearer and more realistic, and they enhance the atmosphere of each map.
                            • 
  • -
                            • Realistic HUDs: The mod replaces the default HUD elements for health, ammo, radar, and scoreboard with more realistic ones. The new HUD is more minimalist and elegant, and it displays more useful information.
                            • 
  • -
  • Retextured Maps: The mod replaces the default maps of CS 1.6 with retextured ones. The maps are more detailed and polished, and they have some bug fixes and improvements.
  • -
  • Minor Bug Fixes: The mod fixes some minor bugs and glitches that affect the gameplay of CS 1.6. For example, it fixes the recoil bug, the hitbox bug, and the lag bug.
  • -
  • Bunch of Cool Addons: The mod adds some cool addons that enhance the gameplay of CS 1.6. For example, it adds a knife model, a crosshair changer, a spray logo changer, and a flashlight.
  • -
-

How to Install CS 1.6 HD Mod?

-

Installing CS 1.6 HD Mod is very easy and simple. Here are the steps you need to follow:

-
    -
  1. Download the mod from GameBanana. You will get a file called ultimate_hd_pack_by_ralex_08.7z.
  2. -
  3. Extract the file using a program like WinRAR or 7-Zip. You will get a folder called cstrike_hd.
  4. -
  5. Copy the folder cstrike_hd and paste it into your Counter-Strike 1.6 directory (usually C:\Program Files\Steam\steamapps\common\Half-Life).
  6. -
  7. Launch Counter-Strike 1.6 and go to Options > Video > Renderer > OpenGL.
  8. -
  9. Enjoy your new CS 1.6 HD experience!
  10. -
-

Conclusion

-

CS 1.6 HD Mod is a great mod that enhances your Counter-Strike 1.6 experience with high quality models, textures, sounds, and maps. It also adds some realistic features and fixes some minor bugs. If you want to play CS 1.6 in a new way, you should definitely try this mod.

-

What Are the Benefits of CS 1.6 HD Mod?

-

CS 1.6 HD Mod is not only a cosmetic mod, but also a functional one. It has many benefits that make your CS 1.6 experience more enjoyable and satisfying. Here are some of the benefits of CS 1.6 HD Mod:

-

-
    -
  • Better Graphics: The mod makes your CS 1.6 look like a modern game, with high definition models, textures, and skyboxes. The mod also adds some realistic effects such as blood splatter, muzzle flash, and weapon menus. The mod makes your CS 1.6 more appealing and immersive.
  • -
  • Better Sounds: The mod makes your CS 1.6 sound like a modern game, with high quality sounds for the weapons, footsteps, explosions, and voices. The mod also adds some realistic sounds such as bullet whizzing, shell dropping, and weapon reloading. The mod makes your CS 1.6 more atmospheric and thrilling.
  • -
                            • Better Performance: The mod makes your CS 1.6 run smoother and faster, with minor bug fixes and optimizations. It also fixes some common issues such as the recoil bug, the hitbox bug, and the lag bug. The mod makes your CS 1.6 more stable and reliable.
                            • 
  • -
  • Better Gameplay: The mod makes your CS 1.6 more fun and challenging, with retextured maps and cool addons. The mod also preserves the original gameplay and balance of CS 1.6, so you can still enjoy the classic game mode and mechanics. The mod makes your CS 1.6 more diverse and exciting.
  • -
-

How to Update CS 1.6 HD Mod?

-

CS 1.6 HD Mod is a mod that is constantly updated and improved by its creators and community. It has many updates that add new content and fix bugs and errors. If you want to keep your CS 1.6 HD Mod up to date, you need to follow these steps:

-
    -
  1. Download the latest update from GameBanana. You will get a file called ultimate_hd_pack_update_101_by_ralex_08.zip.
  2. -
  3. Extract the file using a program like WinRAR or 7-Zip. You will get a folder called cstrike_hd.
  4. -
  5. Copy the folder cstrike_hd and paste it into your Counter-Strike 1.6 directory (usually C:\Program Files\Steam\steamapps\common\Half-Life), replacing the old files.
  6. -
  7. Launch Counter-Strike 1.6 and enjoy the new features and fixes of CS 1.6 HD Mod!
  8. -
-

Conclusion

-

CS 1.6 HD Mod is a great mod that enhances your Counter-Strike 1.6 experience with high quality models, textures, sounds, and maps. It also adds some realistic features and fixes some minor bugs. If you want to play CS 1.6 in a new way, you should definitely try this mod.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Clara Ravens Dcv Download Free.md b/spaces/terfces0erbo/CollegeProjectV2/Clara Ravens Dcv Download Free.md deleted file mode 100644 index eebfff0158e67566662d609e98987dcf6f4fe609..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Clara Ravens Dcv Download Free.md +++ /dev/null @@ -1,8 +0,0 @@ -

Clara Ravens Dcv Download


Downloadhttps://bytlly.com/2uGksh



- -Join “Clara Ravens – Colombia’s Illusion” band for the second of the two shows on the Antigua Feria. - -Clara Ravens Dcv Download Download: - Clara Ravens – Dcv Download Download: For…He is Clara Ravens’s second husband.He is the editor and writer of the posthumous book, “Memoirs from a Fall.”He is the author of four.Clara Ravens's personal story will be in. Memoirs from a Fall, is a posthumous book by Clara Ravens, about her first. We expect to receive the requested information before the end of May.. Clara Ravens Dcv Download.A not-for-profit organization that empowers youth through the. Order Form. I’m Clara Ravens, and I’m the founder of “Moveable.” I’m not a. Named after an ancient Roman goddess of spinning, Clara Ravens’s Dcv Download is a top-of-the-line digital spinning wheel that will spin you.The Clara Ravens Blossoms are now available for weddings, special events and holiday decorations. Our Rosa Clara Artificial Roses are a very affordable way to celebrate.Clara Ravens Blossoms are a bright, stylish, low-priced alternative to roses. Clara Ravens’s Dcv Download.Clara Ravens's book, Memoirs from a Fall: A True Story of Trauma, Addiction, and.Memoirs from a Fall: A True Story of Trauma, Addiction, and.Spinning A Love Story A Hero Wants To Come Home To Romance By Clara Ravens.Spinning A Love Story A Hero Wants To Come Home To Romance By Clara Ravens.Included in package Clara Ravens contains the following books: Two versions of the same book- Clara Ravens’s memoirs from a fall A compilation of poems Clara Ravens’s poetry (from Goodnight Ago Through the Rain and Other Poems). Clara Ravens's poems. For the rest of her life, I will be Clara Ravens’s granddaughter. And I. Clara Ravens Dcv Download.The Clara Ravens/Javelin Foundation is dedicated to assisting youth from underserved communities through scholarships. Clara Ravens Dcv Download.Clara Ravens has seen a multitude of challenges in her life. The outlook for a 24-year-old woman from Bosnia is not an easy one, as she struggles to build a new life for herself in 4fefd39f24
-
-
-

diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/((HOT)) Download Sunstroke Project Olia Tira Run Away Zippy.md b/spaces/tialenAdioni/chat-gpt-api/logs/((HOT)) Download Sunstroke Project Olia Tira Run Away Zippy.md deleted file mode 100644 index e05715c20ca9b27a9b263496699601b69c90ba70..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/((HOT)) Download Sunstroke Project Olia Tira Run Away Zippy.md +++ /dev/null @@ -1,37 +0,0 @@ -
-

How to Download Sunstroke Project Olia Tira Run Away Zippy for Free

-

If you are a fan of Eurovision songs, you might have heard of Sunstroke Project Olia Tira Run Away, the catchy tune that represented Moldova in 2010. The song features a memorable saxophone solo that has become an internet meme known as "Epic Sax Guy".

-

But did you know that you can download Sunstroke Project Olia Tira Run Away Zippy for free? Zippy is a popular file-sharing platform that allows you to download and stream music online. In this article, we will show you how to download Sunstroke Project Olia Tira Run Away Zippy in a few simple steps.

-

Download Sunstroke Project Olia Tira Run Away Zippy


Download File ———>>> https://urlcod.com/2uK7O3



-

Step 1: Find the Zippy Link

-

The first step is to find the Zippy link for Sunstroke Project Olia Tira Run Away. You can do this by searching for the song on Google or any other search engine. Alternatively, you can use this link: https://www.zippyshare.com/v/9Q8mZ6Yn/file.html

-

Step 2: Click on the Download Button

-

The next step is to click on the download button on the Zippy page. You will see a green button that says "Download Now". Click on it and wait for a few seconds until the download starts.

-

Step 3: Enjoy the Song

-

The final step is to enjoy the song. You can play it on your computer, smartphone, or any other device that supports MP3 files. You can also share it with your friends or upload it to your social media accounts.

-

That's it! You have successfully downloaded Sunstroke Project Olia Tira Run Away Zippy for free. Now you can listen to this catchy song anytime you want and join the Epic Sax Guy craze.

- -

More About Sunstroke Project Olia Tira Run Away

-

If you want to learn more about Sunstroke Project Olia Tira Run Away, here are some interesting facts:

-
    -
  • The song was composed by Anton Ragoza, Sergey Stepanov, and Alina Galetskaya.
  • -
  • The song was inspired by the Moldovan folk music and the saxophone solo was improvised by Sergey Stepanov.
  • -
  • The song finished 22nd out of 25 in the Eurovision final, but it gained popularity online thanks to the viral video of Epic Sax Guy.
  • -
  • The song has been remixed, parodied, and covered by many artists and fans around the world.
  • -
  • The song was also featured in the video game Just Dance 2018.
  • -
-

Now you know more about Sunstroke Project Olia Tira Run Away, the song that made Epic Sax Guy famous. We hope you enjoyed this article and learned something new. If you have any questions or feedback, feel free to leave a comment below.

- -

How to Support Sunstroke Project Olia Tira

-

If you liked Sunstroke Project Olia Tira Run Away and want to support the artists, here are some ways you can do that:

-

-
    -
  • Follow them on their social media accounts, such as Facebook, Twitter, Instagram, and YouTube.
  • -
  • Buy their albums and singles on iTunes, Spotify, Amazon, or other platforms.
  • -
  • Watch their videos and live performances on YouTube and other sites.
  • -
  • Leave positive reviews and ratings on their music and videos.
  • -
  • Share their music and videos with your friends and family.
  • -
-

By supporting Sunstroke Project Olia Tira, you are helping them to continue making music and entertaining people. You are also showing your appreciation for their talent and creativity. Thank you for being a fan of Sunstroke Project Olia Tira Run Away.

-
-
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Liliths Cave Jewish Tales of the Supernatural as a PDF Book.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Liliths Cave Jewish Tales of the Supernatural as a PDF Book.md deleted file mode 100644 index 8be1abdc4c9b3c2c3ce673264b4a776797158081..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Liliths Cave Jewish Tales of the Supernatural as a PDF Book.md +++ /dev/null @@ -1,62 +0,0 @@ - -

Lilith's Cave: Jewish Tales of the Supernatural

-

Lilith's Cave: Jewish Tales of the Supernatural is a collection of 55 stories from Jewish folklore, edited by Howard Schwartz and illustrated by Uri Shulevitz. The book was published in 1988 by Oxford University Press and won the National Jewish Book Award for Jewish Folklore and Anthropology in 1989.

-

Lilith's Cave: Jewish Tales of the Supernatural books pdf file


Download ===== https://urlcod.com/2uKagn



-

The stories in the book are divided into six categories: Lilith and the Demons of the Night, The Dybbuk and Other Tales of Possession, The Golem and Other Tales of the Artificial Man, Miracles and Wonders, The Rabbis and the Magic of the Torah, and Dreams and Visions. The stories feature a variety of supernatural beings and phenomena, such as Lilith, the first wife of Adam who became a demoness; dybbuks, spirits that possess living people; golems, artificial creatures made of clay; miracles performed by rabbis and holy men; and dreams that reveal hidden truths or foretell the future.

-

The book is based on Schwartz's extensive research into Jewish folklore from various sources, such as oral traditions, manuscripts, books, journals, and newspapers. He also provides an introduction to each story, explaining its origin, history, and meaning. The book is intended for both adults and young readers who are interested in Jewish culture and literature.

Lilith's Cave: Jewish Tales of the Supernatural is not only a fascinating and entertaining book, but also a valuable resource for anyone who wants to learn more about the rich and diverse heritage of Jewish folklore. The stories reflect the historical and cultural experiences of the Jewish people, as well as their beliefs, values, and imagination. They also show how Jewish folklore has influenced and been influenced by other cultures and traditions, such as Christianity, Islam, Kabbalah, and Hasidism.

-

Lilith's Cave: Jewish Folklore and Legends pdf download
-How to read Lilith's Cave: Jewish Tales of the Supernatural online
-Lilith's Cave: Jewish Stories of Magic and Mystery pdf free
-Lilith's Cave: Jewish Supernatural Folktales by Howard Schwartz ebook
-Download pdf file of Lilith's Cave: Jewish Tales of the Supernatural
-Lilith's Cave: Jewish Tales of the Supernatural book review
-Lilith's Cave: Jewish Tales of the Supernatural pdf archive.org
-Lilith's Cave: Jewish Tales of the Supernatural google books
-Lilith's Cave: Jewish Tales of the Supernatural goodreads
-Lilith's Cave: Jewish Tales of the Supernatural summary
-Lilith's Cave: Jewish Tales of the Supernatural sources and commentary
-Lilith's Cave: Jewish Tales of the Supernatural bibliography and glossary
-Lilith's Cave: Jewish Tales of the Supernatural table of contents and index
-Lilith's Cave: Jewish Tales of the Supernatural introduction by Howard Schwartz
-Lilith's Cave: Jewish Tales of the Supernatural stories list
-The Queen of Sheba story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Bride of Demons story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Homunculus of Maimonides story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Wizard's Apprentice story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-Helen of Troy story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Finger story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Punishment story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Elusive Diamond story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-Lilith's Cave story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Bridegroom story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-The Dead Fiancee story from Lilith's Cave: Jewish Tales of the Supernatural pdf
-Howard Schwartz author of Lilith's Cave: Jewish Tales of the Supernatural biography
-Howard Schwartz author of Lilith's Cave: Jewish Tales of the Supernatural interview
-Howard Schwartz author of Lilith's Cave: Jewish Tales of the Supernatural other books
-Howard Schwartz author of Lilith's Cave: Jewish Tales of the Supernatural awards and honors
-What is a dybbuk in Jewish folklore and how does it relate to Lilith's Cave: Jewish Tales of the Supernatural?
-What is a golem in Jewish folklore and how does it relate to Lilith's Cave: Jewish Tales of the Supernatural?
-What is a homunculus in Jewish folklore and how does it relate to Lilith's Cave: Jewish Tales of the Supernatural?
-What is a lilim in Jewish folklore and how does it relate to Lilith's Cave: Jewish Tales of the Supernatural?
-What is a shedeem in Jewish folklore and how does it relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Lilith in Jewish folklore and how does she relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Maimonides in Jewish history and how does he relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Solomon in Jewish history and how does he relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Helen of Troy in Greek mythology and how does she relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Pandora in Greek mythology and how does she relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Persephone in Greek mythology and how does she relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Bluebeard in European folklore and how does he relate to Lilith's Cave: Jewish Tales of the Supernatural?
-Who is Faust in European literature and how does he relate to Lilith's Cave: Jewish Tales of the Supernatural?
-How to compare and contrast different stories from Lilith's Cave: Jewish Tales of the Supernatural pdf
-How to analyze and interpret different themes from Lilith's Cave: Jewish Tales of the Supernatural pdf
-How to write an essay or a book report on Lilith's Cave: Jewish Tales of the Supernatural pdf
-How to cite or reference Lilith's Cave: Jewish Tales of the Supernatural pdf in MLA or APA format
-How to teach or learn from Lilith's Cave: Jewish Tales of the Supernatural pdf in a classroom or online setting
-How to find more resources or information on Lilith's Cave: Jewish Tales of the Supernatural pdf or related topics

-

The book is highly recommended for anyone who enjoys reading stories that are full of mystery, magic, humor, and wisdom. It is also a great way to introduce young readers to the world of Jewish folklore and literature. Lilith's Cave: Jewish Tales of the Supernatural is a book that will enchant and enlighten readers of all ages.

To give a taste of the book, here is a quote from one of the stories, "The Demon's Bride":

-
-

"The demon said to her, 'I have come to marry you.' She said to him, 'But I am already married.' He said to her, 'Your husband is dead. He died at the very moment that I entered this room.' She said to him, 'How can that be? He was alive and well when I left him.' He said to her, 'Look out the window and you will see.' She looked out the window and saw her husband lying on the ground, surrounded by a crowd of people. She screamed and fainted."

-
-

This story is one of the many examples of how Lilith's Cave: Jewish Tales of the Supernatural captivates the reader with its suspenseful and surprising plots. The book is full of stories that will keep you on the edge of your seat, wondering what will happen next.

-
-
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Novel Namaku Hiroko Pdf Download ((HOT)).md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Novel Namaku Hiroko Pdf Download ((HOT)).md deleted file mode 100644 index abc97ac26b9d541632830212019681718646698c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Novel Namaku Hiroko Pdf Download ((HOT)).md +++ /dev/null @@ -1,15 +0,0 @@ - -

Download Novel Namaku Hiroko PDF Download: A Review

-

If you are looking for a novel that explores the themes of identity, culture, and family, you might want to check out Namaku Hiroko by Nh. Dini. This novel tells the story of Hiroko, a young Japanese woman who moves to Indonesia with her husband, a diplomat. There, she faces various challenges and struggles to adapt to a new environment and society. She also learns more about her husband's past and his family, which leads her to question her own identity and choices.

-

download novel namaku hiroko pdf download


Download Zip ––– https://urlcod.com/2uK6dV



-

Namaku Hiroko is a novel that blends realism and symbolism, as well as historical and cultural references. It offers a glimpse into the life of a Japanese woman in Indonesia during the 1960s, a time of political and social turmoil. The novel also explores the themes of feminism, colonialism, and nationalism, as well as the role of women in different cultures.

-

If you want to read this novel, you can download it in PDF format from various online sources. However, you should be careful about the quality and legality of the files you download. Some websites may offer low-quality or corrupted files, or even malware or viruses. Others may violate the copyright of the author or the publisher. Therefore, you should always download from reputable and trustworthy websites that respect the rights of the creators.

-

One of the websites that you can use to download novel namaku hiroko pdf download is example.com. This website offers high-quality and legal PDF files of various novels, including Namaku Hiroko. You can also find other novels by Nh. Dini and other Indonesian authors on this website. To download the novel, you just need to register for a free account and follow the instructions on the website.

-

Namaku Hiroko is a novel that will make you think and feel. It will take you on a journey of discovery and reflection. If you are interested in reading this novel, you can download it in PDF format from example.com. You will not regret it!

-

- -

Namaku Hiroko is a novel that was first published in 1974 by Balai Pustaka, the state-owned publisher of Indonesia. It was written by Nh. Dini, a prominent Indonesian author who was also a former flight attendant and diplomat's wife. Nh. Dini is known for her novels and short stories that depict the lives and experiences of women in Indonesia and abroad. She has won several awards and honors for her works, such as the SEA Write Award and the Cultural Award from the Indonesian government.

-

The novel was translated into English by Nur Sutan Iskandar and published by Lontar Foundation in 2013. The English translation has received positive reviews from critics and readers alike. It has been praised for its elegant and poetic language, as well as its insightful and nuanced portrayal of the protagonist and her surroundings. The translation also preserves the cultural and historical context of the original novel, making it accessible and enjoyable for international audiences.

-

Namaku Hiroko is a novel that you should not miss if you are interested in Indonesian literature and culture. It is a novel that will enrich your mind and soul with its beautiful and powerful story. You can download it in PDF format from example.com and start reading it today!

-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aproveite o Dream League Soccer 2019 Mod Apk Obb no seu dispositivo Android.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aproveite o Dream League Soccer 2019 Mod Apk Obb no seu dispositivo Android.md deleted file mode 100644 index befbebfbbbbc6027ff73d702fed692d9b581ed6e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aproveite o Dream League Soccer 2019 Mod Apk Obb no seu dispositivo Android.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Dream League Soccer 2019: How to Download and Play in Portuguese

-

If you are a fan of soccer games, you might have heard of Dream League Soccer 2019, one of the most popular and realistic soccer games for mobile devices. But did you know that you can also play it in Portuguese? In this article, we will show you how to download and play Dream League Soccer 2019 in Portuguese, so you can enjoy this amazing game in your native language.

-

Introduction

-

What is Dream League Soccer 2019?

-

Dream League Soccer 2019, or DLS 19 for short, is a soccer game developed by First Touch Games, a British studio that specializes in sports games. DLS 19 is the latest installment of the Dream League Soccer series, which started in 2016. DLS 19 allows you to create your own dream team from over 3,500 licensed players, customize your stadium, kits, and logos, and compete in various modes and leagues against other players from around the world. DLS 19 also features realistic graphics, animations, and sound effects, as well as a dynamic gameplay that adapts to your skill level.

-

dream league soccer 2019 download em português apk + obb


Download Zip ››› https://bltlly.com/2uOqHF



-

Why play Dream League Soccer 2019 in Portuguese?

-

Playing Dream League Soccer 2019 in Portuguese has many advantages. First of all, you can understand the game better, as all the menus, instructions, dialogues, and commentary are translated into Portuguese. This way, you can avoid any confusion or misunderstanding that might occur if you play the game in a different language. Secondly, you can enjoy the game more, as you can relate to the players, teams, and leagues that are familiar to you. For example, you can choose your favorite Brazilian or Portuguese players, such as Neymar, Cristiano Ronaldo, or Pele, and play with them in your dream team. You can also compete in the Brazilian or Portuguese leagues, such as the Brasileirão or the Primeira Liga, and challenge other teams that you know and love. Thirdly, you can learn more about soccer culture and history, as you can discover new players, teams, and leagues that are popular or important in the Portuguese-speaking world. For example, you can learn about the legendary players that have played for Benfica or Flamengo, or the rivalries that exist between Porto and Sporting or Corinthians and Palmeiras.

-

How to download Dream League Soccer 2019 in Portuguese

-

Requirements for Dream League Soccer 2019

-

Before you download Dream League Soccer 2019 in Portuguese, you need to make sure that your device meets the minimum requirements for the game. According to the official website of First Touch Games, these are the requirements for DLS 19:

-
    -
  • Android OS version: 4.4 or higher
  • -
  • RAM: at least 1 GB
  • -
  • Storage space: at least 350 MB
  • -
  • Internet connection: required for online features
  • -
-

If your device meets these requirements, you can proceed to download the game files.

-

Steps to download Dream League Soccer 2019 APK and OBB files

-

To download Dream League Soccer 2019 in Portuguese, you need to download two files: the APK file and the OBB file. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains the game assets, such as graphics, sounds, and languages. The OBB file is necessary to play the game in Portuguese, as it includes the Portuguese language pack. Here are the steps to download the APK and OBB files for DLS 19:

-
    -
  1. Go to the official website of First Touch Games and click on the "Download" button for DLS 19. This will redirect you to the Google Play Store page of the game.
  2. -
  3. On the Google Play Store page, click on the "Install" button to download and install the APK file of DLS 19 on your device. This will also download the OBB file of DLS 19 automatically, but it will be stored in a different location on your device.
  4. -
  5. To access the OBB file of DLS 19, you need to use a file manager app, such as ES File Explorer or ZArchiver. You can download these apps from the Google Play Store for free.
  6. -
  7. Open the file manager app and navigate to the following folder: Android/obb/com.firsttouchgames.dls3. This is where the OBB file of DLS 19 is stored. The OBB file should have a name like main.67.com.firsttouchgames.dls3.obb.
  8. -
  9. Copy or move the OBB file of DLS 19 to another folder on your device, such as Downloads or Documents. This is to prevent the OBB file from being deleted when you uninstall the APK file of DLS 19 later.
  10. -
-

Steps to install Dream League Soccer 2019 APK and OBB files

-

After you have downloaded the APK and OBB files of DLS 19, you need to install them on your device. Here are the steps to install the APK and OBB files of DLS 19:

-
    -
  1. Go to the Settings app on your device and tap on Security or Privacy. Then, enable the option that allows you to install apps from unknown sources. This is necessary to install the APK file of DLS 19 that you downloaded from the official website of First Touch Games.
  2. -
  3. Go back to the file manager app and navigate to the folder where you stored the APK file of DLS 19. The APK file should have a name like com.firsttouchgames.dls3.apk.
  4. -
  5. Tap on the APK file of DLS 19 and follow the instructions to install it on your device. This will overwrite the previous version of DLS 19 that you installed from the Google Play Store.
  6. -
  7. After installing the APK file of DLS 19, go back to the file manager app and navigate to the folder where you stored the OBB file of DLS 19.
  8. -
9. Copy or move the OBB file of DLS 19 back to its original folder: Android/obb/com.firsttouchgames.dls3. This will replace the previous OBB file of DLS 19 that was downloaded from the Google Play Store. If you prefer to do this step from a computer over USB, a command-line alternative is sketched just after these steps.
  10. -
  11. Now, you have successfully installed Dream League Soccer 2019 in Portuguese on your device. You can launch the game from your app drawer or home screen.
  12. -
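If you prefer to handle the files from a computer over USB instead of an on-device file manager, the same install can be scripted with ADB. The snippet below is only a rough sketch, not an official tool: it assumes the Android platform tools (adb) are installed and on your PATH, that USB debugging is enabled on your phone, and that the file and folder names match the ones used in the steps above (the exact OBB file name on your device may differ, and newer Android versions may restrict access to the Android/obb folder).

```python
import subprocess

# File and folder names taken from the steps above; adjust them to match your device.
APK_FILE = "com.firsttouchgames.dls3.apk"
OBB_FILE = "main.67.com.firsttouchgames.dls3.obb"
OBB_DIR = "/sdcard/Android/obb/com.firsttouchgames.dls3"

# Install (or reinstall) the APK while keeping any existing game data.
subprocess.run(["adb", "install", "-r", APK_FILE], check=True)

# Make sure the OBB folder exists on the device, then copy the OBB file into it.
subprocess.run(["adb", "shell", "mkdir", "-p", OBB_DIR], check=True)
subprocess.run(["adb", "push", OBB_FILE, f"{OBB_DIR}/{OBB_FILE}"], check=True)
```

If the scripted copy fails because of storage restrictions on your Android version, fall back to the file manager method described in the steps above.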
-

How to play Dream League Soccer 2019 in Portuguese

-

How to change the language settings in Dream League Soccer 2019

-

To play Dream League Soccer 2019 in Portuguese, you need to change the language settings in the game. Here are the steps to change the language settings in DLS 19:

-
    -
  1. Launch the game and tap on the gear icon on the top right corner of the screen. This will open the settings menu.
  2. -
  3. Tap on the "Language" option and select "Português" from the list of available languages. This will change the language of the game to Portuguese.
  4. -
  5. Tap on the back arrow on the top left corner of the screen to return to the main menu. You will see that all the texts and voices in the game are now in Portuguese.
  6. -
-

You can also change the language settings in DLS 19 anytime you want by following the same steps.

-

How to create your own dream team in Dream League Soccer 2019

-

One of the main features of Dream League Soccer 2019 is that you can create your own dream team from over 3,500 licensed players. You can customize your team name, logo, kit, stadium, and manager, as well as recruit, train, and transfer players. Here are the steps to create your own dream team in DLS 19:

-

Como baixar dream league soccer 2019 mod apk + obb em português
-Dream league soccer 2019 apk + obb atualizado com times brasileiros
-Dream league soccer 2019 hack apk + obb download grátis em português
-Dream league soccer 2019 versão 6.13 mod apk + obb download em português
-Dream league soccer 2019 apk + obb offline com dinheiro infinito em português
-Dream league soccer 2019 mod apk + obb com jogadores desbloqueados em português
-Dream league soccer 2019 apk + obb com kits e logos atualizados em português
-Dream league soccer 2019 mod apk + obb com gráficos melhorados em português
-Dream league soccer 2019 apk + obb com narração em português do Brasil
-Dream league soccer 2019 mod apk + obb com novos jogadores e estádios em português
-Dream league soccer 2019 apk + obb compatível com android 4.4 ou superior em português
-Dream league soccer 2019 mod apk + obb com multiplayer online em português
-Dream league soccer 2019 apk + obb sem anúncios e sem root em português
-Dream league soccer 2019 mod apk + obb com modo carreira e torneios em português
-Dream league soccer 2019 apk + obb com licença oficial da FIFA em português
-Dream league soccer 2019 mod apk + obb com times e seleções clássicas em português
-Dream league soccer 2019 apk + obb com jogabilidade realista e inteligente em português
-Dream league soccer 2019 mod apk + obb com trilha sonora original e personalizada em português
-Dream league soccer 2019 apk + obb com suporte a controle externo em português
-Dream league soccer 2019 mod apk + obb com editor de uniformes e escudos em português
-Dream league soccer 2019 apk + obb com ranking global e conquistas em português
-Dream league soccer 2019 mod apk + obb com transferências atualizadas e mercado livre em português
-Dream league soccer 2019 apk + obb com câmera ajustável e zoom em português
-Dream league soccer 2019 mod apk + obb com comentários e dicas em português
-Dream league soccer 2019 apk + obb com modo treino e desafios em português

-
    -
  1. Launch the game and tap on "Dream League" on the main menu. This will take you to the team management screen.
  2. -
  3. Tap on "Team Management" and then on "Edit Team". This will allow you to edit your team name, logo, kit, and stadium. You can choose from a variety of options or create your own custom designs.
  4. -
  5. Tap on "Team Management" and then on "Player Development". This will allow you to train your players and improve their skills and attributes. You can use coins or gems to upgrade your players or buy new ones from the transfer market.
  6. -
  7. Tap on "Team Management" and then on "Manager". This will allow you to edit your manager name, appearance, and nationality. You can also choose your preferred formation, tactics, and style of play.
  8. -
-

After creating your own dream team in DLS 19, you can start playing matches and competing in different modes and leagues.

-

How to compete in different modes and leagues in Dream League Soccer 2019

-

Dream League Soccer 2019 offers various modes and leagues for you to test your skills and challenge other players from around the world. Here are some of the modes and leagues that you can play in DLS 19:

-
    -
  • Dream League: This is the main mode of DLS 19, where you can compete in six divisions and try to reach the Elite Division. You can also qualify for the Global Challenge Cup and the All Stars Cup, where you can face the best teams in the world.
  • -
  • Career: This is a mode where you can play as a single player and try to become a soccer legend. You can choose your position, nationality, and club, and progress through different levels of difficulty. You can also earn coins and gems by completing achievements and objectives.
  • -
  • Events: This is a mode where you can participate in special events that are updated regularly. You can win exclusive rewards by completing challenges and missions related to the events.
  • -
  • Online: This is a mode where you can play online matches against other players from around the world. You can join or create a league, invite your friends, chat with other players, and climb up the leaderboards.
  • -
-

Dream League Soccer 2019 is a fun and addictive soccer game that you can play in Portuguese. You can download and install it easily by following our guide above. You can also create your own dream team, customize your settings, and compete in different modes and leagues. So what are you waiting for? Download Dream League Soccer 2019 today and enjoy this amazing game in your native language!

-

Conclusion

-

Summary of the main points

-

In this article, we have shown you how to download and play Dream League Soccer 2019 in Portuguese. We have explained what Dream League Soccer 2019 is, why you should play it in Portuguese, how to download and install it on your device, how to change the language settings, how to create your own dream team, and how to compete in different modes and leagues. We hope that this article has been helpful and informative for you.

-

Call to action for the readers

-

If you liked this article, please share it with your friends who might be interested in playing Dream League Soccer 2019 in Portuguese. Also, feel free to leave a comment below if you have any questions or feedback about this article or about Dream League Soccer 2019. We would love to hear from you!

-

FAQs

-

Here are some of the frequently asked questions about Dream League Soccer 2019 in Portuguese:

-

Q: How can I get more coins and gems in Dream League Soccer 2019?

-

A: There are several ways to get more coins and gems in Dream League Soccer 2019. You can earn them by playing matches, completing achievements and objectives, participating in events, watching ads, or buying them with real money. You can also use some tricks and hacks to get unlimited coins and gems, but we do not recommend this as it might ruin the fun of the game or get you banned.

-

Q: How can I update Dream League Soccer 2019 to the latest version?

-

A: To update Dream League Soccer 2019 to the latest version, you need to follow the same steps that you used to download and install the game in Portuguese. You need to download the latest APK and OBB files from the official website of First Touch Games and install them on your device. This will update the game to the latest version and keep your progress and settings intact.

-

Q: How can I play Dream League Soccer 2019 offline?

-

A: You can play Dream League Soccer 2019 offline by turning off your internet connection before launching the game. This will allow you to play the Dream League and Career modes without any interruptions. However, you will not be able to access the Online and Events modes, as they require an internet connection to work.

-

Q: How can I backup or restore my Dream League Soccer 2019 data?

-

A: To backup or restore your Dream League Soccer 2019 data, you need to use a file manager app, such as ES File Explorer or ZArchiver. You need to copy or move the folder Android/data/com.firsttouchgames.dls3 from your device storage to another location, such as a cloud service or an external storage device. This folder contains all your game data, such as your progress, settings, and customizations. To restore your game data, you need to copy or move the folder back to its original location on your device storage.
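If your device is connected to a computer, the same copy can also be done over ADB instead of an on-device file manager. The sketch below is only an illustration, assuming the Android platform tools (adb) are installed, USB debugging is enabled, and the folder name given above is correct for your install; newer Android versions may block access to the Android/data folder entirely.

```python
import subprocess
from pathlib import Path

GAME_DATA = "/sdcard/Android/data/com.firsttouchgames.dls3"  # folder named above
BACKUP_DIR = Path("dls19_backup")                            # backup folder on the computer

def backup() -> None:
    # Copy the game data folder from the phone to the computer.
    BACKUP_DIR.mkdir(exist_ok=True)
    subprocess.run(["adb", "pull", GAME_DATA, str(BACKUP_DIR)], check=True)

def restore() -> None:
    # Copy the backed-up folder back under Android/data on the phone.
    subprocess.run(
        ["adb", "push", str(BACKUP_DIR / "com.firsttouchgames.dls3"), "/sdcard/Android/data/"],
        check=True,
    )

if __name__ == "__main__":
    backup()
```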

-

Q: How can I contact the developers of Dream League Soccer 2019?

-

A: To contact the developers of Dream League Soccer 2019, you can use one of the following methods:

-
    -
  • Email: support@ftgames.com
  • -
  • Facebook: https://www.facebook.com/dreamleaguesoccer/
  • -
  • Twitter: https://twitter.com/firsttouchgames
  • -
  • Instagram: https://www.instagram.com/firsttouchgames/
  • -
  • YouTube: https://www.youtube.com/user/FirstTouchGames
  • -
-

You can also visit their official website at https://www.firsttouchgames.com/ for more information and news about their games.

-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arcade Oyunlar Tarihi - Arcade Yklemeden nce Bilmen Gerekenler.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arcade Oyunlar Tarihi - Arcade Yklemeden nce Bilmen Gerekenler.md deleted file mode 100644 index a6b93b9229addf10cc0254252a7aa1d210aa7d2e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Arcade Oyunlar Tarihi - Arcade Yklemeden nce Bilmen Gerekenler.md +++ /dev/null @@ -1,152 +0,0 @@ - -

Arcade Games: What Are They and Why Should You Play Them?

-

If you love video games, chances are you have played or heard of arcade games. These are coin-operated machines that offer short but intense gaming experiences in public places such as malls, amusement arcades, game shops, restaurants, and bars. They usually feature simple and intuitive controls, colorful graphics, catchy sound effects, and high scores that challenge players to beat their own or others' records.

-

Arcade games have been around since the late 1970s and have influenced many aspects of video game culture and history. They have also provided countless hours of fun and entertainment for millions of people around the world. But what makes arcade games so special and appealing? And how can you enjoy them today without spending a fortune on coins or tokens? In this article, we will answer these questions and more. Read on to discover the amazing world of arcade games!

-

arcade yükle


Download Zip · https://bltlly.com/2uOmBq



-

History

-

The first arcade game is generally considered to be Computer Space, released in 1971 by Nutting Associates. It was a space combat game based on Spacewar!, the pioneering 1962 computer game. However, it was not very successful due to its complex gameplay and lack of instructions.

-

The breakthrough came in 1978 with Space Invaders, developed by Taito in Japan. It was a fixed shooter game where the player had to shoot down waves of alien invaders before they reached the bottom of the screen. It was a huge hit that sparked a craze for arcade games and led to the creation of many clones and variations.

-

The golden age of arcade games lasted from the late 1970s to the mid-1980s, when dozens of genres and styles emerged and competed for popularity. Some of the most iconic titles from this era and the years that immediately followed include Pac-Man, Donkey Kong, Frogger, Galaga, Centipede, Asteroids, Defender, Dig Dug, Burger Time, Q*bert, Pole Position, Dragon's Lair, Tron, Star Wars, Mario Bros., Tetris, Gauntlet, Out Run, Rampage, Double Dragon, Street Fighter II, and many more.

-

The decline of arcade games began in the late 1980s and early 1990s, when home consoles and personal computers became more powerful and affordable, offering similar or better gaming experiences at home. Arcade games also faced competition from other forms of entertainment, such as movies, music, television, and sports. The rise of online gaming also reduced the social appeal of arcades.

-

However, arcade games did not disappear completely. They continued to evolve and innovate with new technologies and trends, such as 3D graphics, motion sensors, virtual reality, networked multiplayer, online leaderboards, card readers, touch screens, mobile devices, etc. Some of the most successful arcade games from the 1990s onwards include Mortal Kombat, Virtua Fighter, Time Crisis, Dance Dance Revolution, The House of the Dead, Crazy Taxi, Tekken, Soul Calibur, Marvel vs. Capcom, Guitar Hero, Angry Birds, Temple Run, Candy Crush Saga, and many more.

-

Benefits

-

Playing arcade games can have many benefits for your physical and mental health, as well as your general well-being. Here are some of them:

-
    -
  • Improving cognitive skills: Arcade games can enhance your memory, attention, concentration, problem-solving, logic, creativity, and spatial awareness. They can also stimulate your brain and prevent cognitive decline as you age.
  • -
  • Improving reflexes: Arcade games can train your hand-eye coordination, reaction time, accuracy, and agility. They can also help you develop fine motor skills and dexterity.
  • -
• Improving vision: Arcade games can improve your visual acuity, contrast sensitivity, peripheral vision, and color perception. They can also be used to help manage visual impairments such as amblyopia (lazy eye).
  • -
  • Relieving stress: Arcade games can provide a fun and relaxing way to escape from the pressures and worries of everyday life. They can also release endorphins, dopamine, and serotonin in your brain, which are natural chemicals that make you feel happy and calm.
  • -
  • Increasing interest in history: Arcade games can expose you to different historical periods, cultures, events, and personalities. They can also inspire you to learn more about the history of video games and technology.
  • -
-

How to Play

-

If you want to play arcade games today, you have several options to choose from. Here are some of them:

-
    -
1. Using emulators: Emulators are software programs that mimic the hardware and software of arcade machines on your PC or mobile device. They allow you to play arcade games using ROM files, which are digital copies of the original game data. You can find many emulators and ROMs online for free or for a small fee. However, be aware that downloading ROMs may be illegal in some countries if you do not own the original game or have the permission of the copyright holder. A minimal example of launching an emulator from a script is sketched just after this list.
  2. -
  3. Using online platforms: Online platforms are websites or apps that offer arcade games that you can play directly on your browser or device without downloading anything. They usually have a large collection of games that you can access for free or with a subscription. Some examples of online platforms are ClassicReload.com, Internet Archive's Arcade Library, Arcade Spot, and MiniClip.com.
  4. -
  5. Using dedicated hardware: Dedicated hardware are devices that are designed specifically for playing arcade games. They usually have built-in games or cartridges that you can plug in and play. They may also have authentic controls such as joysticks, buttons, trackballs, light guns, etc. Some examples of dedicated hardware are Arcade1Up Cabinets, AtGames Legends Ultimate Arcade Machine, Neo Geo Mini, and Nintendo Switch Arcade Stick Pro.
  6. -
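For the emulator option above, most desktop arcade emulators can also be started from a script once they are installed. The snippet below is a hypothetical sketch using MAME as an example, assuming the mame executable is on your PATH and that you keep legally obtained ROM files in a local roms folder; the ROM name pacman is just a placeholder.

```python
import subprocess

ROM_DIR = "roms"      # folder containing your legally obtained ROM files
ROM_NAME = "pacman"   # placeholder; use the name of a ROM you actually have

# Launch MAME with the chosen ROM; -rompath tells it where to look for ROM files.
subprocess.run(["mame", ROM_NAME, "-rompath", ROM_DIR], check=True)
```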
-

Conclusion

-

Arcade games are a fascinating and enjoyable form of video gaming with a rich and diverse history. They offer many benefits for your physical and mental health, as well as your general well-being. They also give you a chance to experience the thrill and excitement of playing in public places with other people. You can play arcade games today using various methods such as emulators, online platforms, or dedicated hardware. So what are you waiting for? Grab some coins or tokens and start playing some arcade games today!

-

arcade oyunları yükle
-arcade yükle apk
-arcade yükle pc
-arcade yükle android
-arcade yükle ios
-arcade yükle indir
-arcade yükle ücretsiz
-arcade yükle online
-arcade yükle oyna
-arcade yükle windows 10
-arcade yükle mac
-arcade yükle linux
-arcade yükle steam
-arcade yükle google play
-arcade yükle app store
-arcade yükle web sitesi
-arcade yükle nasıl yapılır
-arcade yükle hileleri
-arcade yükle incelemesi
-arcade yükle tavsiyeleri
-arcade yükle en iyi oyunlar
-arcade yükle yeni oyunlar
-arcade yükle retro oyunlar
-arcade yükle klasik oyunlar
-arcade yükle eğlenceli oyunlar
-arcade yükle zor oyunlar
-arcade yükle kolay oyunlar
-arcade yükle çocuk oyunları
-arcade yükle yetişkin oyunları
-arcade yükle aksiyon oyunları
-arcade yükle macera oyunları
-arcade yükle spor oyunları
-arcade yükle strateji oyunları
-arcade yükle bulmaca oyunları
-arcade yükle müzik oyunları
-arcade yükle simülasyon oyunları
-arcade yükle rol yapma oyunları
-arcade yükle savaş oyunları
-arcade yükle korku oyunları
-arcade yükle komedi oyunları
-arcade yükle araba oyunları
-arcade yükle uçak oyunları
-arcade yükle bisiklet oyunları
-arcade yükle futbol oyunları
-arcade yükle basketbol oyunları
-arcade yükle tenis oyunları
-arcade yükle golf oyunları
-arcade yükle bowling oyunları
-arcade yükle bilardo oyunları

-

FAQs

-

Where can I find arcade games near me?

-

You can use online directories such as Arcade Finder, Arcade Near Me, or Find a Game Near You to locate arcades or other places that have arcade games in your area. You can also use Google Maps or Yelp to search for nearby businesses that have arcade games. You can also ask your friends or family members if they know any places that have arcade games.

-

How much do arcade games cost?

-

The cost of arcade games varies depending on the type, location, and popularity of the game. Generally, arcade games charge a fixed amount of coins or tokens per play, which can range from 25 cents to several dollars. Some arcade games may also offer discounts or bonuses for multiple plays or high scores. You can usually buy coins or tokens from a machine or a cashier at the arcade. Some arcades may also accept credit cards or mobile payments.

-

What genres of arcade games are available?

-

Arcade games cover a wide range of genres and styles, such as action, adventure, puzzle, racing, sports, fighting, shooting, rhythm, simulation, strategy, etc. Some of the most popular genres of arcade games are:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
GenreDescriptionExamples
ActionGames that involve fast-paced gameplay, physical challenges, and reflexes.Pac-Man, Donkey Kong, Frogger, Super Mario Bros.
AdventureGames that involve exploration, story, and interaction with characters and environments.Dragon's Lair, The Legend of Zelda, Myst, Lara Croft: Tomb Raider
PuzzleGames that involve logic, strategy, and problem-solving.Tetris, Bubble Bobble, Lemmings, Portal
RacingGames that involve driving or riding vehicles on tracks or courses.Pole Position, Out Run, Mario Kart, Need for Speed
SportsGames that simulate or mimic real-world sports or activities.Pong, NBA Jam, FIFA Soccer, Wii Sports
FightingGames that involve combat between two or more characters using weapons or martial arts.Street Fighter II, Mortal Kombat, Tekken, Soul Calibur
ShootingGames that involve shooting targets or enemies using guns or other projectiles.Space Invaders, Asteroids, Duck Hunt, Halo
RhythmGames that involve matching musical beats or patterns using buttons or motion sensors.Dance Dance Revolution, Guitar Hero, Rock Band, Just Dance
SimulationGames that recreate realistic or fictional scenarios using physics or artificial intelligence.Flight Simulator, The Sims, RollerCoaster Tycoon, SimCity
StrategyGames that involve planning, resource management, and decision making.Chess, Civilization, StarCraft, Plants vs. Zombies
-

How can I avoid addiction to arcade games?

-

Arcade games can be very addictive, especially if you are competitive or want to achieve high scores or complete challenges. However, addiction to arcade games can have negative consequences for your health, finances, relationships, and productivity. Here are some tips on how to avoid addiction to arcade games:

-
    -
  • Set a limit: Decide how much time and money you are willing to spend on arcade games and stick to it. Use a timer or an app to track your gaming sessions and stop when you reach your limit.
  • -
  • Take breaks: Do not play arcade games for too long without taking breaks. Stretch your muscles, drink some water, eat some snacks, or do some other activities to relax and refresh yourself.
  • -
  • Balance your life: Do not let arcade games take over your life. Make sure you have other hobbies, interests, and goals that you pursue and enjoy. Spend time with your family, friends, and loved ones. Do your work, study, or chores.
  • -
  • Seek help: If you feel that you have a problem with arcade games and cannot control yourself, do not hesitate to seek professional help. Talk to a counselor, therapist, or support group that can help you overcome your addiction and improve your well-being.
  • -
-

-

This is the end of my article on arcade games. I hope you found it informative and entertaining. Thank you for reading and have a great day!

-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aussie Cricket Championship A Cutting-Edge Mobile Cricket Game with Professional Commentary.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aussie Cricket Championship A Cutting-Edge Mobile Cricket Game with Professional Commentary.md deleted file mode 100644 index de3eb6a10bc71c51a80e6ba173dfba962c635f7e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aussie Cricket Championship A Cutting-Edge Mobile Cricket Game with Professional Commentary.md +++ /dev/null @@ -1,123 +0,0 @@ - -

Aussie Cricket Championship Game Download: A Review

-

If you are a fan of cricket and want to experience the thrill of playing in the most popular cricket league in Australia, then you might want to check out Aussie Cricket Championship. This is a realistic and exciting mobile cricket game that lets you compete in various tournaments, including the Ashes, the Big Bash, The Hundred, and more. In this article, we will review the features, pros and cons, and how to download and play Aussie Cricket Championship on your device.

-

What is Aussie Cricket Championship?

-

Aussie Cricket Championship is a mobile cricket game developed by Rockit Game Studio and released in 2021. It is the official game of the 2023 Ashes series between Australia and England, as well as other domestic and international competitions. You can play as any of the licensed teams from Australia, England, West Indies, New Zealand, and Ireland, both in men's and women's cricket. You can also create your own custom teams, players, and tournaments using the community features.

-

aussie cricket championship game download


DOWNLOADhttps://bltlly.com/2uOtdB



-

Features of the game

-

Some of the features that make Aussie Cricket Championship stand out are:

-
    -
  • Cutting-edge gameplay with new bowling and fielding controls, realistic physics, and professional commentary in English and Hindi.
  • -
  • Stunning graphics with real-time ray tracing elements, detailed stadiums, and lifelike animations.
  • -
  • A deep, narrative-driven career mode where you can manage your training, press conferences, injuries, and path to international glory.
  • -
  • A variety of modes to choose from, such as quick play, challenges, practice, tournaments, beat your best, and daily rewards.
  • -
  • An accessible cricket game for beginners with tutorials, first-time user experience, and difficulty settings.
  • -
-

Pros and cons of the game

-

Like any game, Aussie Cricket Championship has its strengths and weaknesses. Here are some of them:

- - - - - - -
ProsCons
Offers a realistic and immersive cricket experience.Has some bugs and glitches that need to be fixed.
Has a lot of content and variety to keep you engaged.Has some low frame rates in cutscenes and loading times.
Supports community creations and sharing.Lacks some official licenses for teams and players.
Has a fair monetization system with no pay-to-win elements.Has too many ads that can be annoying.
-

How to download and play Aussie Cricket Championship?

-

If you are interested in playing Aussie Cricket Championship on your device, here are some things you need to know:

-

Requirements and compatibility

-

Aussie Cricket Championship is available for Android devices only. You need to have Android version 4.4 or higher to run the game. The game size is about 100 MB, so make sure you have enough storage space on your device. The game also requires an internet connection to access some features and content.

-

Steps to download and install the game

-

To download and install Aussie Cricket Championship on your device, follow these steps:

-
    -
1. Go to the Google Play Store on your device or click on this link: [Aussie Cricket Championship - Apps on Google Play]. This will redirect you to the Google Play Store page of the game.
  2. -
  3. Tap on the Install button to start downloading the game.
  4. -
  5. Wait for the download to finish and then tap on the Open button to launch the game.
  6. -
  7. Follow the on-screen instructions to set up your profile, choose your preferred settings, and start playing.
  8. -
-

Tips and tricks to enjoy the game

-

To make the most out of your gaming experience with Aussie Cricket Championship, here are some tips and tricks that you can use:

-

aussie cricket championship apk download
-aussie cricket championship for pc
-aussie cricket championship mod apk
-aussie cricket championship game online
-aussie cricket championship android game
-aussie cricket championship free download
-aussie cricket championship game review
-aussie cricket championship game loop
-aussie cricket championship latest version
-aussie cricket championship game play
-aussie cricket championship app store
-aussie cricket championship ios game
-aussie cricket championship hack apk
-aussie cricket championship game tips
-aussie cricket championship game features
-aussie cricket championship game guide
-aussie cricket championship game cheats
-aussie cricket championship game update
-aussie cricket championship game support
-aussie cricket championship game feedback
-aussie cricket championship best team
-aussie cricket championship big bash league
-aussie cricket championship t20 tournament
-aussie cricket championship realistic game
-aussie cricket championship professional commentary
-aussie cricket championship exciting animations
-aussie cricket championship real-time scorecards
-aussie cricket championship quick play mode
-aussie cricket championship challenges mode
-aussie cricket championship practice mode
-aussie cricket championship tournaments mode
-aussie cricket championship beat your best mode
-aussie cricket championship daily rewards mode
-aussie cricket championship high quality graphics
-aussie cricket championship brand new controls
-aussie cricket championship cutting-edge gameplay
-aussie cricket championship mobile game download
-aussie cricket championship complete game download
-aussie cricket championship best game download
-aussie cricket championship new game download
-aussie cricket championship free game download for android
-aussie cricket championship free game download for pc windows 10
-aussie cricket championship free game download for ios iphone ipad
-aussie cricket championship full game download for pc
-aussie cricket championship full game download for android
-aussie cricket championship full game download for ios
-aussie cricket championship offline game download
-aussie cricket championship online game download
-aussie cricket championship 3d game download

-
    -
  • Learn the basics of cricket rules, terminology, and strategies before playing. You can find some helpful resources in the game's help section or online.
  • -
  • Practice your bowling and batting skills in the practice mode. You can adjust the pitch, weather, and difficulty to suit your needs.
  • -
  • Experiment with different teams, players, and formats to find your favorite combination. You can also customize your own team with your name, logo, and jersey.
  • -
  • Use the community features to share your creations, download other players' content, and join online tournaments. You can also rate and review other players' work and give feedback.
  • -
  • Keep an eye on your daily rewards, challenges, and achievements. You can earn coins, gems, and other items that you can use to unlock new content and features.
  • -
-

Conclusion

-

Aussie Cricket Championship is a fun and realistic mobile cricket game that lets you play in various tournaments, including the 2023 Ashes series. It has many features, modes, and options to keep you entertained and challenged. It also has a community aspect that allows you to create and share your own content. However, it also has some drawbacks, such as bugs, glitches, ads, and lack of licenses. Overall, it is a game worth trying if you are a cricket fan or want to learn more about the sport.

-

Summary of the main points

-

In this article, we have reviewed the following aspects of Aussie Cricket Championship:

-
    -
  • What is Aussie Cricket Championship?
  • -
  • Features of the game
  • -
  • Pros and cons of the game
  • -
  • How to download and play Aussie Cricket Championship?
  • -
  • Tips and tricks to enjoy the game
  • -
-

Recommendations and ratings

-

We recommend Aussie Cricket Championship to anyone who loves cricket or wants to try a new mobile game. It is a well-made game that offers a lot of content and variety. It is also easy to play and accessible for beginners. However, it is not perfect and has some issues that need to be addressed. We give it a rating of 4 out of 5 stars.

-

FAQs

-

Here are some frequently asked questions about Aussie Cricket Championship:

-
    -
  1. Is Aussie Cricket Championship free to play?
  2. -

    Yes, Aussie Cricket Championship is free to download and play. However, it has some in-app purchases that you can buy with real money to enhance your gaming experience.

    -
  3. Can I play Aussie Cricket Championship offline?
  4. -

    No, Aussie Cricket Championship requires an internet connection to access some features and content. You also need an internet connection to update the game and download new content.

    -
  5. Can I play Aussie Cricket Championship on iOS devices?
  6. -

    No, Aussie Cricket Championship is only available for Android devices at the moment. There is no official information about whether it will be released for iOS devices in the future.

    -
  7. How can I contact the developers of Aussie Cricket Championship?
  8. -

    You can contact the developers of Aussie Cricket Championship by sending an email to rockitgamestudio@gmail.com or by visiting their website at [Rockit Game Studio]. You can also follow them on Facebook, Twitter, Instagram, and YouTube for updates and news.

    -
  9. How can I report a bug or a problem with Aussie Cricket Championship?
  10. -

    You can report a bug or a problem with Aussie Cricket Championship by using the feedback option in the game's settings menu. You can also send an email to rockitgamestudio@gmail.com or leave a review on the Google Play Store.

    -

-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackmagic LiveKey How to Create Stunning Live Video Effects with DeckLink Cards.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackmagic LiveKey How to Create Stunning Live Video Effects with DeckLink Cards.md deleted file mode 100644 index 396da33880e24b16eef85230eba625ff47608029..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackmagic LiveKey How to Create Stunning Live Video Effects with DeckLink Cards.md +++ /dev/null @@ -1,190 +0,0 @@ -
-

What is Blackmagic LiveKey and How to Download It

-

If you are looking for a powerful and easy-to-use tool for live video production, you might want to check out Blackmagic LiveKey. Blackmagic LiveKey is an application that lets you insert logos, graphics, titles and animations in real time using SDI outputs from any DeckLink capture card. You can also use it for live chroma keying, luma keying, linear keying or pattern keying with any video source. Whether you are producing a live event, a webcast, a broadcast or a presentation, Blackmagic LiveKey can help you create professional-looking videos with minimal effort.

-

Blackmagic LiveKey is included with every DeckLink card as part of the Blackmagic Desktop Video software package. You can download it for free from the Blackmagic Design website. You can also find the latest software updates, support notes, instruction manuals and tutorials there. The software is compatible with Windows 10 64-bit and macOS 10.15 Catalina or later.

-

blackmagic livekey download


Download 🔗 https://bltlly.com/2uOn7U



-


Why Use Blackmagic LiveKey for Live Video Production

-

Blackmagic LiveKey offers many advantages for live video production. Here are some of the reasons to use it:

-
    -
  • It is easy to use. You don't need any special skills or training to use Blackmagic LiveKey. You just need to connect your DeckLink card to your computer and your video sources, launch the software, and start keying and inserting graphics in real time.
  • -
  • It is versatile. You can use Blackmagic LiveKey for any type of live video production, such as sports, news, entertainment, education, corporate, or religious events. You can also use it for any type of video source, such as cameras, computers, video players, or graphics generators.
  • -
  • It is powerful. You can use Blackmagic LiveKey to create high-quality videos with up to 4K resolution and 60 frames per second. You can also use it to key and insert graphics with up to 16 channels of embedded audio. You can also use it to stream and record your keyed output to various platforms and formats.
  • -
  • It is affordable. You don't need to buy any expensive hardware or software to use Blackmagic LiveKey. You just need a DeckLink card, which starts from $145 USD, and a computer with enough processing power and storage space. You can also download the software for free from the Blackmagic Design website.
  • -
-

How to Install and Set Up Blackmagic LiveKey

-

Before you can use Blackmagic LiveKey, you need to install and set up the software on your computer. Here are the steps you need to follow:

-

How to Connect Your DeckLink Card to Your Computer

-

The first step is to connect your DeckLink card to your computer using either PCI Express or Thunderbolt. PCI Express is a high-speed interface that connects directly to the motherboard of your computer. Thunderbolt is a high-speed interface that connects externally to your computer using a cable. Depending on the model of your DeckLink card, you may have one or both options available.

-

To connect your DeckLink card using PCI Express, you need to open your computer case and insert the card into an available PCI Express slot on the motherboard. Make sure you secure the card with a screw and close the computer case. Then, connect your power cord and turn on your computer.

-

To connect your DeckLink card using Thunderbolt, you need to plug one end of the Thunderbolt cable into the Thunderbolt port on the card and the other end into the Thunderbolt port on your computer. Then, turn on your computer.

-

How to Configure Your Video Input and Output Settings

-

The next step is to configure your video input and output settings using the Blackmagic Desktop Video Setup utility. This utility lets you choose your video format, frame rate, resolution and audio settings for each input and output of your DeckLink card.

-

To launch the Blackmagic Desktop Video Setup utility, go to Start > All Programs > Blackmagic Design > Desktop Video > Desktop Video Setup on Windows or Applications > Blackmagic Desktop Video > Desktop Video Setup on Mac. You will see a window with a list of inputs and outputs on the left side and a panel with settings on the right side.

-


-

To configure your video input settings, select the input that you want to use from the list on the left side. For example, if you want to use SDI input 1, select SDI 1 from the list. Then, choose your video format from the drop-down menu on the right side. For example, if you want to use 1080p60 as your video format, select HD 1080p60 from the menu. You can also choose your audio settings from the same panel.

-

To configure your video output settings, select the output that you want to use from the list on the left side. For example, if you want to use SDI output 1, select SDI 1 from the list. Then, choose your video format from the drop-down menu on the right side. For example, if you want to use 1080p60 as your video format, select HD 1080p60 from the menu. You can also choose your audio settings from the same panel.

-

After you have configured your video input and output settings, click Save Settings at the bottom of the window.

-

How to Launch Blackmagic LiveKey and Access Its Interface

-

The final step is to launch Blackmagic LiveKey and access its interface. This is where you can start keying and inserting graphics in real time.

-

To launch Blackmagic LiveKey, go to Start > All Programs > Blackmagic Design > LiveKey on Windows or Applications > Blackmagic LiveKey on Mac. You will see a window with four main sections: Key Source, Fill Source, Output and Preview.

-

The Key Source section lets you select your key source from SDI or HDMI inputs and adjust your key settings such as chroma, luma, linear or pattern keys. The Fill Source section lets you select your fill source from SDI or HDMI inputs or from a still image file and adjust your fill settings such as opacity, position, size and rotation. The Output section lets you select your output from SDI or HDMI outputs and monitor your keyed output on an external monitor. The Preview section lets you preview your keyed output on your computer screen.

-

You can also access the menu bar at the top of the window, where you can find options such as File, Edit, View, Window and Help. You can use these options to perform actions such as saving and loading presets, undoing and redoing changes, zooming in and out, switching between windows and getting help.

-

How to Use Blackmagic LiveKey for Live Keying and Graphics

-

Now that you have installed and set up Blackmagic LiveKey, you can start using it for live keying and graphics insertion. Here are the steps you need to follow:

-

How to Select Your Key Source and Adjust Your Key Settings

-

The first step is to select your key source and adjust your key settings. Your key source is the video that you want to remove the background from and replace it with another video or image. Your key settings are the parameters that determine how the background is removed and how the edges are blended.

-

To select your key source, go to the Key Source section on the left side of the window. You will see two options: SDI Input and HDMI Input. Choose the option that matches the input that you have connected your key source to. For example, if you have connected your key source to SDI input 1, choose SDI Input.

-

To adjust your key settings, go to the Key Settings panel below the Key Source section. You will see four tabs: Chroma Key, Luma Key, Linear Key and Pattern Key. Choose the tab that matches the type of key that you want to use. For example, if you want to use a chroma key, which is a key based on color, choose Chroma Key.

-

Depending on the type of key that you choose, you will see different settings that you can adjust. For example, if you choose Chroma Key, you will see settings such as Key Color, Tolerance, Softness and Spill Suppression. You can use these settings to fine-tune your key by adjusting the color range, edge smoothness and color spill of your key source.
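
If it helps to see what these sliders are doing conceptually, here is a minimal chroma-key sketch in Python with NumPy. It only illustrates the general technique and is not LiveKey's actual implementation; the function and parameter names (key_frame, fill_frame, key_color, tolerance, softness) are hypothetical stand-ins for the Key Color, Tolerance and Softness settings described above.

```python
# Illustrative chroma key: a sketch of the technique, not LiveKey's code.
# key_frame / fill_frame are H x W x 3 float arrays with values in [0, 1].
import numpy as np

def chroma_key(key_frame, fill_frame, key_color, tolerance=0.15, softness=0.05):
    # Distance of every pixel from the key colour (e.g. green-screen green).
    distance = np.linalg.norm(key_frame - np.asarray(key_color, dtype=float), axis=-1)
    # Pixels within `tolerance` of the key colour become transparent (alpha 0);
    # `softness` feathers the edge instead of cutting it off hard.
    alpha = np.clip((distance - tolerance) / max(softness, 1e-6), 0.0, 1.0)[..., None]
    # Keep the key source where alpha is 1, show the fill source where it is 0.
    return alpha * key_frame + (1.0 - alpha) * fill_frame

# Example: composite a camera frame over a background image.
# result = chroma_key(camera_frame, background_frame, key_color=(0.0, 0.8, 0.2))
```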

-

To adjust a setting, you can either drag the slider or enter a value in the box next to it. You can also use the eyedropper tool to pick a color from your key source by clicking on it. You can also use the invert button to invert your key by clicking on it.

-

As you adjust your key settings, you will see the result in the Preview section on the right side of the window. You can also see the result in the Output section if you have connected an external monitor to your output.

-

How to Select Your Fill Source and Adjust Your Fill Settings

-

The next step is to select your fill source and adjust your fill settings. Your fill source is the video or image that you want to replace the background of your key source with. Your fill settings are the parameters that determine how the fill source is blended with the key source.

-

To select your fill source, go to the Fill Source section on the left side of the window. You will see three options: SDI Input, HDMI Input and Still Image. Choose the option that matches the source that you want to use as your fill source. For example, if you want to use a still image as your fill source, choose Still Image.

-

To adjust your fill settings, go to the Fill Settings panel below the Fill Source section. You will see settings such as Opacity, Position, Size and Rotation. You can use these settings to fine-tune your fill source by adjusting its transparency, location, dimension and angle.

-

To adjust a setting, you can either drag the slider or enter a value in the box next to it. You can also use the buttons to reset, center or flip your fill source by clicking on them.

-

As you adjust your fill settings, you will see the result in the Preview section on the right side of the window. You can also see the result in the Output section if you have connected an external monitor to your output.

-

How to Preview and Monitor Your Keyed Output

-

The final step is to preview and monitor your keyed output. Your keyed output is the video that results from blending your key source and your fill source. You can preview and monitor your keyed output on your computer screen or on an external monitor.

-

To preview your keyed output on your computer screen, go to the Preview section on the right side of the window. You will see a video window that shows your keyed output. You can also see a toolbar below the video window that lets you control the playback of your keyed output. You can use the buttons to play, pause, stop, rewind or fast forward your keyed output by clicking on them. You can also use the slider to scrub through your keyed output by dragging it.

-

To monitor your keyed output on an external monitor, go to the Output section on the bottom of the window. You will see two options: SDI Output and HDMI Output. Choose the option that matches the output that you have connected your external monitor to. For example, if you have connected your external monitor to SDI output 1, choose SDI Output.

-

How to Use Blackmagic LiveKey for Live Streaming and Recording

-

In addition to live keying and graphics insertion, you can also use Blackmagic LiveKey for live streaming and recording your keyed output. This way, you can share your live video production with a wider audience or save it for later use. Here are the steps you need to follow:

-

How to Stream Your Keyed Output to a Streaming Platform

-

The first step is to stream your keyed output to a streaming platform such as YouTube, Facebook or Twitch. This way, you can broadcast your live video production to anyone who has access to the internet.

-

To stream your keyed output to a streaming platform, you need to use third-party software such as OBS Studio, Wirecast or vMix. These programs let you capture your keyed output from Blackmagic LiveKey and encode it into a streaming format such as RTMP or RTSP. They also let you connect to a streaming platform and send your encoded stream to it.

-

To stream your keyed output using OBS Studio, for example, you need to follow these steps:

-
    -
  1. Download and install OBS Studio from https://obsproject.com/.
  2. -
  3. Launch OBS Studio and go to Settings > Stream.
  4. -
  5. Select your streaming platform from the Service drop-down menu and enter your stream key in the Stream Key box. You can find your stream key from your streaming platform's dashboard.
  6. -
  7. Go back to the main window and click on the + button under Sources.
  8. -
  9. Select Video Capture Device from the list and click OK.
  10. -
  11. Select Blackmagic Device from the Device drop-down menu and click OK.
  12. -
  13. Adjust the properties of your video capture device such as resolution, frame rate and audio settings if needed.
  14. -
  15. Click on Start Streaming at the bottom right corner of the window.
  16. -
-

You can now stream your keyed output from Blackmagic LiveKey to your streaming platform using OBS Studio.
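
If you would rather script this step than use a GUI encoder, the same capture, encode and push workflow can be sketched with ffmpeg. This is a hypothetical example, not part of Blackmagic LiveKey: it assumes an ffmpeg build with DeckLink input support, and the device name, bitrates and RTMP URL with stream key are placeholders you would replace with your own values.

```python
# Hypothetical sketch: push a DeckLink capture to an RTMP server with ffmpeg.
# Assumes ffmpeg was built with DeckLink support; the device name and the
# stream URL/key below are placeholders, not real values.
import subprocess

DEVICE = "DeckLink Mini Recorder"  # list available names with: ffmpeg -sources decklink
RTMP_URL = "rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY"

subprocess.run([
    "ffmpeg",
    "-f", "decklink", "-i", DEVICE,        # capture the keyed SDI output
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "6000k",
    "-c:a", "aac", "-b:a", "160k",
    "-f", "flv", RTMP_URL,                 # RTMP streams use an FLV container
], check=True)
```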

-

How to Record Your Keyed Output to a File

-

The next step is to record your keyed output to a file on your computer or an external drive. This way, you can save your live video production for later use or editing.

-

To record your keyed output to a file, you need to use the Blackmagic Media Express software. This software lets you capture your keyed output from Blackmagic LiveKey and save it to a file in a format such as AVI or QuickTime. You can also use it to play back and manage your recorded files.

-

To record your keyed output using Blackmagic Media Express, you need to follow these steps:

-
    -
  1. Download and install Blackmagic Media Express from the Blackmagic Design website.
  2. -
  3. Launch Blackmagic Media Express and go to Preferences > Capture.
  4. -
  5. Select your DeckLink card from the Video Device drop-down menu and choose your video format from the Video Format drop-down menu. Make sure they match the settings that you have configured in Blackmagic Desktop Video Setup.
  6. -
  7. Go back to the main window and click on the Capture tab at the top of the window.
  8. -
  9. Enter a file name in the File Name box and choose a file format from the File Format drop-down menu. You can also choose a destination folder for your file by clicking on the Browse button.
  10. -
  11. Click on the Record button at the bottom of the window to start recording your keyed output.
  12. -
  13. Click on the Stop button at the bottom of the window to stop recording your keyed output.
  14. -
-

You can now record your keyed output from Blackmagic LiveKey to a file using Blackmagic Media Express.
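
If you want to script recording as well, the same hypothetical ffmpeg approach from the streaming sketch can write the DeckLink capture to a QuickTime file instead of an RTMP server. Again, the device name, codec choice and output path are assumptions, not something prescribed by Blackmagic Media Express.

```python
# Hypothetical sketch: record the DeckLink capture to a ProRes QuickTime file.
# Device name, codec and output path are placeholder assumptions.
import subprocess

subprocess.run([
    "ffmpeg",
    "-f", "decklink", "-i", "DeckLink Mini Recorder",
    "-c:v", "prores_ks", "-profile:v", "3",   # ProRes 422 HQ
    "-c:a", "pcm_s16le",                      # uncompressed PCM audio
    "keyed_output.mov",
], check=True)
```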

-

How to Manage Your Streaming and Recording Settings

-

The final step is to manage your streaming and recording settings using the Blackmagic LiveKey software. These settings let you control aspects such as bitrate, quality, duration and file name of your streaming and recording.

-

To manage your streaming and recording settings, go to the menu bar at the top of the window and click on Edit > Preferences. You will see a window with two tabs: Streaming and Recording. Choose the tab that matches the option that you want to manage.

-

If you choose Streaming, you will see settings such as Protocol, Server URL, Stream Key, Bitrate and Quality. You can use these settings to configure your streaming parameters such as RTMP or RTSP protocol, server URL, stream key, bitrate and quality. You can also use the Test button to test your streaming connection by clicking on it.

-

If you choose Recording, you will see settings such as File Format, File Name, Duration and Quality. You can use these settings to configure your recording parameters such as AVI or QuickTime format, file name, duration and quality. You can also use the Browse button to choose a destination folder for your file by clicking on it.

-

After you have managed your streaming and recording settings, click OK at the bottom of the window.

-

Tips and Tricks for Using Blackmagic LiveKey Effectively

-

Blackmagic LiveKey is software that can help you create amazing live video productions with ease. However, there are some tips and tricks that can help you use it more effectively. Here are some of them:

-
    -
  • Use keyboard shortcuts. You can use keyboard shortcuts to perform common actions such as playing, pausing, stopping, rewinding or fast forwarding your keyed output. You can also use keyboard shortcuts to switch between windows or tabs. You can find a list of keyboard shortcuts in the Help menu or in the instruction manual.
  • -
  • Use presets. You can use presets to save and load your key settings and fill settings for different scenarios. You can create presets by clicking on the Save Preset button in the Key Settings panel or in the Fill Settings panel. You can load presets by clicking on the Load Preset button in the same panels.
  • -
  • Use automation. You can use automation to trigger actions such as keying, filling, streaming or recording based on events such as time, date or GPI signals. You can create automation scripts by clicking on the Edit Automation button in the menu bar. You can run automation scripts by clicking on the Run Automation button in the same menu bar.
  • -
  • Use troubleshooting. You can use troubleshooting to solve common problems such as no video input, no video output, poor key quality or poor stream quality. You can find troubleshooting tips in the Help menu or in the support notes.
  • -
-

Conclusion

-

Blackmagic LiveKey is software that lets you insert logos, graphics, titles and animations in real time using SDI outputs from any DeckLink capture card. You can also use it for live chroma keying, luma keying, linear keying or pattern keying with any video source. Whether you are producing a live event, a webcast, a broadcast or a presentation, Blackmagic LiveKey can help you create professional-looking videos with minimal effort.

-

In this article, we have explained what Blackmagic LiveKey is and how to download it. We have also shown you how to install and set up the software, how to use it for live keying and graphics insertion, how to use it for live streaming and recording, and how to use it effectively. We hope that this article has been helpful and informative for you.

-

If you want to learn more about Blackmagic LiveKey or other Blackmagic Design products, you can visit their website or their YouTube channel. You can also contact their support team if you have any questions or issues.

-

Thank you for reading this article and happy live video production!

-

FAQs

-

Here are some frequently asked questions about Blackmagic LiveKey:

-
    -
  1. What are the system requirements for Blackmagic LiveKey?
  2. -

    The system requirements for Blackmagic LiveKey are as follows:

    -
      -
    • A DeckLink card with SDI outputs
    • -
    • A computer with Windows 10 64-bit or macOS 10.15 Catalina or later
    • -
    • A PCI Express or Thunderbolt interface for connecting the DeckLink card to the computer
    • -
    • At least 8 GB of RAM and 256 GB of SSD storage space
    • -
    • A fast internet connection for streaming
    • -
    -
  3. What are the supported video formats and resolutions for Blackmagic LiveKey?
  4. -

    The supported video formats and resolutions for Blackmagic LiveKey are as follows:

| Video Format | Resolution | Frame Rate |
| --- | --- | --- |
| NTSC | 720 x 486 | 29.97 fps |
| PAL | 720 x 576 | 25 fps |
| HD 720p | 1280 x 720 | 23.98, 24, 25, 29.97, 30, 50, 59.94, 60 fps |
| HD 1080i | 1920 x 1080 | 50, 59.94, 60 fps |
| HD 1080p | 1920 x 1080 | 23.98, 24, 25, 29.97, 30, 50, 59.94, 60 fps |
| 2K DCI p | 2048 x 1080 | 23.98, 24, 25 fps |
| 4K DCI p | 4096 x 2160 | 23.98, 24, 25 fps |
| 4K UHD p | 3840 x 2160 | 23.98, 24, 25, 29.97, 30 fps |
| 8K DCI p | 8192 x 4320 | 23.98, 24 fps (DeckLink Extreme only) |
| 8K UHD p | 7680 x 4320 | 23.98, 24 fps (DeckLink Extreme only) |
    -
  5. What are the supported file formats and codecs for Blackmagic LiveKey?
  6. -

    The supported file formats and codecs for Blackmagic LiveKey are as follows:

| File Format | Codec |
| --- | --- |
| AVI | Uncompressed, Motion JPEG, DV, HDV, DNxHD, DNxHR, ProRes |
| QuickTime | Uncompressed, Motion JPEG, DV, HDV, DNxHD, DNxHR, ProRes, H.264 |
    -
  7. How can I update the software and firmware of Blackmagic LiveKey?
  8. -

    You can update the software and firmware of Blackmagic LiveKey by downloading and installing the latest version of Blackmagic Desktop Video from the Blackmagic Design website. The software update will automatically update the firmware of your DeckLink card as well.

    -
  9. How can I get help and support for Blackmagic LiveKey?
  10. -

    You can get help and support for Blackmagic LiveKey by visiting the Blackmagic Design website or contacting the support team. You can also find helpful resources such as instruction manuals, tutorials, support notes and forums on the website.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Burger ryi v Kotletinin Lzztli Resepti. Evd Fast Food uD83CuDF54 Hamburger Hazrlanmas.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Burger ryi v Kotletinin Lzztli Resepti. Evd Fast Food uD83CuDF54 Hamburger Hazrlanmas.md deleted file mode 100644 index ec1d967d6dccea75d6fe64dae883da503f3cf25f..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Burger ryi v Kotletinin Lzztli Resepti. Evd Fast Food uD83CuDF54 Hamburger Hazrlanmas.md +++ /dev/null @@ -1,96 +0,0 @@ -
-

How to make burger buns at home

Do you love burgers? If so, you know that a good burger bun can make or break your burger experience. A burger bun is not just a piece of bread that holds your patty and toppings together. It is also a part of the flavor and texture of your burger. It should be soft and fluffy on the inside, golden and crisp on the outside, and slightly sweet and buttery in taste.

But did you know that you can make your own burger buns at home? It is not as hard as you might think. All you need are some simple ingredients that you probably already have in your pantry and some basic baking skills. Making your own burger buns at home has many benefits. You can control the quality and freshness of your ingredients. You can customize the size and shape of your buns. You can also add your own twist to your buns by using different flours or seeds.

-

burger coreyi resepti


Download File ☆☆☆☆☆ https://bltlly.com/2uOp4W



In this article, we will show you how to make burger buns at home in easy steps. We will also share some tips and tricks for making perfect burger buns every time. And we will give you some ideas on how to use your homemade burger buns for delicious burgers and more.


Ingredients for burger buns

To make about 8 medium-sized burger buns at home, you will need the following ingredients:

  • 3 1/4 cups (400 g) of all-purpose flour
  • 2 teaspoons (7 g) of active dry yeast
  • 1/4 cup (50 g) of granulated sugar
  • 1 teaspoon (5 g) of salt
  • 1/4 cup (60 g) of unsalted butter
  • 3/4 cup (180 ml) of warm milk
  • 1 large egg
  • Sesame seeds (optional)

Steps for making burger buns

-

Now that you have all the ingredients ready, let's start making the burger buns. Here are the steps you need to follow:

-
    -
  1. In the bowl of a stand mixer fitted with the dough hook, combine the warm milk and yeast and let it sit for 5 minutes until foamy. This will activate the yeast and help your dough rise.
  2. -
  3. Add the sugar, egg, butter, salt, and 3 cups of flour to the yeast mixture and mix on low speed until a soft and sticky dough forms. You may need to add more flour gradually if the dough is too wet.
  4. -
  5. Knead the dough on medium speed for about 10 minutes until it is smooth and elastic. You can also knead the dough by hand on a lightly floured surface if you don't have a stand mixer.
  6. -
  7. Place the dough in a large greased bowl and cover it with a damp cloth or plastic wrap. Let it rise in a warm place for about an hour or until doubled in size.
  8. -
  9. Punch down the dough and divide it into 8 equal pieces. Roll each piece into a smooth ball and place them on a baking sheet lined with parchment paper or sprayed with cooking spray. Leave some space between them to allow for expansion.
  10. -
  11. Flatten each ball slightly with your palm and cover them loosely with a cloth or plastic wrap. Let them rise again for another 30 to 45 minutes or until puffy.
  12. -
  13. Preheat your oven to 375°F (190°C) and brush the tops of the buns with some melted butter. This will give them a nice golden crust and a buttery flavor.
  14. -
  15. If you want to add some sesame seeds or other seeds to your buns, whisk an egg white with some water and brush it over the buttered buns. Then sprinkle the seeds evenly over the buns.
  16. -
  17. Bake the buns for 15 to 18 minutes or until golden brown. Transfer them to a wire rack and let them cool completely before slicing or serving.
  18. -

Tips and tricks for perfect burger buns

-

Making burger buns at home is not difficult, but there are some tips and tricks that can help you achieve the best results. Here are some of them:

-
    -
  • Use warm milk, not hot or cold, to activate the yeast. The ideal temperature is between 105°F and 115°F (40°C and 46°C). You can use a thermometer to check the temperature, or test it with your finger. It should feel warm but not scalding.
  • -
  • Proof the yeast before adding it to the flour. This means letting it sit in the warm milk with some sugar for a few minutes until it becomes foamy and bubbly. This will ensure that your yeast is alive and active, and will help your dough rise better.
  • -
  • Grease the baking sheet or line it with parchment paper to prevent the buns from sticking. You can also sprinkle some cornmeal or semolina on the sheet for extra crunch and flavor.
  • -
  • Check the doneness of the buns by tapping them lightly on the bottom. They should sound hollow when they are done. You can also insert a thermometer into the center of a bun. It should read 190°F (88°C) when they are fully baked.
  • -
  • Cool the buns on a wire rack before slicing or serving. This will prevent them from becoming soggy or mushy from the steam.
  • -

How to store and freeze burger buns

-

One of the advantages of making your own burger buns at home is that you can make a large batch and store or freeze them for later use. This way, you will always have fresh and homemade buns ready for your burgers. Here are some tips on how to store and freeze burger buns:

-
    -
  • To store burger buns at room temperature, wrap them in plastic wrap or foil and keep them in an airtight container or freezer bag. They will last for about 5 to 7 days, depending on the humidity and temperature.
  • -
  • To freeze burger buns, wrap each bun individually in plastic wrap and place them in a freezer-safe bag. Squeeze out as much air as possible and label the bag with the date and "Hamburger Buns". Freeze for up to 3 months.
  • -
  • To thaw burger buns, you can either leave them on the counter overnight or microwave them for a few seconds. You can also place them in a colander over a steaming bowl of water and cover with a towel to steam them.
  • -
  • To reheat burger buns, you can either toast them lightly in the oven or on the grill, or wrap them in a damp paper towel and microwave them for 10 to 15 seconds.
  • -

How to use burger buns

-

Now that you have learned how to make, store, and freeze burger buns, you might be wondering how to use them for your burgers and more. Here are some ideas on how to use your homemade burger buns:

-

-
    -
  • Make different types of burgers with your favorite patties and toppings. You can use beef, chicken, turkey, pork, lamb, or veggie patties. You can also add cheese, lettuce, tomato, onion, pickle, bacon, avocado, mushroom, or any other topping you like. Don't forget to add some sauce, such as ketchup, mustard, mayo, barbecue, or aioli.
  • -
  • Make sliders or mini burgers with smaller buns and patties. These are great for parties, snacks, or appetizers. You can also make them with different fillings, such as pulled pork, chicken salad, tuna salad, or ham and cheese.
  • -
  • Make breakfast sandwiches with your burger buns. You can use eggs, bacon, sausage, cheese, ham, or any other breakfast ingredient you like. You can also toast your buns and spread some butter, jam, peanut butter, or Nutella on them.
  • -
  • Make sandwiches or wraps with your burger buns. You can use any kind of meat, cheese, veggie, or salad you like. You can also cut your buns in half and use them as bread for your sandwiches or wraps.
  • -
  • Make croutons or bread crumbs with your leftover burger buns. You can cut them into small pieces and toast them in the oven with some oil and seasonings. Then you can use them for salads, soups, casseroles, or stuffing.
  • -

Conclusion

-

As you can see, making burger buns at home is not only easy, but also fun and rewarding. You can enjoy fresh and delicious burger buns anytime you want, and you can also customize them to your liking. Plus, you can save money and avoid preservatives and additives that are often found in store-bought buns.

-

So what are you waiting for? Grab your ingredients and start making your own burger buns today. You will be amazed by how good they taste and how proud you will feel. And don't forget to share your creations with your family and friends. They will love them too!

-

FAQs

-

Here are some frequently asked questions about burger buns and their answers:

-
    -
  1. Can I use whole wheat flour or gluten-free flour to make burger buns?
    Yes, you can use whole wheat flour or gluten-free flour to make burger buns, but you may need to adjust the amount of liquid and yeast accordingly. Whole wheat flour tends to absorb more liquid and gluten-free flour tends to rise less than all-purpose flour. You may also need to add some vital wheat gluten or xanthan gum to improve the texture and elasticity of the dough.
  2. -
  3. Can I add other flavors or ingredients to my burger buns?
    Yes, you can add other flavors or ingredients to your burger buns, such as herbs, spices, cheese, nuts, dried fruits, etc. You can either mix them into the dough or sprinkle them on top of the buns before baking. Just make sure not to add too much or too heavy ingredients that might weigh down the dough or affect the rising.
  4. -
  5. Can I make burger buns without yeast?
    Yes, you can make burger buns without yeast, but they will have a different texture and flavor than yeast-based buns. You can use baking powder or baking soda as leavening agents instead of yeast, and you can skip the rising time. However, the buns will be denser and more biscuit-like than fluffy and bread-like.
  6. -
  7. Can I make burger buns in a bread machine?
    Yes, you can make burger buns in a bread machine, but you will still need to shape and bake them in the oven. You can use the dough cycle of your bread machine to mix and knead the dough, then follow the rest of the steps as usual.
  8. -
  9. Can I make burger buns ahead of time?
    Yes, you can make burger buns ahead of time, either by storing them at room temperature or freezing them for later use. You can also refrigerate the dough overnight and bake it the next day. Just make sure to bring the dough or the buns to room temperature before baking or reheating.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Bel-Air S01E05 Wills Loyalty to Philly Tested.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Bel-Air S01E05 Wills Loyalty to Philly Tested.md deleted file mode 100644 index 47355fa9b4082247a6ebca6c77d5e5bf7e21e3a9..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Bel-Air S01E05 Wills Loyalty to Philly Tested.md +++ /dev/null @@ -1,170 +0,0 @@ -
-

How to Download Bel-Air Season 1 Episode 5 Legally and Safely

-

Bel-Air is a drama television series that reimagines the beloved sitcom The Fresh Prince of Bel-Air through a new, dramatic take on Will's complicated journey from the streets of West Philadelphia to the gated mansions of Bel-Air. The series premiered on Peacock on February 13, 2022, and has received positive reviews from critics and audiences alike.

-

bel-air season 1 episode 5 download


DOWNLOAD > https://bltlly.com/2uOh48



-

In season 1 episode 5, titled "PA to LA", Will's friend Tray visits him in Bel-Air, and his loyalty to Philly is put to the test. Meanwhile, the Banks family supports Hilary by hosting an influencer event. If you missed this episode or want to watch it again, you might be wondering how to download it legally and safely. In this article, we will show you three options to do so, along with their pros and cons.

-

Option 1: Netflix

-

Netflix is one of the most popular streaming services in the world, offering a huge catalog of movies, TV shows, documentaries, and original content. Netflix also allows you to download some of its titles for offline viewing on your phone, tablet, or PC.

-

Pros and Cons of Netflix

| Pros | Cons |
| --- | --- |
| No ads when streaming | More expensive than other services |
| Tons of original, high-quality content | Content changes with little to no notification |
| Mobile downloads available | Downloads expire after a certain period or after playback |
| Free mobile games | Downloads are limited to two devices at a time |
-

How to Download Bel-Air Season 1 Episode 5 from Netflix

-
    -
  1. Subscribe to a Netflix plan that suits your needs and budget.
  2. -
  3. Download and install the Netflix app on your device.
  4. -
  5. Launch the app and log in with your credentials.
  6. -
  7. Search for "Bel-Air" and select the show from the results.
  8. -
  9. Tap on the download icon next to season 1 episode 5.
  10. -
  11. Wait for the download to finish and enjoy watching it offline.
  12. -
-

Option 2: Hulu

-

Hulu is another popular streaming service that focuses on TV shows over movies. It also offers live TV channels, sports, news, and original programming. With the No Ads subscription, you can download TV shows for offline viewing on your mobile device.

-

Pros and Cons of Hulu

| Pros | Cons |
| --- | --- |
| No ads with No Ads plan | Ads with Basic plan |
| Live TV option available | |
| Wide range of TV shows | Content varies by region |
| Mobile downloads available | Downloads expire after 30 days or 48 hours after playback |
-

How to Download Bel-Air Season 1 Episode 5 from Hulu

-
    -
  1. Subscribe to a Hulu plan that includes downloads, such as No Ads or Live TV.
  2. -
  3. Download and install the Hulu app on your device.
  4. -
  5. Launch the app and log in with your credentials.
  6. -
  7. Search for "Bel-Air" and select the show from the results.
  8. -
  9. Tap on the download icon next to season 1 episode 5.
  10. -
  11. Wait for the download to finish and enjoy watching it offline.
  12. -
-

Option 3: Amazon Prime Video

-

Amazon Prime Video is a streaming service that offers movies, TV shows, documentaries, and original content. It is included with an Amazon Prime membership, which also gives you access to free shipping, music, books, and more. You can download titles from Prime Video for offline viewing on your mobile device or PC.

-

Pros and Cons of Amazon Prime Video

| Pros | Cons |
| --- | --- |
| No ads when streaming | Some content requires additional purchase or rental |
| Lots of original, award-winning content | User interface is not very intuitive or user-friendly |
| Mobile and PC downloads available | Downloads expire after a certain period or after playback |
| Other benefits of Amazon Prime membership | |
-

How to Download Bel-Air Season 1 Episode 5 from Amazon Prime Video

-
    -
  1. Sign up for an Amazon Prime membership or a Prime Video subscription.
  2. -
  3. Download and install the Prime Video app on your mobile device, or the Prime Video app for Windows on your PC.
  4. -
  5. Launch the app and log in with your credentials.
  6. -
  7. Search for "Bel-Air" and select the show from the results.
  8. -
  9. Tap on the download icon next to season 1 episode 5.
  10. -
  11. Wait for the download to finish and enjoy watching it offline.
  12. -
-

Conclusion

-

In this article, we have shown you three options to download Bel-Air season 1 episode 5 legally and safely. Each option has its own pros and cons, so you should choose the one that best suits your preferences and budget. However, if we had to recommend one, we would go with Netflix, as it offers the most benefits and features for a reasonable price. Netflix also has a large library of original and exclusive content, including other shows like Stranger Things, The Witcher, and The Crown.

-


-

Whichever option you choose, we hope you enjoy watching Bel-Air season 1 episode 5 offline. Bel-Air is a captivating and compelling series that explores the themes of identity, family, culture, and class. It also pays homage to the original sitcom that inspired it, while adding its own twist and flair.

-

FAQs

-

What is the release date of Bel-Air season 2?

-

There is no official announcement yet about the release date of Bel-Air season 2. However, given that season 1 premiered in February 2022, we can expect season 2 to arrive sometime in early 2023.

-

How many episodes are there in Bel-Air season 1?

-

Bel-Air season 1 consists of 10 episodes, each lasting about an hour. The episodes are released weekly on Peacock every Sunday. The season finale is expected to air on April 17, 2022.

-

Is Bel-Air a remake of The Fresh Prince of Bel-Air?

-

No, Bel-Air is not a remake of The Fresh Prince of Bel-Air. It is a reimagining of the classic sitcom that takes a more dramatic and realistic approach to Will's story. It is based on a viral fan-made trailer by Morgan Cooper that caught the attention of Will Smith, who serves as an executive producer of the series.

-

Who are the main cast members of Bel-Air?

-

The main cast members of Bel-Air are:

    -
  • Jabari Banks as Will Smith
  • -
  • Adrian Holmes as Philip Banks
  • -
  • Cassandra Freeman as Vivian Banks
  • -
  • Olly Sholotan as Carlton Banks
  • -
  • Coco Jones as Hilary Banks
  • -
  • Akira Akbar as Ashley Banks
  • -
  • Jimmy Akingbola as Geoffrey Butler
  • -
  • Jordan L. Jones as Jazz
  • -
  • Simone Joy Jones as Lisa Wilkes
  • -

-

Where can I watch the trailer of Bel-Air?

-

You can watch the official trailer of Bel-Air on YouTube or on Peacock's website. The trailer gives you a glimpse of what to expect from the series, such as Will's arrival in Bel-Air, his clashes with Carlton and Uncle Phil, his romance with Lisa, and his friendship with Jazz.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/src/lib/shareConversation.ts b/spaces/timpal0l/chat-ui/src/lib/shareConversation.ts deleted file mode 100644 index dfe28ae5d34bbf662b4dc4e294bcf93cd445ad7c..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/lib/shareConversation.ts +++ /dev/null @@ -1,34 +0,0 @@ -import { base } from "$app/paths"; -import { ERROR_MESSAGES, error } from "$lib/stores/errors"; - -export async function shareConversation(id: string, title: string) { - try { - const res = await fetch(`${base}/conversation/${id}/share`, { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - }); - - if (!res.ok) { - error.set("Error while sharing conversation, try again."); - console.error("Error while sharing conversation: " + (await res.text())); - return; - } - - const { url } = await res.json(); - - if (navigator.share) { - navigator.share({ - title, - text: "Share this chat with others", - url, - }); - } else { - prompt("Copy this public url to share:", url); - } - } catch (err) { - error.set(ERROR_MESSAGES.default); - console.error(err); - } -} diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Artcam 2012 Crack !FULL!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Artcam 2012 Crack !FULL!.md deleted file mode 100644 index d5f7eb481f20ebea4eafba822a3a94ef4a8cec1c..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Artcam 2012 Crack !FULL!.md +++ /dev/null @@ -1,36 +0,0 @@ -
-Here is a possible title and article with HTML formatting for the keyword "Artcam 2012 Crack": - -

How to Download and Install Artcam 2012 Crack for Free

-

Artcam 2012 is powerful software for creating 3D models and designs for CNC machines. It allows you to use scanned images, vector graphics, solid models and CAD files as input, and to generate realistic and stunning outputs in STL format. Artcam 2012 also offers advanced tools and flexible machining strategies to meet your needs.

-

However, Artcam 2012 is not a free software, and you need a license key to activate it. If you don't have one, you might be tempted to look for a crack version online. But be careful, as some of the crack versions might contain viruses, malware or spyware that can harm your computer or steal your personal information.

-

Artcam 2012 Crack


Download Filehttps://urlcod.com/2uHvKM



-

So how can you download and install Artcam 2012 crack safely and legally? Here are some steps you can follow:

-
    -
  1. Go to this website and download the Delcam Artcam 2012 SP2 file. This is a trusted source that provides the original software without any modifications. The file size is about 898 MB and the password for extracting it is "123".[^2^]
  2. -
  3. Extract the file using WinRAR or any other software that can handle RAR files. You will get a folder named Delcam.ArtCAM.v2012.SP2.Build.359.
  4. -
  5. Run the Setup.exe file inside the folder and follow the installation wizard. Choose your preferred language and unit system (millimeter or inch). Do not select the Sentinel Drivers option, as you don't need it for the crack version.
  6. -
  7. After the installation is complete, go to the Crack folder inside the same folder and copy the two subfolders named Exec and Exec64. Paste them into the ArtCAM 2012 folder in your Program Files directory (usually C:\Program Files\ArtCAM 2012) and overwrite the original files.
  8. -
  9. Now you can run Artcam 2012 from your desktop shortcut or start menu. You don't need to enter any license key or serial number, as the crack version has already bypassed the activation process.
  10. -
-

Congratulations! You have successfully downloaded and installed Artcam 2012 crack for free. You can now enjoy creating amazing 3D models and designs with this software.

-

Note: This article is for educational purposes only. We do not encourage or support any illegal activities such as cracking or pirating software. Please respect the intellectual property rights of the software developers and purchase a legitimate license if you want to use Artcam 2012.

Here is a possible continuation of the article with HTML formatting: - -

How to Use Artcam 2012 for CNC Projects

-

Now that you have installed Artcam 2012 crack, you might be wondering how to use it for your CNC projects. Artcam 2012 is user-friendly software that lets you create and edit 2D and 3D designs with ease. You can also import and export various file formats, such as DXF, DWG, EPS, AI, PDF, STL and more.

-

Here are some basic steps to get you started with Artcam 2012:

-
    -
  1. Launch Artcam 2012 from your desktop shortcut or start menu. You will see a home screen with some tutorials and links to learning resources. You can watch these videos or visit these websites to learn more about Artcam 2012 features and functions.[^3^]
  2. -
  3. Create a new model by clicking on the New Model icon on the top left corner of the screen. You can choose the size, resolution and orientation of your model. You can also select a material from the list or create your own custom material.
  4. -
  5. Draw vectors on your model using the drawing tools on the left panel. You can use the snapping options to draw precise shapes and curves. You can also import vectors from other sources by clicking on the Import Vectors icon on the top toolbar.
  6. -
  7. Edit your vectors using the editing tools on the right panel. You can transform, manipulate, join, close, offset, smooth and trim your vectors. You can also use the vector layers to organize your vectors into different groups.
  8. -
  9. Create 3D reliefs from your vectors using the relief tools on the right panel. You can use various methods to create reliefs, such as shape editor, two rail sweep, extrude, dome, texture relief and more. You can also import reliefs from other sources by clicking on the Import Relief icon on the top toolbar.
  10. -
  11. Edit your reliefs using the relief editing tools on the right panel. You can sculpt, smooth, erase, blend, add draft and clip your reliefs. You can also use the relief layers to organize your reliefs into different groups.
  12. -
  13. Create your toolpaths using the toolpath tools on the right panel. You can choose from various machining strategies, such as profile, pocketing, v-carving, engraving, drilling and more. You can also set up your machine parameters, such as tool diameter, feed rate, spindle speed and depth of cut.
  14. -
  15. Preview your toolpaths using the simulation tools on the right panel. You can see how your design will look after machining. You can also check for any errors or collisions in your toolpaths.
  16. -
  17. Save and export your toolpaths using the save and export tools on the top toolbar. You can save your project as an Artcam file (.art) or export it as a CNC file (.nc) that is compatible with your machine controller.
  18. -
-

That's it! You have learned how to use Artcam 2012 for CNC projects. You can now create amazing 3D models and designs with this software.

-

7196e7f11a
-
-
\ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Life And Death Twilight Reimagined Epub Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Life And Death Twilight Reimagined Epub Download.md deleted file mode 100644 index 910010658a49e9810420db016d7b6bad41656189..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Life And Death Twilight Reimagined Epub Download.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Download Life and Death: Twilight Reimagined by Stephenie Meyer in EPUB Format

- -

If you are a fan of the Twilight saga, you might be interested in reading Life and Death: Twilight Reimagined, a reworking of the original story with the genders of the main characters reversed. In this novel, Beaufort Swan moves to the gloomy town of Forks and meets the mysterious, alluring Edythe Cullen, a vampire who is drawn to his blood.

- -

Life and Death: Twilight Reimagined was first published in 2015 as part of the tenth anniversary edition of Twilight, along with a foreword and afterword by the author. It was later released as a standalone book in 2016. If you want to read this novel on your e-reader or other devices, you might be looking for an EPUB version of it.

-

life and death twilight reimagined epub download


Download File ››››› https://urlcod.com/2uHxyQ



- -

EPUB is a popular and widely supported format for digital books that allows you to adjust the font size, layout, and other settings according to your preferences. However, not all books are available in EPUB format, and some might be protected by digital rights management (DRM) that prevents you from copying or sharing them.

- -

So how can you download Life and Death: Twilight Reimagined by Stephenie Meyer in EPUB format? Here are some possible ways:

- -
    -
  • Buy the official EPUB version from an online bookstore. This is the easiest and safest legal way to get the book in EPUB format. You can find it on various platforms such as Amazon Kindle Store, Google Play Books, Apple Books, Kobo, and more. However, you might need to pay a certain amount of money for it, and you might not be able to transfer it to other devices or apps without authorization.
  • -
  • Download it from a free online library. There are some websites that offer free access to thousands of books in different formats, including EPUB. One example is Archive.org, which has both Twilight ; Life and death : a reimagining of the classic novel [^1^] and Life and death : Twilight reimagined [^2^] by Stephenie Meyer in EPUB format. However, you should be careful when downloading books from these sources, as they might not be authorized by the author or publisher, and they might contain viruses or malware.
  • -
  • Convert it from another format. If you already have a copy of the book in another format, such as PDF or MOBI, you can use an online converter tool to change it into EPUB format. There are many free and easy-to-use converters available online, such as Zamzar, Online-Convert, ConvertFiles, and more. However, you should be aware that converting files might affect the quality and layout of the book, and it might not work for DRM-protected files.
  • -
- -

These are some of the ways you can download Life and Death: Twilight Reimagined by Stephenie Meyer in EPUB format. Whichever method you choose, make sure you respect the author's rights and enjoy reading this novel!

-

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/installed.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/installed.py deleted file mode 100644 index edb38aa1a6c54dcb73e2f74b6bdfff337841d99f..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/installed.py +++ /dev/null @@ -1,23 +0,0 @@ -from pip._internal.distributions.base import AbstractDistribution -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution - - -class InstalledDistribution(AbstractDistribution): - """Represents an installed package. - - This does not need any preparation as the required information has already - been computed. - """ - - def get_metadata_distribution(self) -> BaseDistribution: - assert self.req.satisfied_by is not None, "not actually installed" - return self.req.satisfied_by - - def prepare_distribution_metadata( - self, - finder: PackageFinder, - build_isolation: bool, - check_build_deps: bool, - ) -> None: - pass diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/android.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/android.py deleted file mode 100644 index eda80935123cb5db7e18d7fb82fe5f71991d7af8..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/android.py +++ /dev/null @@ -1,120 +0,0 @@ -from __future__ import annotations - -import os -import re -import sys -from functools import lru_cache -from typing import cast - -from .api import PlatformDirsABC - - -class Android(PlatformDirsABC): - """ - Follows the guidance `from here `_. Makes use of the - `appname ` and - `version `. - """ - - @property - def user_data_dir(self) -> str: - """:return: data directory tied to the user, e.g. ``/data/user///files/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "files") - - @property - def site_data_dir(self) -> str: - """:return: data directory shared by users, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_config_dir(self) -> str: - """ - :return: config directory tied to the user, e.g. ``/data/user///shared_prefs/`` - """ - return self._append_app_name_and_version(cast(str, _android_folder()), "shared_prefs") - - @property - def site_config_dir(self) -> str: - """:return: config directory shared by the users, same as `user_config_dir`""" - return self.user_config_dir - - @property - def user_cache_dir(self) -> str: - """:return: cache directory tied to the user, e.g. e.g. ``/data/user///cache/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "cache") - - @property - def user_state_dir(self) -> str: - """:return: state directory tied to the user, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_log_dir(self) -> str: - """ - :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it, - e.g. ``/data/user///cache//log`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "log") - return path - - @property - def user_documents_dir(self) -> str: - """ - :return: documents directory tied to the user e.g. 
``/storage/emulated/0/Documents`` - """ - return _android_documents_folder() - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it, - e.g. ``/data/user///cache//tmp`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "tmp") - return path - - -@lru_cache(maxsize=1) -def _android_folder() -> str | None: - """:return: base folder for the Android OS or None if cannot be found""" - try: - # First try to get path to android app via pyjnius - from jnius import autoclass - - Context = autoclass("android.content.Context") # noqa: N806 - result: str | None = Context.getFilesDir().getParentFile().getAbsolutePath() - except Exception: - # if fails find an android folder looking path on the sys.path - pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files") - for path in sys.path: - if pattern.match(path): - result = path.split("/files")[0] - break - else: - result = None - return result - - -@lru_cache(maxsize=1) -def _android_documents_folder() -> str: - """:return: documents folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - Context = autoclass("android.content.Context") # noqa: N806 - Environment = autoclass("android.os.Environment") # noqa: N806 - documents_dir: str = Context.getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS).getAbsolutePath() - except Exception: - documents_dir = "/storage/emulated/0/Documents" - - return documents_dir - - -__all__ = [ - "Android", -] diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/packages/six.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/packages/six.py deleted file mode 100644 index f099a3dcd28d2fec21457c9b6c01ded4e3e9ddee..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/packages/six.py +++ /dev/null @@ -1,1076 +0,0 @@ -# Copyright (c) 2010-2020 Benjamin Peterson -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -"""Utilities for writing code that runs on Python 2 and 3""" - -from __future__ import absolute_import - -import functools -import itertools -import operator -import sys -import types - -__author__ = "Benjamin Peterson " -__version__ = "1.16.0" - - -# Useful for very coarse version differentiation. 
-PY2 = sys.version_info[0] == 2 -PY3 = sys.version_info[0] == 3 -PY34 = sys.version_info[0:2] >= (3, 4) - -if PY3: - string_types = (str,) - integer_types = (int,) - class_types = (type,) - text_type = str - binary_type = bytes - - MAXSIZE = sys.maxsize -else: - string_types = (basestring,) - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode - binary_type = str - - if sys.platform.startswith("java"): - # Jython always uses 32 bits. - MAXSIZE = int((1 << 31) - 1) - else: - # It's possible to have sizeof(long) != sizeof(Py_ssize_t). - class X(object): - def __len__(self): - return 1 << 31 - - try: - len(X()) - except OverflowError: - # 32-bit - MAXSIZE = int((1 << 31) - 1) - else: - # 64-bit - MAXSIZE = int((1 << 63) - 1) - del X - -if PY34: - from importlib.util import spec_from_loader -else: - spec_from_loader = None - - -def _add_doc(func, doc): - """Add documentation to a function.""" - func.__doc__ = doc - - -def _import_module(name): - """Import module, returning the module after the last dot.""" - __import__(name) - return sys.modules[name] - - -class _LazyDescr(object): - def __init__(self, name): - self.name = name - - def __get__(self, obj, tp): - result = self._resolve() - setattr(obj, self.name, result) # Invokes __set__. - try: - # This is a bit ugly, but it avoids running this again by - # removing this descriptor. - delattr(obj.__class__, self.name) - except AttributeError: - pass - return result - - -class MovedModule(_LazyDescr): - def __init__(self, name, old, new=None): - super(MovedModule, self).__init__(name) - if PY3: - if new is None: - new = name - self.mod = new - else: - self.mod = old - - def _resolve(self): - return _import_module(self.mod) - - def __getattr__(self, attr): - _module = self._resolve() - value = getattr(_module, attr) - setattr(self, attr, value) - return value - - -class _LazyModule(types.ModuleType): - def __init__(self, name): - super(_LazyModule, self).__init__(name) - self.__doc__ = self.__class__.__doc__ - - def __dir__(self): - attrs = ["__doc__", "__name__"] - attrs += [attr.name for attr in self._moved_attributes] - return attrs - - # Subclasses should override this - _moved_attributes = [] - - -class MovedAttribute(_LazyDescr): - def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): - super(MovedAttribute, self).__init__(name) - if PY3: - if new_mod is None: - new_mod = name - self.mod = new_mod - if new_attr is None: - if old_attr is None: - new_attr = name - else: - new_attr = old_attr - self.attr = new_attr - else: - self.mod = old_mod - if old_attr is None: - old_attr = name - self.attr = old_attr - - def _resolve(self): - module = _import_module(self.mod) - return getattr(module, self.attr) - - -class _SixMetaPathImporter(object): - - """ - A meta path importer to import six.moves and its submodules. - - This class implements a PEP302 finder and loader. It should be compatible - with Python 2.5 and all existing versions of Python3 - """ - - def __init__(self, six_module_name): - self.name = six_module_name - self.known_modules = {} - - def _add_module(self, mod, *fullnames): - for fullname in fullnames: - self.known_modules[self.name + "." + fullname] = mod - - def _get_module(self, fullname): - return self.known_modules[self.name + "." 
+ fullname] - - def find_module(self, fullname, path=None): - if fullname in self.known_modules: - return self - return None - - def find_spec(self, fullname, path, target=None): - if fullname in self.known_modules: - return spec_from_loader(fullname, self) - return None - - def __get_module(self, fullname): - try: - return self.known_modules[fullname] - except KeyError: - raise ImportError("This loader does not know module " + fullname) - - def load_module(self, fullname): - try: - # in case of a reload - return sys.modules[fullname] - except KeyError: - pass - mod = self.__get_module(fullname) - if isinstance(mod, MovedModule): - mod = mod._resolve() - else: - mod.__loader__ = self - sys.modules[fullname] = mod - return mod - - def is_package(self, fullname): - """ - Return true, if the named module is a package. - - We need this method to get correct spec objects with - Python 3.4 (see PEP451) - """ - return hasattr(self.__get_module(fullname), "__path__") - - def get_code(self, fullname): - """Return None - - Required, if is_package is implemented""" - self.__get_module(fullname) # eventually raises ImportError - return None - - get_source = get_code # same as get_code - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - - -_importer = _SixMetaPathImporter(__name__) - - -class _MovedItems(_LazyModule): - - """Lazy loading of moved objects""" - - __path__ = [] # mark as package - - -_moved_attributes = [ - MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), - MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), - MovedAttribute( - "filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse" - ), - MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), - MovedAttribute("intern", "__builtin__", "sys"), - MovedAttribute("map", "itertools", "builtins", "imap", "map"), - MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), - MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), - MovedAttribute("getoutput", "commands", "subprocess"), - MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute( - "reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload" - ), - MovedAttribute("reduce", "__builtin__", "functools"), - MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), - MovedAttribute("StringIO", "StringIO", "io"), - MovedAttribute("UserDict", "UserDict", "collections"), - MovedAttribute("UserList", "UserList", "collections"), - MovedAttribute("UserString", "UserString", "collections"), - MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), - MovedAttribute( - "zip_longest", "itertools", "itertools", "izip_longest", "zip_longest" - ), - MovedModule("builtins", "__builtin__"), - MovedModule("configparser", "ConfigParser"), - MovedModule( - "collections_abc", - "collections", - "collections.abc" if sys.version_info >= (3, 3) else "collections", - ), - MovedModule("copyreg", "copy_reg"), - MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), - MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), - MovedModule( - "_dummy_thread", - "dummy_thread", - "_dummy_thread" if sys.version_info < (3, 9) else "_thread", - ), - MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), - MovedModule("http_cookies", "Cookie", "http.cookies"), - MovedModule("html_entities", "htmlentitydefs", "html.entities"), - MovedModule("html_parser", "HTMLParser", "html.parser"), 
- MovedModule("http_client", "httplib", "http.client"), - MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), - MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), - MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), - MovedModule( - "email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart" - ), - MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), - MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), - MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), - MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), - MovedModule("cPickle", "cPickle", "pickle"), - MovedModule("queue", "Queue"), - MovedModule("reprlib", "repr"), - MovedModule("socketserver", "SocketServer"), - MovedModule("_thread", "thread", "_thread"), - MovedModule("tkinter", "Tkinter"), - MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), - MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), - MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), - MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), - MovedModule("tkinter_tix", "Tix", "tkinter.tix"), - MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), - MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), - MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), - MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), - MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), - MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), - MovedModule("tkinter_font", "tkFont", "tkinter.font"), - MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), - MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), - MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), - MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), - MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), - MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), - MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), - MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), -] -# Add windows specific modules. -if sys.platform == "win32": - _moved_attributes += [ - MovedModule("winreg", "_winreg"), - ] - -for attr in _moved_attributes: - setattr(_MovedItems, attr.name, attr) - if isinstance(attr, MovedModule): - _importer._add_module(attr, "moves." 
+ attr.name) -del attr - -_MovedItems._moved_attributes = _moved_attributes - -moves = _MovedItems(__name__ + ".moves") -_importer._add_module(moves, "moves") - - -class Module_six_moves_urllib_parse(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_parse""" - - -_urllib_parse_moved_attributes = [ - MovedAttribute("ParseResult", "urlparse", "urllib.parse"), - MovedAttribute("SplitResult", "urlparse", "urllib.parse"), - MovedAttribute("parse_qs", "urlparse", "urllib.parse"), - MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), - MovedAttribute("urldefrag", "urlparse", "urllib.parse"), - MovedAttribute("urljoin", "urlparse", "urllib.parse"), - MovedAttribute("urlparse", "urlparse", "urllib.parse"), - MovedAttribute("urlsplit", "urlparse", "urllib.parse"), - MovedAttribute("urlunparse", "urlparse", "urllib.parse"), - MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), - MovedAttribute("quote", "urllib", "urllib.parse"), - MovedAttribute("quote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote", "urllib", "urllib.parse"), - MovedAttribute("unquote_plus", "urllib", "urllib.parse"), - MovedAttribute( - "unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes" - ), - MovedAttribute("urlencode", "urllib", "urllib.parse"), - MovedAttribute("splitquery", "urllib", "urllib.parse"), - MovedAttribute("splittag", "urllib", "urllib.parse"), - MovedAttribute("splituser", "urllib", "urllib.parse"), - MovedAttribute("splitvalue", "urllib", "urllib.parse"), - MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), - MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), - MovedAttribute("uses_params", "urlparse", "urllib.parse"), - MovedAttribute("uses_query", "urlparse", "urllib.parse"), - MovedAttribute("uses_relative", "urlparse", "urllib.parse"), -] -for attr in _urllib_parse_moved_attributes: - setattr(Module_six_moves_urllib_parse, attr.name, attr) -del attr - -Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), - "moves.urllib_parse", - "moves.urllib.parse", -) - - -class Module_six_moves_urllib_error(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_error""" - - -_urllib_error_moved_attributes = [ - MovedAttribute("URLError", "urllib2", "urllib.error"), - MovedAttribute("HTTPError", "urllib2", "urllib.error"), - MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), -] -for attr in _urllib_error_moved_attributes: - setattr(Module_six_moves_urllib_error, attr.name, attr) -del attr - -Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), - "moves.urllib_error", - "moves.urllib.error", -) - - -class Module_six_moves_urllib_request(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_request""" - - -_urllib_request_moved_attributes = [ - MovedAttribute("urlopen", "urllib2", "urllib.request"), - MovedAttribute("install_opener", "urllib2", "urllib.request"), - MovedAttribute("build_opener", "urllib2", "urllib.request"), - MovedAttribute("pathname2url", "urllib", "urllib.request"), - MovedAttribute("url2pathname", "urllib", "urllib.request"), - MovedAttribute("getproxies", "urllib", "urllib.request"), - MovedAttribute("Request", "urllib2", "urllib.request"), - MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), - 
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), - MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), - MovedAttribute("BaseHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), - MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), - MovedAttribute("FileHandler", "urllib2", "urllib.request"), - MovedAttribute("FTPHandler", "urllib2", "urllib.request"), - MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), - MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), - MovedAttribute("urlretrieve", "urllib", "urllib.request"), - MovedAttribute("urlcleanup", "urllib", "urllib.request"), - MovedAttribute("URLopener", "urllib", "urllib.request"), - MovedAttribute("FancyURLopener", "urllib", "urllib.request"), - MovedAttribute("proxy_bypass", "urllib", "urllib.request"), - MovedAttribute("parse_http_list", "urllib2", "urllib.request"), - MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), -] -for attr in _urllib_request_moved_attributes: - setattr(Module_six_moves_urllib_request, attr.name, attr) -del attr - -Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), - "moves.urllib_request", - "moves.urllib.request", -) - - -class Module_six_moves_urllib_response(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_response""" - - -_urllib_response_moved_attributes = [ - MovedAttribute("addbase", "urllib", "urllib.response"), - MovedAttribute("addclosehook", "urllib", "urllib.response"), - MovedAttribute("addinfo", "urllib", "urllib.response"), - MovedAttribute("addinfourl", "urllib", "urllib.response"), -] -for attr in _urllib_response_moved_attributes: - setattr(Module_six_moves_urllib_response, attr.name, attr) -del attr - -Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), - "moves.urllib_response", - "moves.urllib.response", -) - - -class Module_six_moves_urllib_robotparser(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_robotparser""" - - -_urllib_robotparser_moved_attributes = [ - MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), -] -for attr in _urllib_robotparser_moved_attributes: - setattr(Module_six_moves_urllib_robotparser, attr.name, attr) -del attr - -Module_six_moves_urllib_robotparser._moved_attributes = ( - _urllib_robotparser_moved_attributes -) - -_importer._add_module( - Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), - 
"moves.urllib_robotparser", - "moves.urllib.robotparser", -) - - -class Module_six_moves_urllib(types.ModuleType): - - """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" - - __path__ = [] # mark as package - parse = _importer._get_module("moves.urllib_parse") - error = _importer._get_module("moves.urllib_error") - request = _importer._get_module("moves.urllib_request") - response = _importer._get_module("moves.urllib_response") - robotparser = _importer._get_module("moves.urllib_robotparser") - - def __dir__(self): - return ["parse", "error", "request", "response", "robotparser"] - - -_importer._add_module( - Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib" -) - - -def add_move(move): - """Add an item to six.moves.""" - setattr(_MovedItems, move.name, move) - - -def remove_move(name): - """Remove item from six.moves.""" - try: - delattr(_MovedItems, name) - except AttributeError: - try: - del moves.__dict__[name] - except KeyError: - raise AttributeError("no such move, %r" % (name,)) - - -if PY3: - _meth_func = "__func__" - _meth_self = "__self__" - - _func_closure = "__closure__" - _func_code = "__code__" - _func_defaults = "__defaults__" - _func_globals = "__globals__" -else: - _meth_func = "im_func" - _meth_self = "im_self" - - _func_closure = "func_closure" - _func_code = "func_code" - _func_defaults = "func_defaults" - _func_globals = "func_globals" - - -try: - advance_iterator = next -except NameError: - - def advance_iterator(it): - return it.next() - - -next = advance_iterator - - -try: - callable = callable -except NameError: - - def callable(obj): - return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) - - -if PY3: - - def get_unbound_function(unbound): - return unbound - - create_bound_method = types.MethodType - - def create_unbound_method(func, cls): - return func - - Iterator = object -else: - - def get_unbound_function(unbound): - return unbound.im_func - - def create_bound_method(func, obj): - return types.MethodType(func, obj, obj.__class__) - - def create_unbound_method(func, cls): - return types.MethodType(func, None, cls) - - class Iterator(object): - def next(self): - return type(self).__next__(self) - - callable = callable -_add_doc( - get_unbound_function, """Get the function out of a possibly unbound function""" -) - - -get_method_function = operator.attrgetter(_meth_func) -get_method_self = operator.attrgetter(_meth_self) -get_function_closure = operator.attrgetter(_func_closure) -get_function_code = operator.attrgetter(_func_code) -get_function_defaults = operator.attrgetter(_func_defaults) -get_function_globals = operator.attrgetter(_func_globals) - - -if PY3: - - def iterkeys(d, **kw): - return iter(d.keys(**kw)) - - def itervalues(d, **kw): - return iter(d.values(**kw)) - - def iteritems(d, **kw): - return iter(d.items(**kw)) - - def iterlists(d, **kw): - return iter(d.lists(**kw)) - - viewkeys = operator.methodcaller("keys") - - viewvalues = operator.methodcaller("values") - - viewitems = operator.methodcaller("items") -else: - - def iterkeys(d, **kw): - return d.iterkeys(**kw) - - def itervalues(d, **kw): - return d.itervalues(**kw) - - def iteritems(d, **kw): - return d.iteritems(**kw) - - def iterlists(d, **kw): - return d.iterlists(**kw) - - viewkeys = operator.methodcaller("viewkeys") - - viewvalues = operator.methodcaller("viewvalues") - - viewitems = operator.methodcaller("viewitems") - -_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") -_add_doc(itervalues, "Return 
an iterator over the values of a dictionary.") -_add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") -_add_doc( - iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary." -) - - -if PY3: - - def b(s): - return s.encode("latin-1") - - def u(s): - return s - - unichr = chr - import struct - - int2byte = struct.Struct(">B").pack - del struct - byte2int = operator.itemgetter(0) - indexbytes = operator.getitem - iterbytes = iter - import io - - StringIO = io.StringIO - BytesIO = io.BytesIO - del io - _assertCountEqual = "assertCountEqual" - if sys.version_info[1] <= 1: - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" - else: - _assertRaisesRegex = "assertRaisesRegex" - _assertRegex = "assertRegex" - _assertNotRegex = "assertNotRegex" -else: - - def b(s): - return s - - # Workaround for standalone backslash - - def u(s): - return unicode(s.replace(r"\\", r"\\\\"), "unicode_escape") - - unichr = unichr - int2byte = chr - - def byte2int(bs): - return ord(bs[0]) - - def indexbytes(buf, i): - return ord(buf[i]) - - iterbytes = functools.partial(itertools.imap, ord) - import StringIO - - StringIO = BytesIO = StringIO.StringIO - _assertCountEqual = "assertItemsEqual" - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" -_add_doc(b, """Byte literal""") -_add_doc(u, """Text literal""") - - -def assertCountEqual(self, *args, **kwargs): - return getattr(self, _assertCountEqual)(*args, **kwargs) - - -def assertRaisesRegex(self, *args, **kwargs): - return getattr(self, _assertRaisesRegex)(*args, **kwargs) - - -def assertRegex(self, *args, **kwargs): - return getattr(self, _assertRegex)(*args, **kwargs) - - -def assertNotRegex(self, *args, **kwargs): - return getattr(self, _assertNotRegex)(*args, **kwargs) - - -if PY3: - exec_ = getattr(moves.builtins, "exec") - - def reraise(tp, value, tb=None): - try: - if value is None: - value = tp() - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None - tb = None - -else: - - def exec_(_code_, _globs_=None, _locs_=None): - """Execute code in a namespace.""" - if _globs_ is None: - frame = sys._getframe(1) - _globs_ = frame.f_globals - if _locs_ is None: - _locs_ = frame.f_locals - del frame - elif _locs_ is None: - _locs_ = _globs_ - exec ("""exec _code_ in _globs_, _locs_""") - - exec_( - """def reraise(tp, value, tb=None): - try: - raise tp, value, tb - finally: - tb = None -""" - ) - - -if sys.version_info[:2] > (3,): - exec_( - """def raise_from(value, from_value): - try: - raise value from from_value - finally: - value = None -""" - ) -else: - - def raise_from(value, from_value): - raise value - - -print_ = getattr(moves.builtins, "print", None) -if print_ is None: - - def print_(*args, **kwargs): - """The new-style print function for Python 2.4 and 2.5.""" - fp = kwargs.pop("file", sys.stdout) - if fp is None: - return - - def write(data): - if not isinstance(data, basestring): - data = str(data) - # If the file has an encoding, encode unicode with it. 
- if ( - isinstance(fp, file) - and isinstance(data, unicode) - and fp.encoding is not None - ): - errors = getattr(fp, "errors", None) - if errors is None: - errors = "strict" - data = data.encode(fp.encoding, errors) - fp.write(data) - - want_unicode = False - sep = kwargs.pop("sep", None) - if sep is not None: - if isinstance(sep, unicode): - want_unicode = True - elif not isinstance(sep, str): - raise TypeError("sep must be None or a string") - end = kwargs.pop("end", None) - if end is not None: - if isinstance(end, unicode): - want_unicode = True - elif not isinstance(end, str): - raise TypeError("end must be None or a string") - if kwargs: - raise TypeError("invalid keyword arguments to print()") - if not want_unicode: - for arg in args: - if isinstance(arg, unicode): - want_unicode = True - break - if want_unicode: - newline = unicode("\n") - space = unicode(" ") - else: - newline = "\n" - space = " " - if sep is None: - sep = space - if end is None: - end = newline - for i, arg in enumerate(args): - if i: - write(sep) - write(arg) - write(end) - - -if sys.version_info[:2] < (3, 3): - _print = print_ - - def print_(*args, **kwargs): - fp = kwargs.get("file", sys.stdout) - flush = kwargs.pop("flush", False) - _print(*args, **kwargs) - if flush and fp is not None: - fp.flush() - - -_add_doc(reraise, """Reraise an exception.""") - -if sys.version_info[0:2] < (3, 4): - # This does exactly the same what the :func:`py3:functools.update_wrapper` - # function does on Python versions after 3.2. It sets the ``__wrapped__`` - # attribute on ``wrapper`` object and it doesn't raise an error if any of - # the attributes mentioned in ``assigned`` and ``updated`` are missing on - # ``wrapped`` object. - def _update_wrapper( - wrapper, - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - for attr in assigned: - try: - value = getattr(wrapped, attr) - except AttributeError: - continue - else: - setattr(wrapper, attr, value) - for attr in updated: - getattr(wrapper, attr).update(getattr(wrapped, attr, {})) - wrapper.__wrapped__ = wrapped - return wrapper - - _update_wrapper.__doc__ = functools.update_wrapper.__doc__ - - def wraps( - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - return functools.partial( - _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated - ) - - wraps.__doc__ = functools.wraps.__doc__ - -else: - wraps = functools.wraps - - -def with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - # This requires a bit of explanation: the basic idea is to make a dummy - # metaclass for one level of class instantiation that replaces itself with - # the actual metaclass. - class metaclass(type): - def __new__(cls, name, this_bases, d): - if sys.version_info[:2] >= (3, 7): - # This version introduced PEP 560 that requires a bit - # of extra care (we mimic what is done by __build_class__). 
- resolved_bases = types.resolve_bases(bases) - if resolved_bases is not bases: - d["__orig_bases__"] = bases - else: - resolved_bases = bases - return meta(name, resolved_bases, d) - - @classmethod - def __prepare__(cls, name, this_bases): - return meta.__prepare__(name, bases) - - return type.__new__(metaclass, "temporary_class", (), {}) - - -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get("__slots__") - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop("__dict__", None) - orig_vars.pop("__weakref__", None) - if hasattr(cls, "__qualname__"): - orig_vars["__qualname__"] = cls.__qualname__ - return metaclass(cls.__name__, cls.__bases__, orig_vars) - - return wrapper - - -def ensure_binary(s, encoding="utf-8", errors="strict"): - """Coerce **s** to six.binary_type. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> encoded to `bytes` - - `bytes` -> `bytes` - """ - if isinstance(s, binary_type): - return s - if isinstance(s, text_type): - return s.encode(encoding, errors) - raise TypeError("not expecting type '%s'" % type(s)) - - -def ensure_str(s, encoding="utf-8", errors="strict"): - """Coerce *s* to `str`. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - # Optimization: Fast return for the common case. - if type(s) is str: - return s - if PY2 and isinstance(s, text_type): - return s.encode(encoding, errors) - elif PY3 and isinstance(s, binary_type): - return s.decode(encoding, errors) - elif not isinstance(s, (text_type, binary_type)): - raise TypeError("not expecting type '%s'" % type(s)) - return s - - -def ensure_text(s, encoding="utf-8", errors="strict"): - """Coerce *s* to six.text_type. - - For Python 2: - - `unicode` -> `unicode` - - `str` -> `unicode` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - if isinstance(s, binary_type): - return s.decode(encoding, errors) - elif isinstance(s, text_type): - return s - else: - raise TypeError("not expecting type '%s'" % type(s)) - - -def python_2_unicode_compatible(klass): - """ - A class decorator that defines __unicode__ and __str__ methods under Python 2. - Under Python 3 it does nothing. - - To support Python 2 and 3 with a single code base, define a __str__ method - returning text and apply this decorator to the class. - """ - if PY2: - if "__str__" not in klass.__dict__: - raise ValueError( - "@python_2_unicode_compatible cannot be applied " - "to %s because it doesn't define __str__()." % klass.__name__ - ) - klass.__unicode__ = klass.__str__ - klass.__str__ = lambda self: self.__unicode__().encode("utf-8") - return klass - - -# Complete the moves implementation. -# This code is at the end of this module to speed up module loading. -# Turn this module into a package. -__path__ = [] # required for PEP 302 and PEP 451 -__package__ = __name__ # see PEP 366 @ReservedAssignment -if globals().get("__spec__") is not None: - __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable -# Remove other six meta path importers, since they cause problems. This can -# happen if six is removed from sys.modules and then reloaded. (Setuptools does -# this for some reason.) 
-if sys.meta_path: - for i, importer in enumerate(sys.meta_path): - # Here's some real nastiness: Another "instance" of the six module might - # be floating around. Therefore, we can't use isinstance() to check for - # the six meta path importer, since the other six instance will have - # inserted an importer with different class. - if ( - type(importer).__name__ == "_SixMetaPathImporter" - and importer.name == __name__ - ): - del sys.meta_path[i] - break - del i, importer -# Finally, add the importer to the meta path import hook. -sys.meta_path.append(_importer) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_itertools.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_itertools.py deleted file mode 100644 index b8bf6d210aec669b6b948942eda1db953e8725fa..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_itertools.py +++ /dev/null @@ -1,23 +0,0 @@ -from setuptools.extern.more_itertools import consume # noqa: F401 - - -# copied from jaraco.itertools 6.1 -def ensure_unique(iterable, key=lambda x: x): - """ - Wrap an iterable to raise a ValueError if non-unique values are encountered. - - >>> list(ensure_unique('abc')) - ['a', 'b', 'c'] - >>> consume(ensure_unique('abca')) - Traceback (most recent call last): - ... - ValueError: Duplicate element 'a' encountered. - """ - seen = set() - seen_add = seen.add - for element in iterable: - k = key(element) - if k in seen: - raise ValueError(f"Duplicate element {element!r} encountered.") - seen_add(k) - yield element diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h deleted file mode 100644 index 308861e44774dffd89b3f5ebff7cc6c5491fe3a5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/csrc/SigmoidFocalLoss.h +++ /dev/null @@ -1,41 +0,0 @@ -#pragma once - -#include "cpu/vision.h" - -#ifdef WITH_CUDA -#include "cuda/vision.h" -#endif - -// Interface for Python -at::Tensor SigmoidFocalLoss_forward( - const at::Tensor& logits, - const at::Tensor& targets, - const int num_classes, - const float gamma, - const float alpha) { - if (logits.type().is_cuda()) { -#ifdef WITH_CUDA - return SigmoidFocalLoss_forward_cuda(logits, targets, num_classes, gamma, alpha); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} - -at::Tensor SigmoidFocalLoss_backward( - const at::Tensor& logits, - const at::Tensor& targets, - const at::Tensor& d_losses, - const int num_classes, - const float gamma, - const float alpha) { - if (logits.type().is_cuda()) { -#ifdef WITH_CUDA - return SigmoidFocalLoss_backward_cuda(logits, targets, d_losses, num_classes, gamma, alpha); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/get_started.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/get_started.md deleted file mode 100644 index d237fa7b35e9e9bd81d8f6f1a48d97c5f2a74d2d..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/get_started.md +++ /dev/null @@ -1,219 +0,0 @@ -## Prerequisites - -- Linux or macOS (Windows is in experimental support) -- Python 3.6+ -- PyTorch 1.3+ -- CUDA 9.2+ (If you build PyTorch from source, 
CUDA 9.0 is also compatible) -- GCC 5+ -- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation) - -The compatible MMDetection and MMCV versions are as below. Please install the correct version of MMCV to avoid installation issues. - -| MMDetection version | MMCV version | -|:-------------------:|:-------------------:| -| master | mmcv-full>=1.3.2, <1.4.0 | -| 2.11.0 | mmcv-full>=1.2.4, <1.4.0 | -| 2.10.0 | mmcv-full>=1.2.4, <1.4.0 | -| 2.9.0 | mmcv-full>=1.2.4, <1.4.0 | -| 2.8.0 | mmcv-full>=1.2.4, <1.4.0 | -| 2.7.0 | mmcv-full>=1.1.5, <1.4.0 | -| 2.6.0 | mmcv-full>=1.1.5, <1.4.0 | -| 2.5.0 | mmcv-full>=1.1.5, <1.4.0 | -| 2.4.0 | mmcv-full>=1.1.1, <1.4.0 | -| 2.3.0 | mmcv-full==1.0.5 | -| 2.3.0rc0 | mmcv-full>=1.0.2 | -| 2.2.1 | mmcv==0.6.2 | -| 2.2.0 | mmcv==0.6.2 | -| 2.1.0 | mmcv>=0.5.9, <=0.6.1| -| 2.0.0 | mmcv>=0.5.1, <=0.5.8| - -Note: You need to run `pip uninstall mmcv` first if you have mmcv installed. -If mmcv and mmcv-full are both installed, there will be `ModuleNotFoundError`. - -## Installation - -0. You can simply install mmdetection with the following commands: - `pip install mmdet` - -1. Create a conda virtual environment and activate it. - - ```shell - conda create -n open-mmlab python=3.7 -y - conda activate open-mmlab - ``` - -2. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/), e.g., - - ```shell - conda install pytorch torchvision -c pytorch - ``` - - Note: Make sure that your compilation CUDA version and runtime CUDA version match. - You can check the supported CUDA version for precompiled packages on the [PyTorch website](https://pytorch.org/). - - `E.g.1` If you have CUDA 10.1 installed under `/usr/local/cuda` and would like to install - PyTorch 1.5, you need to install the prebuilt PyTorch with CUDA 10.1. - - ```shell - conda install pytorch cudatoolkit=10.1 torchvision -c pytorch - ``` - - `E.g. 2` If you have CUDA 9.2 installed under `/usr/local/cuda` and would like to install - PyTorch 1.3.1., you need to install the prebuilt PyTorch with CUDA 9.2. - - ```shell - conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch - ``` - - If you build PyTorch from source instead of installing the prebuilt pacakge, - you can use more CUDA versions such as 9.0. - -3. Install mmcv-full, we recommend you to install the pre-build package as below. - - ```shell - pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html - ``` - - Please replace `{cu_version}` and `{torch_version}` in the url to your desired one. For example, to install the latest `mmcv-full` with `CUDA 11` and `PyTorch 1.7.0`, use the following command: - - ```shell - pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html - ``` - - See [here](https://github.com/open-mmlab/mmcv#install-with-pip) for different versions of MMCV compatible to different PyTorch and CUDA versions. - Optionally you can choose to compile mmcv from source by the following command - - ```shell - git clone https://github.com/open-mmlab/mmcv.git - cd mmcv - MMCV_WITH_OPS=1 pip install -e . # package mmcv-full will be installed after this step - cd .. - ``` - - Or directly run - - ```shell - pip install mmcv-full - ``` - -4. Clone the MMDetection repository. - - ```shell - git clone https://github.com/open-mmlab/mmdetection.git - cd mmdetection - ``` - -5. Install build requirements and then install MMDetection. - - ```shell - pip install -r requirements/build.txt - pip install -v -e . 
# or "python setup.py develop" - ``` - -Note: - -a. Following the above instructions, MMDetection is installed on `dev` mode -, any local modifications made to the code will take effect without the need to reinstall it. - -b. If you would like to use `opencv-python-headless` instead of `opencv --python`, -you can install it before installing MMCV. - -c. Some dependencies are optional. Simply running `pip install -v -e .` will - only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`. - -### Install with CPU only - -The code can be built for CPU only environment (where CUDA isn't available). - -In CPU mode you can run the demo/webcam_demo.py for example. -However some functionality is gone in this mode: - -- Deformable Convolution -- Modulated Deformable Convolution -- ROI pooling -- Deformable ROI pooling -- CARAFE: Content-Aware ReAssembly of FEatures -- SyncBatchNorm -- CrissCrossAttention: Criss-Cross Attention -- MaskedConv2d -- Temporal Interlace Shift -- nms_cuda -- sigmoid_focal_loss_cuda -- bbox_overlaps - -So if you try to run inference with a model containing above ops you will get an error. The following table lists the related methods that cannot inference on CPU due to dependency on these operators - -| Operator | Model | -| :-----------------------------------------------------: | :----------------------------------------------------------: | -| Deformable Convolution/Modulated Deformable Convolution | DCN、Guided Anchoring、RepPoints、CentripetalNet、VFNet、CascadeRPN、NAS-FCOS、DetectoRS | -| MaskedConv2d | Guided Anchoring | -| CARAFE | CARAFE | -| SyncBatchNorm | ResNeSt | - -**Notice**: MMDetection does not support training with CPU for now. - -### Another option: Docker Image - -We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection/blob/master/docker/Dockerfile) to build an image. Ensure that you are using [docker version](https://docs.docker.com/engine/install/) >=19.03. - -```shell -# build an image with PyTorch 1.6, CUDA 10.1 -docker build -t mmdetection docker/ -``` - -Run it with - -```shell -docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection/data mmdetection -``` - -### A from-scratch setup script - -Assuming that you already have CUDA 10.1 installed, here is a full script for setting up MMDetection with conda. - -```shell -conda create -n open-mmlab python=3.7 -y -conda activate open-mmlab - -conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch -y - -# install the latest mmcv -pip install mmcv-full==latest+torch1.6.0+cu101 -f https://download.openmmlab.com/mmcv/dist/index.html - -# install mmdetection -git clone https://github.com/open-mmlab/mmdetection.git -cd mmdetection -pip install -r requirements/build.txt -pip install -v -e . -``` - -### Developing with multiple MMDetection versions - -The train and test scripts already modify the `PYTHONPATH` to ensure the script use the MMDetection in the current directory. 
- -To use the default MMDetection installed in the environment rather than that you are working with, you can remove the following line in those scripts - -```shell -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH -``` - -## Verification - -To verify whether MMDetection and the required environment are installed correctly, we can run sample Python code to initialize a detector and run inference a demo image: - -```python -from mmdet.apis import init_detector, inference_detector - -config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -# download the checkpoint from model zoo and put it in `checkpoints/` -# url: http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth -checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth' -device = 'cuda:0' -# init a detector -model = init_detector(config_file, checkpoint_file, device=device) -# inference the demo image -inference_detector(model, 'demo/demo.jpg') -``` - -The above code is supposed to run successfully upon you finish the installation. diff --git a/spaces/transiteration/nemo_stt_kz_quartznet15x5/app.py b/spaces/transiteration/nemo_stt_kz_quartznet15x5/app.py deleted file mode 100644 index a404597e90b8d988e04b4642ff4d1e9d4b8ba860..0000000000000000000000000000000000000000 --- a/spaces/transiteration/nemo_stt_kz_quartznet15x5/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import os -import gradio as gr -from transcribe import transcribe - - -title = "Automatic Speech Recognition Using NVIDIA NeMo for Kazakh Speech" -example_list = [["examples/" + example] for example in os.listdir("examples")] - -demo = gr.Interface( - fn=transcribe, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs="text", - title=title, - examples=example_list) - -demo.launch() \ No newline at end of file diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/__init__.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/__init__.py deleted file mode 100644 index 92a98f47e9ad8bf3c3886ad4e4a525e45a6eed6e..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .multiple_pad import * -from .vector_retrieval import * -from .make_nlp_mask import * diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/dataset/_cocostuffhelper.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/dataset/_cocostuffhelper.py deleted file mode 100644 index bc776beae66678d5c7d453a9c6df22651e187179..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/dataset/_cocostuffhelper.py +++ /dev/null @@ -1,202 +0,0 @@ -__author__ = 'hcaesar' - -# Helper functions used to convert between different formats for the -# COCO Stuff Segmentation Challenge. -# -# Note: Some functions use the Pillow image package, which may need -# to be installed manually. -# -# Microsoft COCO Toolbox. version 2.0 -# Data, paper, and tutorials available at: http://mscoco.org/ -# Code written by Piotr Dollar and Tsung-Yi Lin, 2015. 
-# Licensed under the Simplified BSD License [see coco/license.txt] - -import numpy as np -from pycocotools import mask -from PIL import Image, ImagePalette # For indexed images -import matplotlib # For Matlab's color maps - -def segmentationToCocoMask(labelMap, labelId): - ''' - Encodes a segmentation mask using the Mask API. - :param labelMap: [h x w] segmentation map that indicates the label of each pixel - :param labelId: the label from labelMap that will be encoded - :return: Rs - the encoded label mask for label 'labelId' - ''' - labelMask = labelMap == labelId - labelMask = np.expand_dims(labelMask, axis=2) - labelMask = labelMask.astype('uint8') - labelMask = np.asfortranarray(labelMask) - Rs = mask.encode(labelMask) - assert len(Rs) == 1 - Rs = Rs[0] - - return Rs - -def segmentationToCocoResult(labelMap, imgId, stuffStartId=92): - ''' - Convert a segmentation map to COCO stuff segmentation result format. - :param labelMap: [h x w] segmentation map that indicates the label of each pixel - :param imgId: the id of the COCO image (last part of the file name) - :param stuffStartId: (optional) index where stuff classes start - :return: anns - a list of dicts for each label in this image - .image_id - the id of the COCO image - .category_id - the id of the stuff class of this annotation - .segmentation - the RLE encoded segmentation of this class - ''' - - # Get stuff labels - shape = labelMap.shape - if len(shape) != 2: - raise Exception(('Error: Image has %d instead of 2 channels! Most likely you ' - 'provided an RGB image instead of an indexed image (with or without color palette).') % len(shape)) - [h, w] = shape - assert h > 0 and w > 0 - labelsAll = np.unique(labelMap) - labelsStuff = [i for i in labelsAll if i >= stuffStartId] - - # Add stuff annotations - anns = [] - for labelId in labelsStuff: - - # Create mask and encode it - Rs = segmentationToCocoMask(labelMap, labelId) - - # Create annotation data and add it to the list - anndata = {} - anndata['image_id'] = int(imgId) - anndata['category_id'] = int(labelId) - anndata['segmentation'] = Rs - anns.append(anndata) - return anns - -def cocoSegmentationToSegmentationMap(coco, imgId, checkUniquePixelLabel=True, includeCrowd=False): - ''' - Convert COCO GT or results for a single image to a segmentation map. - :param coco: an instance of the COCO API (ground-truth or result) - :param imgId: the id of the COCO image - :param checkUniquePixelLabel: (optional) whether every pixel can have at most one label - :param includeCrowd: whether to include 'crowd' thing annotations as 'other' (or void) - :return: labelMap - [h x w] segmentation map that indicates the label of each pixel - ''' - - # Init - curImg = coco.imgs[imgId] - imageSize = (curImg['height'], curImg['width']) - labelMap = np.zeros(imageSize) - - # Get annotations of the current image (may be empty) - imgAnnots = [a for a in coco.anns.values() if a['image_id'] == imgId] - if includeCrowd: - annIds = coco.getAnnIds(imgIds=imgId) - else: - annIds = coco.getAnnIds(imgIds=imgId, iscrowd=False) - imgAnnots = coco.loadAnns(annIds) - - # Combine all annotations of this image in labelMap - #labelMasks = mask.decode([a['segmentation'] for a in imgAnnots]) - for a in range(0, len(imgAnnots)): - labelMask = coco.annToMask(imgAnnots[a]) == 1 - #labelMask = labelMasks[:, :, a] == 1 - newLabel = imgAnnots[a]['category_id'] - - if checkUniquePixelLabel and (labelMap[labelMask] != 0).any(): - raise Exception('Error: Some pixels have more than one label (image %d)!' 
% (imgId)) - - labelMap[labelMask] = newLabel - - return labelMap - -def pngToCocoResult(pngPath, imgId, stuffStartId=92): - ''' - Reads an indexed .png file with a label map from disk and converts it to COCO result format. - :param pngPath: the path of the .png file - :param imgId: the COCO id of the image (last part of the file name) - :param stuffStartId: (optional) index where stuff classes start - :return: anns - a list of dicts for each label in this image - .image_id - the id of the COCO image - .category_id - the id of the stuff class of this annotation - .segmentation - the RLE encoded segmentation of this class - ''' - - # Read indexed .png file from disk - im = Image.open(pngPath) - labelMap = np.array(im) - - # Convert label map to COCO result format - anns = segmentationToCocoResult(labelMap, imgId, stuffStartId) - return anns - -def cocoSegmentationToPng(coco, imgId, pngPath, includeCrowd=False): - ''' - Convert COCO GT or results for a single image to a segmentation map and write it to disk. - :param coco: an instance of the COCO API (ground-truth or result) - :param imgId: the COCO id of the image (last part of the file name) - :param pngPath: the path of the .png file - :param includeCrowd: whether to include 'crowd' thing annotations as 'other' (or void) - :return: None - ''' - - # Create label map - labelMap = cocoSegmentationToSegmentationMap(coco, imgId, includeCrowd=includeCrowd) - labelMap = labelMap.astype(np.int8) - - # Get color map and convert to PIL's format - cmap = getCMap() - cmap = (cmap * 255).astype(int) - padding = np.zeros((256-cmap.shape[0], 3), np.int8) - cmap = np.vstack((cmap, padding)) - cmap = cmap.reshape((-1)) - assert len(cmap) == 768, 'Error: Color map must have exactly 256*3 elements!' - - # Write to png file - png = Image.fromarray(labelMap).convert('P') - png.putpalette(cmap) - png.save(pngPath, format='PNG') - -def getCMap(stuffStartId=92, stuffEndId=182, cmapName='jet', addThings=True, addUnlabeled=True, addOther=True): - ''' - Create a color map for the classes in the COCO Stuff Segmentation Challenge. 
- :param stuffStartId: (optional) index where stuff classes start - :param stuffEndId: (optional) index where stuff classes end - :param cmapName: (optional) Matlab's name of the color map - :param addThings: (optional) whether to add a color for the 91 thing classes - :param addUnlabeled: (optional) whether to add a color for the 'unlabeled' class - :param addOther: (optional) whether to add a color for the 'other' class - :return: cmap - [c, 3] a color map for c colors where the columns indicate the RGB values - ''' - - # Get jet color map from Matlab - labelCount = stuffEndId - stuffStartId + 1 - cmapGen = matplotlib.cm.get_cmap(cmapName, labelCount) - cmap = cmapGen(np.arange(labelCount)) - cmap = cmap[:, 0:3] - - # Reduce value/brightness of stuff colors (easier in HSV format) - cmap = cmap.reshape((-1, 1, 3)) - hsv = matplotlib.colors.rgb_to_hsv(cmap) - hsv[:, 0, 2] = hsv[:, 0, 2] * 0.7 - cmap = matplotlib.colors.hsv_to_rgb(hsv) - cmap = cmap.reshape((-1, 3)) - - # Permute entries to avoid classes with similar name having similar colors - st0 = np.random.get_state() - np.random.seed(42) - perm = np.random.permutation(labelCount) - np.random.set_state(st0) - cmap = cmap[perm, :] - - # Add black (or any other) color for each thing class - if addThings: - thingsPadding = np.zeros((stuffStartId - 1, 3)) - cmap = np.vstack((thingsPadding, cmap)) - - # Add black color for 'unlabeled' class - if addUnlabeled: - cmap = np.vstack(((0.0, 0.0, 0.0), cmap)) - - # Add yellow/orange color for 'other' class - if addOther: - cmap = np.vstack((cmap, (1.0, 0.843, 0.0))) - - return cmap \ No newline at end of file diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/imagescope_xml_utils.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/imagescope_xml_utils.py deleted file mode 100644 index c469cd1ca4a9c186797af99451205b5cfa2c892d..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/imagescope_xml_utils.py +++ /dev/null @@ -1,313 +0,0 @@ -''' -软件 Aperio ImageScope 的 XML文件 读取和写入工具 -目前支持 轮廓,方框,椭圆,箭头 的读入和写入操作 -注意,除了标签数据,其他非必要的信息目前不提供支持 - -读取后将会返回 区域列表和颜色元组列表 -存档时需要的也是 区域列表和颜色元组列表 - -如何使用? -请看本文件最下面的测试样例 - -注意: -这里读取返回和储存时需要的坐标点格式不是 xy,而是 yx。使用时请务必注意。 - -数据格式: -CONTOUR:一串坐标点。[point1, point2, point3, ...] -BOX: 有两种方式,可通过 use_box_y1x1y2x2 参数选择。方式1:[左上角坐标,右下角坐标],方式2:[左上角坐标,右上角坐标,右下角坐标,左下角坐标] -ELLIPSE:未知,等待补充 -ARROW:读取时,有两种方式:可通过 keep_arrow_tail 参数选择,如果为真,格式为[point_head, point_tail],否则格式为 point_head - 存储时,若只有 point_head,没有 point_tail,需设定 auto_tail 为真,将自动生成 point_tail,否则会报错。 - -''' - - -import lxml.etree as etree -import numpy as np -from typing import Tuple - - -TYPE_CONTOUR = 0 # 单个格式:[pt1_yx, pt2_yx, pt3_yx, ...] -TYPE_BOX = 1 # 单个格式:[pt_tl_yx, pt_tr_yx, pt_br_yx, pt_bl_yx] -TYPE_ELLIPSE = 2 # 单个格式:[[y1, x1], [y2, x2]] -TYPE_ARROW = 3 # 单个格式:[hear_yx, tail_yx] 或 hear_yx - - -def color_int_to_tuple(color_int): - ''' - 将RGB颜色元组转换为颜色整数 - :param color_int: - :return: - ''' - color_str = hex(color_int)[2:] - assert len(color_str) <= 6, 'Found unknow color!' - pad_count = 6 - len(color_str) - color_str = ''.join(['0'] * pad_count) + color_str - b, g, r = int(color_str[0:2], 16), int(color_str[2:4], 16), int(color_str[4:6], 16) - return r, g, b - - -def color_tuple_to_int(color_tuple): - ''' - 将RGB颜色元组转换为颜色整数 - :param color_tuple: - :return: - ''' - assert len(color_tuple) == 3, 'Found unknow color tuple!' 
- r, g, b = color_tuple - color_int = r + (g << 8) + (b << 16) - return color_int - - -class ImageScopeXmlReader: - def __init__(self, file=None, keep_arrow_tail=False, use_box_y1x1y2x2=True): - ''' - - :param file: 读取文件路径 - :param keep_arrow_tail: 读取箭头标签时是否保留箭头的尾部 - :param use_box_y1x1y2x2:读取方盒标签时是否使用y1x1y2x2坐标,若设为False则使用[左上,右上,右下,左下]坐标 - ''' - self.keep_arrow_tail = keep_arrow_tail - self.use_box_y1x1y2x2 = use_box_y1x1y2x2 - self.contour_color_regs = {} - self.box_color_regs = {} - self.arrow_color_regs = {} - self.ellipse_color_regs = {} - if file is not None: - self.read(file) - - def read(self, file): - tree = etree.parse(file) - for ann in tree.findall('./Annotation'): - color_int = int(ann.attrib['LineColor']) - color_tuple = color_int_to_tuple(color_int) - for region in ann.findall('./Regions/Region'): - reg_type = int(region.attrib['Type']) - - if reg_type == TYPE_ARROW: - # 读取箭头标签 - self.arrow_color_regs.setdefault(color_tuple, []) - arrow_head_tail_points = [] - for vertex in region.findall('./Vertices/Vertex'): - x = int(float(vertex.attrib['X'])) - y = int(float(vertex.attrib['Y'])) - arrow_head_tail_points.append((y, x)) - arrow_points = np.asarray(arrow_head_tail_points, np.int) - if not self.keep_arrow_tail: - arrow_points = arrow_points[0] - self.arrow_color_regs[color_tuple].append(arrow_points) - - elif reg_type == TYPE_BOX: - # 读取盒状标签 - self.box_color_regs.setdefault(color_tuple, []) - box_points = [] - for vertex in region.findall('./Vertices/Vertex'): - x = int(float(vertex.attrib['X'])) - y = int(float(vertex.attrib['Y'])) - box_points.append((y, x)) - box_points = np.asarray(box_points, np.int) - if self.use_box_y1x1y2x2: - y1, x1 = box_points[0] - y2, x2 = box_points[2] - box_points = np.array([y1, x1, y2, x2]) - self.box_color_regs[color_tuple].append(box_points) - - elif reg_type == TYPE_CONTOUR: - # 读取轮廓标签 - self.contour_color_regs.setdefault(color_tuple, []) - contours = [] - for vertex in region.findall('./Vertices/Vertex'): - x = int(float(vertex.attrib['X'])) - y = int(float(vertex.attrib['Y'])) - contours.append((y, x)) - contours = np.asarray(contours, np.int) - self.contour_color_regs[color_tuple].append(contours) - - elif reg_type == TYPE_ELLIPSE: - # 读取椭圆标签 - self.ellipse_color_regs.setdefault(color_tuple, []) - ellipse = [] - for vertex in region.findall('./Vertices/Vertex'): - x = int(float(vertex.attrib['X'])) - y = int(float(vertex.attrib['Y'])) - ellipse.append((y, x)) - ellipse = np.asarray(ellipse, np.int) - self.ellipse_color_regs[color_tuple].append(ellipse) - - else: - print('Unknow type {}. 
Will be skip.'.format(reg_type)) - - def get_contours(self): - contours, colors = [], [] - for color in self.contour_color_regs: - contours.extend(self.contour_color_regs[color]) - colors.extend([color]*len(self.contour_color_regs[color])) - return contours, colors - - def get_boxes(self): - boxes, colors = [], [] - for color in self.box_color_regs: - boxes.extend(self.box_color_regs[color]) - colors.extend([color]*len(self.box_color_regs[color])) - return boxes, colors - - def get_arrows(self): - arrows, colors = [], [] - for color in self.arrow_color_regs: - arrows.extend(self.arrow_color_regs[color]) - colors.extend([color]*len(self.arrow_color_regs[color])) - return arrows, colors - - def get_ellipses(self): - ellipses, colors = [], [] - for color in self.ellipse_color_regs: - ellipses.extend(self.ellipse_color_regs[color]) - colors.extend([color]*len(self.ellipse_color_regs[color])) - return ellipses, colors - - -class ImageScopeXmlWriter: - - def __init__(self, contour_default_is_closure=True, allow_box_y1x1y2x2=True, auto_add_arrow_tail=True): - ''' - :param contour_default_is_closure: 默认输入的轮廓是否是闭合的 - :param allow_box_y1x1y2x2: 是否允许方框坐标为 y1x1y2x2,若设为False,则需要手动保证方框输入坐标为 [左上,右上,右下,左下] 格式坐标 - :param auto_add_arrow_tail: 是否自动给只有箭头没有箭尾自动增加箭尾,如果设为False则需要手动保证箭头标签同时有箭头和箭尾 - ''' - self.contour_default_is_closure = contour_default_is_closure - self.allow_box_y1x1y2x2 = allow_box_y1x1y2x2 - self.auto_add_arrow_tail = auto_add_arrow_tail - # 每个类别的存储处,存储方式:(颜色元组) -> [(数据), (数据), ...] - self.contour_color_regs = {} - self.box_color_regs = {} - self.arrow_color_regs = {} - self.ellipse_color_regs = {} - - def add_contours(self, contours, colors, is_closures=None): - assert is_closures is None or len(is_closures) == len(contours) - assert len(contours) == len(colors) - - if is_closures is None: - is_closures = [self.contour_default_is_closure] * len(contours) - - color_set = set(colors) - for c in color_set: - assert isinstance(c, Tuple) and len(c) == 3 - if c not in self.contour_color_regs: - self.contour_color_regs[c] = [] - - for con, color, clos in zip(contours, colors, is_closures): - assert isinstance(con, np.ndarray) - assert con.ndim == 2 and con.shape[1] == 2 - if clos and np.any(con[0] != con[-1]): - con = np.resize(con, [con.shape[0]+1, con.shape[1]]) - con[-1] = con[0] - self.contour_color_regs[color].append(con) - - def add_boxes(self, boxes, colors): - assert len(boxes) == len(colors) - - color_set = set(colors) - for c in color_set: - assert isinstance(c, Tuple) and len(c) == 3 - if c not in self.box_color_regs: - self.box_color_regs[c] = [] - - for box, color in zip(boxes, colors): - assert isinstance(box, np.ndarray) - if self.allow_box_y1x1y2x2 and box.shape == (4,): - y1, x1, y2, x2 = box - box = [[y1, x1], [y1, x2], [y2, x2], [y2, x1]] - box = np.array(box) - assert box.shape == (4, 2) - self.box_color_regs[color].append(box) - - def add_arrows(self, arrows, colors, auto_tail=True): - assert len(arrows) == len(colors) - - color_set = set(colors) - for c in color_set: - assert isinstance(c, Tuple) and len(c) == 3 - if c not in self.arrow_color_regs: - self.arrow_color_regs[c] = [] - - for arrow, color in zip(arrows, colors): - assert isinstance(arrow, np.ndarray) - if auto_tail and arrow.shape == (2,): - arrow = np.resize(arrow.reshape([1, 2]), [2, 2]) - arrow[1] = arrow[0] + 100 - assert arrow.shape == (2, 2) - self.arrow_color_regs[color].append(arrow) - - def add_ellipses(self, ellipses, colors): - assert len(ellipses) == len(colors) - - color_set = set(colors) - for c in 
color_set: - assert isinstance(c, Tuple) and len(c) == 3 - if c not in self.ellipse_color_regs: - self.ellipse_color_regs[c] = [] - - for ellipse, color in zip(ellipses, colors): - assert isinstance(ellipse, np.ndarray) - assert ellipse.shape == (2, 2) - self.ellipse_color_regs[color].append(ellipse) - - def write(self, file): - Annotations = etree.Element('Annotations', {'MicronsPerPixel': '0'}) - ann_id = 0 - for color_regs, type_id in zip([self.contour_color_regs, self.box_color_regs, self.arrow_color_regs, self.ellipse_color_regs], - [TYPE_CONTOUR, TYPE_BOX, TYPE_ARROW, TYPE_ELLIPSE]): - for color in color_regs.keys(): - ann_id += 1 - LineColor = str(color_tuple_to_int(color)) - Annotation = etree.SubElement(Annotations, 'Annotation', - {'Id': str(ann_id), 'Name': '', 'ReadOnly': '0', 'NameReadOnly': '0', - 'LineColorReadOnly': '0', 'Incremental': '0', 'Type': '4', - 'LineColor': LineColor, 'Visible': '1', 'Selected': '0', - 'MarkupImagePath': '', 'MacroName': ''}) - - Attributes = etree.SubElement(Annotation, 'Attributes') - etree.SubElement(Attributes, 'Attribute', {'Name': '', 'Id': '0', 'Value': ''}) - Regions = etree.SubElement(Annotation, 'Regions') - RegionAttributeHeaders = etree.SubElement(Regions, 'RegionAttributeHeaders') - etree.SubElement(RegionAttributeHeaders, 'AttributeHeader', - {'Id': "9999", 'Name': 'Region', 'ColumnWidth': '-1'}) - etree.SubElement(RegionAttributeHeaders, 'AttributeHeader', - {'Id': "9997", 'Name': 'Length', 'ColumnWidth': '-1'}) - etree.SubElement(RegionAttributeHeaders, 'AttributeHeader', - {'Id': "9996", 'Name': 'Area', 'ColumnWidth': '-1'}) - etree.SubElement(RegionAttributeHeaders, 'AttributeHeader', - {'Id': "9998", 'Name': 'Text', 'ColumnWidth': '-1'}) - - for contour_id, contour in enumerate(color_regs[color]): - Region = etree.SubElement(Regions, 'Region', - {'Id': str(contour_id), 'Type': str(type_id), 'Zoom': '1', 'Selected': '0', - 'ImageLocation': '', 'ImageFocus': '-1', 'Length': '0', 'Area': '0', - 'LengthMicrons': '0', 'AreaMicrons': '0', 'Text': '', 'NegativeROA': '0', - 'InputRegionId': '0', 'Analyze': '1', 'DisplayId': str(contour_id)}) - etree.SubElement(Region, 'Attributes') - Vertices = etree.SubElement(Region, 'Vertices') - for v_yx in contour: - etree.SubElement(Vertices, 'Vertex', {'X': str(v_yx[1]), 'Y': str(v_yx[0]), 'Z': '0'}) - - etree.SubElement(Annotation, 'Plots') - - doc = etree.ElementTree(Annotations) - doc.write(open(file, "wb"), pretty_print=True) - - -if __name__ == '__main__': - print('Testing') - reader = ImageScopeXmlReader("test.xml", keep_arrow_tail=False, use_box_y1x1y2x2=True) - arrows, arrow_colors = reader.get_arrows() - boxes, box_colors = reader.get_boxes() - contours, contour_colors = reader.get_contours() - ellipses, ellipse_colors = reader.get_ellipses() - - writer = ImageScopeXmlWriter() - writer.add_arrows(arrows, arrow_colors) - writer.add_boxes(boxes, box_colors) - writer.add_contours(contours, contour_colors) - writer.add_ellipses(ellipses, ellipse_colors) - writer.write('test2.xml') diff --git a/spaces/uSerNameDDHL/bingo/src/components/chat-history.tsx b/spaces/uSerNameDDHL/bingo/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
-
- 历史记录 -
-
-
-
-
-
-
- -
-

无标题的聊天

-
-

上午1:42

-
- - - - - - - - -
-
-
-
-
-
-
-
- ) -} diff --git a/spaces/ucinlp/autoprompt/autoprompt/label_search.py b/spaces/ucinlp/autoprompt/autoprompt/label_search.py deleted file mode 100644 index 0cc325b56c6f76cc7fdc12c9c3b1f05ff48208ed..0000000000000000000000000000000000000000 --- a/spaces/ucinlp/autoprompt/autoprompt/label_search.py +++ /dev/null @@ -1,162 +0,0 @@ -""" -This is a hacky little attempt using the tools from the trigger creation script to identify a -good set of label strings. The idea is to train a linear classifier over the predict token and -then look at the most similar tokens. -""" -import argparse -import json -import logging -from pathlib import Path - -import torch -import torch.nn.functional as F -from torch.utils.data import DataLoader -from transformers import ( - AutoConfig, AutoModelWithLMHead, AutoTokenizer, BertForMaskedLM, RobertaForMaskedLM -) -from tqdm import tqdm - -import autoprompt.utils as utils -import autoprompt.create_trigger as ct - - -logger = logging.getLogger(__name__) - - -def load_pretrained(model_name): - """ - Loads pretrained HuggingFace config/model/tokenizer, as well as performs required - initialization steps to facilitate working with triggers. - """ - config = AutoConfig.from_pretrained(args.model_name) - model = AutoModelWithLMHead.from_pretrained(args.model_name, config=config) - model.eval() - tokenizer = AutoTokenizer.from_pretrained(args.model_name) - utils.add_task_specific_tokens(tokenizer) - return config, model, tokenizer - - -def get_final_embeddings(model): - if isinstance(model, BertForMaskedLM): - return model.cls.predictions.transform - elif isinstance(model, RobertaForMaskedLM): - return model.lm_head.layer_norm - else: - raise NotImplementedError(f'{model} not currently supported') - - -def get_word_embeddings(model): - if isinstance(model, BertForMaskedLM): - return model.cls.predictions.decoder.weight - elif isinstance(model, RobertaForMaskedLM): - return model.lm_head.decoder.weight - else: - raise NotImplementedError(f'{model} not currently supported') - - -def main(args): - ct.set_seed(args.seed) - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - logger.info('Loading model, tokenizer, etc.') - config, model, tokenizer = load_pretrained(args.model_name) - model.to(device) - final_embeddings = get_final_embeddings(model) - embedding_storage = utils.OutputStorage(final_embeddings) - word_embeddings = get_word_embeddings(model) - - label_map = json.loads(args.label_map) - reverse_label_map = {y: x for x, y in label_map.items()} - templatizer = utils.TriggerTemplatizer( - args.template, - tokenizer, - label_map=label_map, - label_field=args.label_field, - add_special_tokens=False - ) - - # The weights of this projection will help identify the best label words. 
- projection = torch.nn.Linear(config.hidden_size, len(label_map)) - projection.to(device) - - # Obtain the initial trigger tokens and label mapping - if args.initial_trigger: - trigger_ids = tokenizer.encode( - args.initial_trigger, - add_special_tokens=False, - add_prefix_space=True - ) - assert len(trigger_ids) == templatizer.num_trigger_tokens - else: - trigger_ids = [tokenizer.mask_token_id] * templatizer.num_trigger_tokens - trigger_ids = torch.tensor(trigger_ids, device=device).unsqueeze(0) - - logger.info('Loading datasets') - collator = utils.Collator(pad_token_id=tokenizer.pad_token_id) - train_dataset = utils.load_trigger_dataset(args.train, templatizer) - train_loader = DataLoader(train_dataset, batch_size=args.bsz, shuffle=True, collate_fn=collator) - - optimizer = torch.optim.Adam(projection.parameters(), lr=args.lr) - - scores = torch.matmul(projection.weight, word_embeddings.transpose(0, 1)) - scores = F.softmax(scores, dim=0) - for i, row in enumerate(scores): - _, top = row.topk(args.k) - decoded = tokenizer.convert_ids_to_tokens(top) - logger.info(f"Top k for class {reverse_label_map[i]}: {', '.join(decoded)}") - - logger.info('Training') - for i in range(args.iters): - pbar = tqdm(train_loader) - for model_inputs, labels in pbar: - optimizer.zero_grad() - model_inputs = {k: v.to(device) for k, v in model_inputs.items()} - labels = labels.to(device) - trigger_mask = model_inputs.pop('trigger_mask') - predict_mask = model_inputs.pop('predict_mask') - model_inputs = ct.replace_trigger_tokens(model_inputs, trigger_ids, trigger_mask) - with torch.no_grad(): - model(**model_inputs) - embeddings = embedding_storage.get() - predict_embeddings = embeddings.masked_select(predict_mask.unsqueeze(-1)).view(embeddings.size(0), -1) - logits = projection(predict_embeddings) - loss = F.cross_entropy(logits, labels.squeeze(-1)) - loss.backward() - optimizer.step() - pbar.set_description(f'loss: {loss : 0.4f}') - - scores = torch.matmul(projection.weight, word_embeddings.transpose(0, 1)) - scores = F.softmax(scores, dim=0) - for i, row in enumerate(scores): - _, top = row.topk(args.k) - decoded = tokenizer.convert_ids_to_tokens(top) - logger.info(f"Top k for class {reverse_label_map[i]}: {', '.join(decoded)}") - - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--train', type=Path, required=True, help='Train data path') - parser.add_argument('--template', type=str, help='Template string') - parser.add_argument('--label-map', type=str, help='JSON object defining label map') - parser.add_argument('--initial-trigger', type=str, default=None, help='Manual prompt') - parser.add_argument('--label-field', type=str, default='label', - help='Name of the label field') - parser.add_argument('--lr', type=float, default=3e-4, help='Learning rate') - parser.add_argument('--k', type=int, default=50, help='Number of label tokens to print') - parser.add_argument('--bsz', type=int, default=32, help='Batch size') - parser.add_argument('--iters', type=int, default=10, - help='Number of iterations to run label search') - parser.add_argument('--model-name', type=str, default='bert-base-cased', - help='Model name passed to HuggingFace AutoX classes.') - parser.add_argument('--seed', type=int, default=0) - parser.add_argument('--debug', action='store_true') - args = parser.parse_args() - - if args.debug: - level = logging.DEBUG - else: - level = logging.INFO - logging.basicConfig(level=level) - - main(args) diff --git 
a/spaces/unidiffuser-testing/unidiffuser-testing/libs/caption_decoder.py b/spaces/unidiffuser-testing/unidiffuser-testing/libs/caption_decoder.py deleted file mode 100644 index 6386f6673af278ad1c1e4cccfbba99b4d0d57123..0000000000000000000000000000000000000000 --- a/spaces/unidiffuser-testing/unidiffuser-testing/libs/caption_decoder.py +++ /dev/null @@ -1,283 +0,0 @@ -import os -import numpy as np -import torch -from torch import nn -from torch.nn import functional as nnf - -from transformers import GPT2Tokenizer, GPT2LMHeadModel -from transformers import default_data_collator -from transformers import EarlyStoppingCallback - -data_collator = default_data_collator -es = EarlyStoppingCallback(early_stopping_patience=5) -import json -import argparse -from typing import Union, Optional -from collections import OrderedDict - - -# %% model initial -class ClipCaptionModel(nn.Module): - """ - """ - - def get_dummy_token(self, batch_size: int, device: torch.device) -> torch.Tensor: - return torch.zeros(batch_size, self.prefix_length, dtype=torch.int64, device=device) - - def forward(self, tokens: torch.Tensor, prefix: torch.Tensor, mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None): - """ - : param tokens: (Tensor) [N x max_seq_len] eg. [4 X 33] - : param prefix: (Tensor) [N x prefix_length x 768] eg. [4 x 77 x 768] - : param mask: (Tensor) [N x (prefix_length + max_seq_len) x 768] eg. [4 x 110 x768] - - : attribute embedding_text: (Tensor) [N x max_seq_len x 768] eg. [4 x 33 x 768] - : attribute embedding_cat: (Tensor) [N x (prefix_length + max_seq_len) x 768] eg. [4 x 110 x 768] - """ - embedding_text = self.gpt.transformer.wte(tokens) - hidden = self.encode_prefix(prefix) - prefix = self.decode_prefix(hidden) - embedding_cat = torch.cat((prefix, embedding_text), dim=1) - - if labels is not None: - dummy_token = self.get_dummy_token(tokens.shape[0], tokens.device) - labels = torch.cat((dummy_token, tokens), dim=1) - out = self.gpt(inputs_embeds=embedding_cat, labels=labels, attention_mask=mask) - if self.hidden_dim is not None: - return out, hidden - else: - return out - - def encode_decode_prefix(self, prefix): - return self.decode_prefix(self.encode_prefix(prefix)) - - def __init__(self, prefix_length: int, hidden_dim=None): - super(ClipCaptionModel, self).__init__() - self.prefix_length = prefix_length - eos = '<|EOS|>' - special_tokens_dict = {'eos_token': eos} - base_tokenizer = GPT2Tokenizer.from_pretrained('gpt2') - base_tokenizer.add_special_tokens(special_tokens_dict) - self.gpt = GPT2LMHeadModel.from_pretrained('gpt2', eos_token_id=base_tokenizer.eos_token_id) - self.gpt.resize_token_embeddings(len(base_tokenizer)) - - self.hidden_dim = hidden_dim - self.encode_prefix = nn.Linear(768, hidden_dim) if hidden_dim is not None else nn.Identity() - self.decode_prefix = nn.Linear(hidden_dim, 768) if hidden_dim is not None else nn.Identity() - - - - -def load_model(config_path: str, epoch_or_latest: Union[str, int] = '_latest'): - with open(config_path) as f: - config = json.load(f) - parser = argparse.ArgumentParser() - parser.set_defaults(**config) - args = parser.parse_args() - if type(epoch_or_latest) is int: - epoch_or_latest = f"-{epoch_or_latest:03d}" - model_path = os.path.join(args.out_dir, f"{args.prefix}{epoch_or_latest}.pt") - model = ClipCaptionModel(args.prefix_length) - if os.path.isfile(model_path): - print(f"loading model from {model_path}") - model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - else: - 
print(f"{model_path} is not exist") - return model, parser - - -def generate_beam( - model, - tokenizer, - beam_size: int = 5, - prompt=None, - embed=None, - entry_length=67, - temperature=1.0, - stop_token: str = '<|EOS|>', -): - model.eval() - stop_token_index = tokenizer.encode(stop_token)[0] - tokens = None - scores = None - device = next(model.parameters()).device - seq_lengths = torch.ones(beam_size, device=device) - is_stopped = torch.zeros(beam_size, device=device, dtype=torch.bool) - with torch.no_grad(): - if embed is not None: - generated = embed - else: - if tokens is None: - tokens = torch.tensor(tokenizer.encode(prompt)) - tokens = tokens.unsqueeze(0).to(device) - generated = model.gpt.transformer.wte(tokens) - # pbar = tqdm(range(entry_length)) - # pbar.set_description("generating text ...") - for i in range(entry_length): - # print(generated.shape) - outputs = model.gpt(inputs_embeds=generated) - logits = outputs.logits - logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - logits = logits.softmax(-1).log() - if scores is None: - scores, next_tokens = logits.topk(beam_size, -1) - generated = generated.expand(beam_size, *generated.shape[1:]) - next_tokens, scores = next_tokens.permute(1, 0), scores.squeeze(0) - if tokens is None: - tokens = next_tokens - else: - tokens = tokens.expand(beam_size, *tokens.shape[1:]) - tokens = torch.cat((tokens, next_tokens), dim=1) - else: - logits[is_stopped] = -float(np.inf) - logits[is_stopped, 0] = 0 - scores_sum = scores[:, None] + logits - seq_lengths[~is_stopped] += 1 - scores_sum_average = scores_sum / seq_lengths[:, None] - scores_sum_average, next_tokens = scores_sum_average.view(-1).topk( - beam_size, -1 - ) - next_tokens_source = next_tokens // scores_sum.shape[1] - seq_lengths = seq_lengths[next_tokens_source] - next_tokens = next_tokens % scores_sum.shape[1] - next_tokens = next_tokens.unsqueeze(1) - tokens = tokens[next_tokens_source] - tokens = torch.cat((tokens, next_tokens), dim=1) - generated = generated[next_tokens_source] - scores = scores_sum_average * seq_lengths - is_stopped = is_stopped[next_tokens_source] - next_token_embed = model.gpt.transformer.wte(next_tokens.squeeze()).view( - generated.shape[0], 1, -1 - ) - generated = torch.cat((generated, next_token_embed), dim=1) - is_stopped = is_stopped + next_tokens.eq(stop_token_index).squeeze() - if is_stopped.all(): - break - scores = scores / seq_lengths - output_list = tokens.cpu().numpy() - output_texts = [ - tokenizer.decode(output[: int(length)], skip_special_tokens=True) - for output, length in zip(output_list, seq_lengths) - ] - order = scores.argsort(descending=True) - output_texts = [output_texts[i] for i in order] - model.train() - return output_texts - - -def generate2( - model, - tokenizer, - tokens=None, - prompt=None, - embed=None, - entry_count=1, - entry_length=67, # maximum number of words - top_p=0.8, - temperature=1.0, - stop_token: str = '<|EOS|>', -): - model.eval() - generated_num = 0 - generated_list = [] - stop_token_index = tokenizer.encode(stop_token)[0] - filter_value = -float("Inf") - device = next(model.parameters()).device - - with torch.no_grad(): - - for entry_idx in range(entry_count): - if embed is not None: - generated = embed - else: - if tokens is None: - tokens = torch.tensor(tokenizer.encode(prompt)) - tokens = tokens.unsqueeze(0).to(device) - - generated = model.gpt.transformer.wte(tokens) - - for i in range(entry_length): - - outputs = model.gpt(inputs_embeds=generated) - logits = outputs.logits - logits = 
logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - cumulative_probs = torch.cumsum( - nnf.softmax(sorted_logits, dim=-1), dim=-1 - ) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[ - ..., :-1 - ].clone() - sorted_indices_to_remove[..., 0] = 0 - - indices_to_remove = sorted_indices[sorted_indices_to_remove] - logits[:, indices_to_remove] = filter_value - next_token = torch.argmax(logits, -1).unsqueeze(0) - next_token_embed = model.gpt.transformer.wte(next_token) - if tokens is None: - tokens = next_token - else: - tokens = torch.cat((tokens, next_token), dim=1) - generated = torch.cat((generated, next_token_embed), dim=1) - if stop_token_index == next_token.item(): - break - - output_list = list(tokens.squeeze().cpu().numpy()) - output_text = tokenizer.decode(output_list) - generated_list.append(output_text) - - return generated_list[0] - - -class CaptionDecoder(object): - def __init__(self, device, pretrained_path, hidden_dim=-1): - if hidden_dim < 0: - hidden_dim = None - # tokenizer initialize - eos = '<|EOS|>' - special_tokens_dict = {'eos_token': eos} - self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2') - self.tokenizer.add_special_tokens(special_tokens_dict) - - # model initialize - feature_length = 77 - # modelFile = "assets/caption_decoder/coco_v2_latest.pt" - self.caption_model = ClipCaptionModel(feature_length, hidden_dim=hidden_dim) - # print("Load Model...") - ckpt = torch.load(pretrained_path, map_location='cpu') - state_dict = OrderedDict() - for k, v in ckpt.items(): - new_k = k[7:] - state_dict[new_k] = v - mk, uk = self.caption_model.load_state_dict(state_dict, strict=False) - assert len(mk) == 0 - assert all([name.startswith('clip') for name in uk]) - self.caption_model.eval() - self.caption_model.to(device) - self.caption_model.requires_grad_(False) - self.device = device - - def encode_prefix(self, features): - return self.caption_model.encode_prefix(features) - - def generate_captions(self, features): # the low dimension representation of clip feature - """ - generate captions given features - : param features : (tensor([B x L x D])) - : return generated_text: (list([L])) - """ - - # generate config - use_beam_search = True - - features = torch.split(features, 1, dim=0) - generated_captions = [] - with torch.no_grad(): - for feature in features: - feature = self.caption_model.decode_prefix(feature.to(self.device)) # back to the clip feature - if use_beam_search: - generated_captions.append(generate_beam(self.caption_model, self.tokenizer, embed=feature)[0]) - else: - generated_captions.append(generate2(self.caption_model, self.tokenizer, embed=feature)) - return generated_captions diff --git a/spaces/unity/ML-Agents-Walker/README.md b/spaces/unity/ML-Agents-Walker/README.md deleted file mode 100644 index 02c2696846129bc300bd042b766c856012277fef..0000000000000000000000000000000000000000 --- a/spaces/unity/ML-Agents-Walker/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: ML Agents Walker -emoji: 🚶 -colorFrom: pink -colorTo: yellow -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/cluster/__init__.py b/spaces/usbethFlerru/sovits-modelsV2/cluster/__init__.py deleted file mode 100644 index 
f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/models/yolov8.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/models/yolov8.md deleted file mode 100644 index 8907248cdf453c83ed33b23b76a73b65aa632c6b..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/models/yolov8.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -comments: true -description: Learn about YOLOv8's pre-trained weights supporting detection, instance segmentation, pose, and classification tasks. Get performance details. -keywords: YOLOv8, real-time object detection, object detection, deep learning, machine learning ---- - -# YOLOv8 - -## Overview - -YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge performance in terms of accuracy and speed. Building upon the advancements of previous YOLO versions, YOLOv8 introduces new features and optimizations that make it an ideal choice for various object detection tasks in a wide range of applications. - -![Ultralytics YOLOv8](https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/yolo-comparison-plots.png) - -## Key Features - -- **Advanced Backbone and Neck Architectures:** YOLOv8 employs state-of-the-art backbone and neck architectures, resulting in improved feature extraction and object detection performance. -- **Anchor-free Split Ultralytics Head:** YOLOv8 adopts an anchor-free split Ultralytics head, which contributes to better accuracy and a more efficient detection process compared to anchor-based approaches. -- **Optimized Accuracy-Speed Tradeoff:** With a focus on maintaining an optimal balance between accuracy and speed, YOLOv8 is suitable for real-time object detection tasks in diverse application areas. -- **Variety of Pre-trained Models:** YOLOv8 offers a range of pre-trained models to cater to various tasks and performance requirements, making it easier to find the right model for your specific use case. 
- -## Supported Tasks - -| Model Type | Pre-trained Weights | Task | -|-------------|------------------------------------------------------------------------------------------------------------------|-----------------------| -| YOLOv8 | `yolov8n.pt`, `yolov8s.pt`, `yolov8m.pt`, `yolov8l.pt`, `yolov8x.pt` | Detection | -| YOLOv8-seg | `yolov8n-seg.pt`, `yolov8s-seg.pt`, `yolov8m-seg.pt`, `yolov8l-seg.pt`, `yolov8x-seg.pt` | Instance Segmentation | -| YOLOv8-pose | `yolov8n-pose.pt`, `yolov8s-pose.pt`, `yolov8m-pose.pt`, `yolov8l-pose.pt`, `yolov8x-pose.pt` ,`yolov8x-pose-p6` | Pose/Keypoints | -| YOLOv8-cls | `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, `yolov8l-cls.pt`, `yolov8x-cls.pt` | Classification | - -## Supported Modes - -| Mode | Supported | -|------------|--------------------| -| Inference | :heavy_check_mark: | -| Validation | :heavy_check_mark: | -| Training | :heavy_check_mark: | - -!!! Performance - - === "Detection" - - | Model | size
(pixels) | mAPval<br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>
(B) | - | ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- | - | [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 | - | [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 | - | [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 | - | [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 | - | [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 | - - === "Segmentation" - - | Model | size
(pixels) | mAPbox<br>50-95 | mAPmask<br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>
(B) | - | -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- | - | [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 | - | [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 | - | [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 | - | [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 | - | [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 | - - === "Classification" - - | Model | size
(pixels) | acc<br>top1 | acc<br>top5 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>
(B) at 640 | - | -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ | - | [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 | - | [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 | - | [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 | - | [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 | - | [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 | - - === "Pose" - - | Model | size
(pixels) | mAPpose<br>50-95 | mAPpose<br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>
(B) | - | ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- | - | [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 | - | [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 | - | [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 | - | [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 | - | [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 | - | [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 | - -## Usage - -You can use YOLOv8 for object detection tasks using the Ultralytics pip package. The following is a sample code snippet showing how to use YOLOv8 models for inference: - -```python -from ultralytics import YOLO - -# Load the model -model = YOLO('yolov8n.pt') # load a pretrained model - -# Perform inference -results = model('image.jpg') - -# Print the results -results.print() -``` - -## Citation - -If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format: - -```bibtex -@software{yolov8_ultralytics, - author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu}, - title = {Ultralytics YOLOv8}, - version = {8.0.0}, - year = {2023}, - url = {https://github.com/ultralytics/ultralytics}, - orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069}, - license = {AGPL-3.0} -} -``` - -Please note that the DOI is pending and will be added to the citation once it is available. The usage of the software is in accordance with the AGPL-3.0 license. \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/utils/gmc.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/utils/gmc.md deleted file mode 100644 index 6441f071d15c824c69c766391a4b9c1bae29bbfd..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/utils/gmc.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: '"Track Google Marketing Campaigns in GMC with Ultralytics Tracker. Learn to set up and use GMC for detailed analytics. Get started now."' -keywords: Ultralytics, YOLO, object detection, tracker, optimization, models, documentation ---- - -## GMC ---- -### ::: ultralytics.tracker.utils.gmc.GMC -
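
The Supported Tasks table in the yolov8.md file above lists separate pre-trained weights for detection, instance segmentation, pose, and classification, but its Usage snippet only demonstrates the detection weights. As a minimal sketch (not part of the original file), the same `YOLO` class can be pointed at the other weight files from that table; the weight filenames come from the table, while the result attributes used in the comments (`masks`, `keypoints`, `probs`) are assumptions based on the ultralytics `Results` API rather than anything stated in the deleted documentation, and `image.jpg` is a placeholder path.

```python
from ultralytics import YOLO

# Instance segmentation: same call pattern as detection, weights taken from the table above
seg_model = YOLO('yolov8n-seg.pt')
seg_results = seg_model('image.jpg')
print(seg_results[0].masks)      # per-instance masks (assumed attribute; None if nothing is detected)

# Pose/keypoints
pose_model = YOLO('yolov8n-pose.pt')
pose_results = pose_model('image.jpg')
print(pose_results[0].keypoints) # detected keypoints (assumed attribute)

# Classification
cls_model = YOLO('yolov8n-cls.pt')
cls_results = cls_model('image.jpg')
print(cls_results[0].probs)      # class probabilities (assumed attribute)
```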

diff --git a/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/utils/amg.py b/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/utils/amg.py deleted file mode 100644 index 3a137778e45c464c079658ecb87ec53270e789f7..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/utils/amg.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -import math -from copy import deepcopy -from itertools import product -from typing import Any, Dict, Generator, ItemsView, List, Tuple - - -class MaskData: - """ - A structure for storing masks and their related data in batched format. - Implements basic filtering and concatenation. - """ - - def __init__(self, **kwargs) -> None: - for v in kwargs.values(): - assert isinstance( - v, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." - self._stats = dict(**kwargs) - - def __setitem__(self, key: str, item: Any) -> None: - assert isinstance( - item, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." - self._stats[key] = item - - def __delitem__(self, key: str) -> None: - del self._stats[key] - - def __getitem__(self, key: str) -> Any: - return self._stats[key] - - def items(self) -> ItemsView[str, Any]: - return self._stats.items() - - def filter(self, keep: torch.Tensor) -> None: - for k, v in self._stats.items(): - if v is None: - self._stats[k] = None - elif isinstance(v, torch.Tensor): - self._stats[k] = v[torch.as_tensor(keep, device=v.device)] - elif isinstance(v, np.ndarray): - self._stats[k] = v[keep.detach().cpu().numpy()] - elif isinstance(v, list) and keep.dtype == torch.bool: - self._stats[k] = [a for i, a in enumerate(v) if keep[i]] - elif isinstance(v, list): - self._stats[k] = [v[i] for i in keep] - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def cat(self, new_stats: "MaskData") -> None: - for k, v in new_stats.items(): - if k not in self._stats or self._stats[k] is None: - self._stats[k] = deepcopy(v) - elif isinstance(v, torch.Tensor): - self._stats[k] = torch.cat([self._stats[k], v], dim=0) - elif isinstance(v, np.ndarray): - self._stats[k] = np.concatenate([self._stats[k], v], axis=0) - elif isinstance(v, list): - self._stats[k] = self._stats[k] + deepcopy(v) - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def to_numpy(self) -> None: - for k, v in self._stats.items(): - if isinstance(v, torch.Tensor): - self._stats[k] = v.detach().cpu().numpy() - - -def is_box_near_crop_edge( - boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0 -) -> torch.Tensor: - """Filter masks at the edge of a crop, but not at the edge of the original image.""" - crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) - orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device) - boxes = uncrop_boxes_xyxy(boxes, crop_box).float() - near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) - near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0) - near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) - return 
torch.any(near_crop_edge, dim=1) - - -def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: - box_xywh = deepcopy(box_xyxy) - box_xywh[2] = box_xywh[2] - box_xywh[0] - box_xywh[3] = box_xywh[3] - box_xywh[1] - return box_xywh - - -def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: - assert len(args) > 0 and all( - len(a) == len(args[0]) for a in args - ), "Batched iteration must have inputs of all the same size." - n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) - for b in range(n_batches): - yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args] - - -def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: - """ - Encodes masks to an uncompressed RLE, in the format expected by - pycoco tools. - """ - # Put in fortran order and flatten h,w - b, h, w = tensor.shape - tensor = tensor.permute(0, 2, 1).flatten(1) - - # Compute change indices - diff = tensor[:, 1:] ^ tensor[:, :-1] - change_indices = diff.nonzero() - - # Encode run length - out = [] - for i in range(b): - cur_idxs = change_indices[change_indices[:, 0] == i, 1] - cur_idxs = torch.cat( - [ - torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), - cur_idxs + 1, - torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), - ] - ) - btw_idxs = cur_idxs[1:] - cur_idxs[:-1] - counts = [] if tensor[i, 0] == 0 else [0] - counts.extend(btw_idxs.detach().cpu().tolist()) - out.append({"size": [h, w], "counts": counts}) - return out - - -def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: - """Compute a binary mask from an uncompressed RLE.""" - h, w = rle["size"] - mask = np.empty(h * w, dtype=bool) - idx = 0 - parity = False - for count in rle["counts"]: - mask[idx : idx + count] = parity - idx += count - parity ^= True - mask = mask.reshape(w, h) - return mask.transpose() # Put in C order - - -def area_from_rle(rle: Dict[str, Any]) -> int: - return sum(rle["counts"][1::2]) - - -def calculate_stability_score( - masks: torch.Tensor, mask_threshold: float, threshold_offset: float -) -> torch.Tensor: - """ - Computes the stability score for a batch of masks. The stability - score is the IoU between the binary masks obtained by thresholding - the predicted mask logits at high and low values. - """ - # One mask is always contained inside the other. 
- # Save memory by preventing unnecesary cast to torch.int64 - intersections = ( - (masks > (mask_threshold + threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - unions = ( - (masks > (mask_threshold - threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - return intersections / unions - - -def build_point_grid(n_per_side: int) -> np.ndarray: - """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" - offset = 1 / (2 * n_per_side) - points_one_side = np.linspace(offset, 1 - offset, n_per_side) - points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) - points_y = np.tile(points_one_side[:, None], (1, n_per_side)) - points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) - return points - - -def build_all_layer_point_grids( - n_per_side: int, n_layers: int, scale_per_layer: int -) -> List[np.ndarray]: - """Generates point grids for all crop layers.""" - points_by_layer = [] - for i in range(n_layers + 1): - n_points = int(n_per_side / (scale_per_layer**i)) - points_by_layer.append(build_point_grid(n_points)) - return points_by_layer - - -def generate_crop_boxes( - im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float -) -> Tuple[List[List[int]], List[int]]: - """ - Generates a list of crop boxes of different sizes. Each layer - has (2**i)**2 boxes for the ith layer. - """ - crop_boxes, layer_idxs = [], [] - im_h, im_w = im_size - short_side = min(im_h, im_w) - - # Original image - crop_boxes.append([0, 0, im_w, im_h]) - layer_idxs.append(0) - - def crop_len(orig_len, n_crops, overlap): - return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) - - for i_layer in range(n_layers): - n_crops_per_side = 2 ** (i_layer + 1) - overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) - - crop_w = crop_len(im_w, n_crops_per_side, overlap) - crop_h = crop_len(im_h, n_crops_per_side, overlap) - - crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] - crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] - - # Crops in XYWH format - for x0, y0 in product(crop_box_x0, crop_box_y0): - box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] - crop_boxes.append(box) - layer_idxs.append(i_layer + 1) - - return crop_boxes, layer_idxs - - -def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) - # Check if boxes has a channel dimension - if len(boxes.shape) == 3: - offset = offset.unsqueeze(1) - return boxes + offset - - -def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0]], device=points.device) - # Check if points has a channel dimension - if len(points.shape) == 3: - offset = offset.unsqueeze(1) - return points + offset - - -def uncrop_masks( - masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int -) -> torch.Tensor: - x0, y0, x1, y1 = crop_box - if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: - return masks - # Coordinate transform masks - pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) - pad = (x0, pad_x - x0, y0, pad_y - y0) - return torch.nn.functional.pad(masks, pad, value=0) - - -def remove_small_regions( - mask: np.ndarray, area_thresh: float, mode: str -) -> Tuple[np.ndarray, bool]: - """ - Removes small disconnected regions and holes in a mask. 
Returns the - mask and an indicator of if the mask has been modified. - """ - import cv2 # type: ignore - - assert mode in ["holes", "islands"] - correct_holes = mode == "holes" - working_mask = (correct_holes ^ mask).astype(np.uint8) - n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) - sizes = stats[:, -1][1:] # Row 0 is background label - small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] - if len(small_regions) == 0: - return mask, False - fill_labels = [0] + small_regions - if not correct_holes: - fill_labels = [i for i in range(n_labels) if i not in fill_labels] - # If every region is below threshold, keep largest - if len(fill_labels) == 0: - fill_labels = [int(np.argmax(sizes)) + 1] - mask = np.isin(regions, fill_labels) - return mask, True - - -def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: - from pycocotools import mask as mask_utils # type: ignore - - h, w = uncompressed_rle["size"] - rle = mask_utils.frPyObjects(uncompressed_rle, h, w) - rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json - return rle - - -def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: - """ - Calculates boxes in XYXY format around masks. Return [0,0,0,0] for - an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. - """ - # torch.max below raises an error on empty inputs, just skip in this case - if torch.numel(masks) == 0: - return torch.zeros(*masks.shape[:-2], 4, device=masks.device) - - # Normalize shape to CxHxW - shape = masks.shape - h, w = shape[-2:] - if len(shape) > 2: - masks = masks.flatten(0, -3) - else: - masks = masks.unsqueeze(0) - - # Get top and bottom edges - in_height, _ = torch.max(masks, dim=-1) - in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] - bottom_edges, _ = torch.max(in_height_coords, dim=-1) - in_height_coords = in_height_coords + h * (~in_height) - top_edges, _ = torch.min(in_height_coords, dim=-1) - - # Get left and right edges - in_width, _ = torch.max(masks, dim=-2) - in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] - right_edges, _ = torch.max(in_width_coords, dim=-1) - in_width_coords = in_width_coords + w * (~in_width) - left_edges, _ = torch.min(in_width_coords, dim=-1) - - # If the mask is empty the right edge will be to the left of the left edge. 
- # Replace these boxes with [0, 0, 0, 0] - empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) - out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) - out = out * (~empty_filter).unsqueeze(-1) - - # Return to original shape - if len(shape) > 2: - out = out.reshape(*shape[:-2], 4) - else: - out = out[0] - - return out diff --git a/spaces/viniods/speech_recognition/app.py b/spaces/viniods/speech_recognition/app.py deleted file mode 100644 index 7d048fd9a71ee14f1bf2d1d568f7f6f25f383d48..0000000000000000000000000000000000000000 --- a/spaces/viniods/speech_recognition/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio as gr -import speech_recognition as sr -from pydub import AudioSegment -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor -import os -import torch - -tokenizer = Wav2Vec2Processor.from_pretrained('jonatasgrosman/wav2vec2-large-xlsr-53-portuguese') -model = Wav2Vec2ForCTC.from_pretrained('jonatasgrosman/wav2vec2-large-xlsr-53-portuguese') -# Load the pre-trained speech recognition model -recognizer = sr.Recognizer() - -def recognize_speech(audio_path): - print(audio_path) - # Perform speech recognition on the captured audio - try: - clip = AudioSegment.from_file(audio_path) - clip = clip.set_frame_rate(16000) - print(clip) - x = torch.FloatTensor(clip.get_array_of_samples()) - inputs = tokenizer(x, sampling_rate=16000, return_tensors='pt', padding='longest').input_values - logits = model(inputs).logits - tokens = torch.argmax(logits, axis=-1) - text = tokenizer.batch_decode(tokens) - return str(text).lower() - except sr.UnknownValueError: - return "Could not understand the audio." - except sr.RequestError as e: - return f"Error accessing the Google Speech Recognition service: {e}" - -# Create the Gradio interface with microphone input -audio_recognizer_interface = gr.Interface( - fn=recognize_speech, - inputs=gr.inputs.Audio(source="microphone", type="filepath", label="Speak into the microphone..."), - outputs="text", - title="Real-time Speech Recognition" -) - -# Run the interface -audio_recognizer_interface.launch() diff --git a/spaces/visakh7843/Sheet_Music_Generator/MC/markov_chain.py b/spaces/visakh7843/Sheet_Music_Generator/MC/markov_chain.py deleted file mode 100644 index 87aae3dfb4ce5923af0cd9f7ee3a87d3b0a5b2d5..0000000000000000000000000000000000000000 --- a/spaces/visakh7843/Sheet_Music_Generator/MC/markov_chain.py +++ /dev/null @@ -1,246 +0,0 @@ - -import os,glob -from matplotlib.pyplot import pie -from mido import MidiFile -import datetime -import numpy as np -import pandas as pd -import subprocess -from music21 import * -from music21 import converter -from mido import Message, MidiFile, MidiTrack, MetaMessage - -#number of notes to be used for prediction -window = 3 - -#num of notes to generate -#TODO: change this to accept values according to user -num_notes = 100 - -#midi ticks per quarter note, indicates tempo of track -quarter_note_ticks = 480 - -#accepted note durations: ranges from 16th note to whole dotted notes -accepeted_lengths = [0.25,0.375,0.5,0.75,1,1.5,2.0,3.0,4.0] -#Finds all absolute paths in directory -#https://stackoverflow.com/questions/9816816/get-absolute-paths-of-all-files-in-a-directory -def abs_paths(dir): - for dir_path,_,filenames in os.walk(dir): - for f in filenames: - yield os.path.abspath(os.path.join(dir_path, f)) -def pitch_to_int(nameWithOctave): - # letter names with corresponding values - letter_dict = {'C':0,'D':2,'E':4,'F':5,'G':7,'A':9,'B':11} - # parse characters from strings 
- chars = list(nameWithOctave) - # convert octave number to corresponding midi value - octave = 12*(int(chars[-1])+1) - # select value from letter_dict using first character - note = letter_dict[chars[0]] - # set accidental value - accidental = 0 - # does accidental exist? - if not len(chars)==2: - # increase (sharp) or decrease (flat) value by one - accidental = 1 if chars[1]=='#' else -1 - # return sum of these numbers, middle C(4) == 60 - return octave + note + accidental -def get_pngs(path): - filelist=os.listdir(path) - for fichier in filelist[:]: # filelist[:] makes a copy of filelist. - if not(fichier.endswith(".png")): - filelist.remove(fichier) - newlist = [path+'/'+x for x in filelist] #making it cwd - return newlist -def generate_notes(csv_file): - df_notes = pd.read_csv(csv_file) - print(df_notes.shape) - # define arrays for generated notes and durations - gen_notes = [] - gen_durations = [] - # define note and duration feature columns based on names - features = df_notes.columns[:-2] - note_features = [s for s in features if "note" in s] - duration_features = [s for s in features if "duration" in s] - # define target columns - note_target = df_notes.columns[-2] - duration_target = df_notes.columns[-1] - - # sample random row from dataframe and define start notes and durations - initial_sample = df_notes.sample() - start_notes = list(initial_sample[note_features].values[0]) - start_durations = list(initial_sample[duration_features].values[0]) - # append starting notes and durations to gen arrays - for note in start_notes: - gen_notes.append(int(note)) - for duration in start_durations: - gen_durations.append(duration) - - for i in range(num_notes) : - rows = df_notes - for i in range(window-1): - rows = rows.loc[df_notes[note_features[i]] == start_notes[i]] - rows = rows.loc[df_notes[duration_features[i]]== start_durations[i]] - - #This gives the same effect as probability. - # We effectively sample from a list which might have more than 1 C note, Hence increasing its probability - #Sometime, The start_notes and durations could be selected in such a way that we cannot generate any further notes uptill num_notes, - #This means there maybe some combinations of notes such as 76,68 which are not there in the dataset and hence cannot be sampled. 
- #In such cases, the only way about it would be to reset the start notes, because we cannot sample from an empty row - #So here we check if any rows which we ta - if len(rows): - next_sample = rows.sample() - next_note = next_sample[note_target].values[0] - next_duration = next_sample[duration_target].values[0] - gen_notes.append(int(next_note)) - gen_durations.append(next_duration) - - start_notes.pop(0) - start_durations.pop(0) - - start_notes.append(next_note) - start_durations.append(next_duration) - else: - #Received empty row - # print("Exiting!!!!!!") - #restarting again to get new start notes - return [],[] - - # print(rows[note_target].value_counts(normalize=True)) - # print(rows[duration_target].value_counts(normalize=True)) - - return gen_notes, gen_durations - -#MAIN FUNCTION -def main_markov(time_sign): - command = "rm -r MC/gen_songs_midi/*" - subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE).communicate() - # https://stackoverflow.com/questions/49462107/how-can-i-get-all-piano-parts-from-a-music21-score - if not os.path.exists('tracks'): - os.mkdir('tracks') - os.mkdir('tracks/3_4') - os.mkdir('tracks/4_4') - os.mkdir('tracks/6_8') - os.mkdir('tracks/2_2') - os.mkdir('tracks/2_4') - i = 0 - #Parse midi files into tracks folder - - - for path in abs_paths('data'): - print(path) - piece = converter.parse(path) - #print(list(piece.parts)) - # for part in piece.parts: - part_notes = [] - l = piece.getTimeSignatures() - - # s = l.show('text') #prints piece time signature - - time_sig_num = piece.recurse().getElementsByClass(meter.TimeSignature)[0].numerator - time_sig_denum = piece.recurse().getElementsByClass(meter.TimeSignature)[0].denominator #gets time signature for piece - #print(piece[meter.TimeSignature][0]) - #print(piece['Measure'][0].timeSignature) - - for el in piece.recurse().notes: - # print(el.offset, el, el.activeSite) - # print(el.beatDuration) - # print(el.quarterLength) - # print(el._getTimeSignatureForBeat) - # print(el.beat) - - # if getattr(el, 'isNote', None): - # print("this method works") - # print(el.nameWithOctave) - #get all note messages from all tracks - # for event in el: - # for y in event.contextSites(): - # if y[0] is part: - # offset = y[1] - - if getattr(el, 'isNote', None) and el.isNote: - # print('note in {}'.format(el)) - #check if note is in accepted length - #convert string to numerical value - if el.quarterLength in accepeted_lengths: - part_notes.append([pitch_to_int(el.nameWithOctave), el.quarterLength]) - if not len(part_notes) == 0: - np.save('tracks/'+str(time_sig_num)+'_'+str(time_sig_denum)+'/'+'{}.npy'.format(i), np.array(part_notes)) - i+=1 - print('Number of tracks parsed: {}'.format(i)) - if not glob.glob('MC/prepared*.csv'): - sigs = ['3_4','4_4','6_8','2_2','2_4'] - columns = [] - for i in range(window): - columns.append('note' + str(i)) - columns.append('duration' + str(i)) - for sig in sigs: - df_notes = pd.DataFrame(columns=columns) - # append segments from each track as rows to dataframe - for path in abs_paths('tracks/'+sig): - notes = np.load(path) - for i in range(len(notes)-window): - # take every x notes and durations - segment = notes[i:i+window].flatten() - # make into pd.Series row - row = pd.Series(segment, index=df_notes.columns) - # append row to dataframe - df_notes = df_notes.append(row, ignore_index=True) - # export - df_notes.to_csv('prepared'+sig+'.csv', index=False) - time_signature = str(time_sign).split('/') - - success = False - gen_notes =[] - gen_durations =[] - - 
#Retry mechanism - csv_path = 'MC/prepared'+time_signature[0]+'_'+time_signature[1]+'.csv' - while len(gen_notes) max_n_lights[0] or - len(scene.spot_light_nodes) > max_n_lights[1] or - len(scene.point_light_nodes) > max_n_lights[2]): - light_nodes = self._sorted_nodes_by_distance( - scene, scene.light_nodes, node - ) - - for n in light_nodes: - light = n.light - pose = scene.get_pose(n) - position = pose[:3,3] - direction = -pose[:3,2] - - if isinstance(light, PointLight): - if plc == max_n_lights[2]: - continue - b = 'point_lights[{}].'.format(plc) - plc += 1 - shadow = bool(flags & RenderFlags.SHADOWS_POINT) - program.set_uniform(b + 'position', position) - elif isinstance(light, SpotLight): - if slc == max_n_lights[1]: - continue - b = 'spot_lights[{}].'.format(slc) - slc += 1 - shadow = bool(flags & RenderFlags.SHADOWS_SPOT) - las = 1.0 / max(0.001, np.cos(light.innerConeAngle) - - np.cos(light.outerConeAngle)) - lao = -np.cos(light.outerConeAngle) * las - program.set_uniform(b + 'direction', direction) - program.set_uniform(b + 'position', position) - program.set_uniform(b + 'light_angle_scale', las) - program.set_uniform(b + 'light_angle_offset', lao) - else: - if dlc == max_n_lights[0]: - continue - b = 'directional_lights[{}].'.format(dlc) - dlc += 1 - shadow = bool(flags & RenderFlags.SHADOWS_DIRECTIONAL) - program.set_uniform(b + 'direction', direction) - - program.set_uniform(b + 'color', light.color) - program.set_uniform(b + 'intensity', light.intensity) - # if light.range is not None: - # program.set_uniform(b + 'range', light.range) - # else: - # program.set_uniform(b + 'range', 0) - - if shadow: - self._bind_texture(light.shadow_texture, - b + 'shadow_map', program) - if not isinstance(light, PointLight): - V, P = self._get_light_cam_matrices(scene, n, flags) - program.set_uniform(b + 'light_matrix', P.dot(V)) - else: - raise NotImplementedError( - 'Point light shadows not implemented' - ) - - def _sorted_mesh_nodes(self, scene): - cam_loc = scene.get_pose(scene.main_camera_node)[:3,3] - solid_nodes = [] - trans_nodes = [] - for node in scene.mesh_nodes: - mesh = node.mesh - if mesh.is_transparent: - trans_nodes.append(node) - else: - solid_nodes.append(node) - - # TODO BETTER SORTING METHOD - trans_nodes.sort( - key=lambda n: -np.linalg.norm(scene.get_pose(n)[:3,3] - cam_loc) - ) - solid_nodes.sort( - key=lambda n: -np.linalg.norm(scene.get_pose(n)[:3,3] - cam_loc) - ) - - return solid_nodes + trans_nodes - - def _sorted_nodes_by_distance(self, scene, nodes, compare_node): - nodes = list(nodes) - compare_posn = scene.get_pose(compare_node)[:3,3] - nodes.sort(key=lambda n: np.linalg.norm( - scene.get_pose(n)[:3,3] - compare_posn) - ) - return nodes - - ########################################################################### - # Context Management - ########################################################################### - - def _update_context(self, scene, flags): - - # Update meshes - scene_meshes = scene.meshes - - # Add new meshes to context - for mesh in scene_meshes - self._meshes: - for p in mesh.primitives: - p._add_to_context() - - # Remove old meshes from context - for mesh in self._meshes - scene_meshes: - for p in mesh.primitives: - p.delete() - - self._meshes = scene_meshes.copy() - - # Update mesh textures - mesh_textures = set() - for m in scene_meshes: - for p in m.primitives: - mesh_textures |= p.material.textures - - # Add new textures to context - for texture in mesh_textures - self._mesh_textures: - texture._add_to_context() - - # Remove old 
textures from context - for texture in self._mesh_textures - mesh_textures: - texture.delete() - - self._mesh_textures = mesh_textures.copy() - - shadow_textures = set() - for l in scene.lights: - # Create if needed - active = False - if (isinstance(l, DirectionalLight) and - flags & RenderFlags.SHADOWS_DIRECTIONAL): - active = True - elif (isinstance(l, PointLight) and - flags & RenderFlags.SHADOWS_POINT): - active = True - elif isinstance(l, SpotLight) and flags & RenderFlags.SHADOWS_SPOT: - active = True - - if active and l.shadow_texture is None: - l._generate_shadow_texture() - if l.shadow_texture is not None: - shadow_textures.add(l.shadow_texture) - - # Add new textures to context - for texture in shadow_textures - self._shadow_textures: - texture._add_to_context() - - # Remove old textures from context - for texture in self._shadow_textures - shadow_textures: - texture.delete() - - self._shadow_textures = shadow_textures.copy() - - ########################################################################### - # Texture Management - ########################################################################### - - def _bind_texture(self, texture, uniform_name, program): - """Bind a texture to an active texture unit and return - the texture unit index that was used. - """ - tex_id = self._get_next_active_texture() - glActiveTexture(GL_TEXTURE0 + tex_id) - texture._bind() - program.set_uniform(uniform_name, tex_id) - - def _get_next_active_texture(self): - val = self._texture_alloc_idx - self._texture_alloc_idx += 1 - return val - - def _reset_active_textures(self): - self._texture_alloc_idx = 0 - - ########################################################################### - # Camera Matrix Management - ########################################################################### - - def _get_camera_matrices(self, scene): - main_camera_node = scene.main_camera_node - if main_camera_node is None: - raise ValueError('Cannot render scene without a camera') - P = main_camera_node.camera.get_projection_matrix( - width=self.viewport_width, height=self.viewport_height - ) - pose = scene.get_pose(main_camera_node) - V = np.linalg.inv(pose) # V maps from world to camera - return V, P - - def _get_light_cam_matrices(self, scene, light_node, flags): - light = light_node.light - pose = scene.get_pose(light_node).copy() - s = scene.scale - camera = light._get_shadow_camera(s) - P = camera.get_projection_matrix() - if isinstance(light, DirectionalLight): - direction = -pose[:3,2] - c = scene.centroid - loc = c - direction * s - pose[:3,3] = loc - V = np.linalg.inv(pose) # V maps from world to camera - return V, P - - ########################################################################### - # Shader Program Management - ########################################################################### - - def _get_text_program(self): - program = self._program_cache.get_program( - vertex_shader='text.vert', - fragment_shader='text.frag' - ) - - if not program._in_context(): - program._add_to_context() - - return program - - def _compute_max_n_lights(self, flags): - max_n_lights = [MAX_N_LIGHTS, MAX_N_LIGHTS, MAX_N_LIGHTS] - n_tex_units = glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS) - - # Reserved texture units: 6 - # Normal Map - # Occlusion Map - # Emissive Map - # Base Color or Diffuse Map - # MR or SG Map - # Environment cubemap - - n_reserved_textures = 6 - n_available_textures = n_tex_units - n_reserved_textures - - # Distribute textures evenly among lights with shadows, with - # a preference for 
directional lights - n_shadow_types = 0 - if flags & RenderFlags.SHADOWS_DIRECTIONAL: - n_shadow_types += 1 - if flags & RenderFlags.SHADOWS_SPOT: - n_shadow_types += 1 - if flags & RenderFlags.SHADOWS_POINT: - n_shadow_types += 1 - - if n_shadow_types > 0: - tex_per_light = n_available_textures // n_shadow_types - - if flags & RenderFlags.SHADOWS_DIRECTIONAL: - max_n_lights[0] = ( - tex_per_light + - (n_available_textures - tex_per_light * n_shadow_types) - ) - if flags & RenderFlags.SHADOWS_SPOT: - max_n_lights[1] = tex_per_light - if flags & RenderFlags.SHADOWS_POINT: - max_n_lights[2] = tex_per_light - - return max_n_lights - - def _get_primitive_program(self, primitive, flags, program_flags): - vertex_shader = None - fragment_shader = None - geometry_shader = None - defines = {} - - if (bool(program_flags & ProgramFlags.USE_MATERIAL) and - not flags & RenderFlags.DEPTH_ONLY and - not flags & RenderFlags.FLAT and - not flags & RenderFlags.SEG): - vertex_shader = 'mesh.vert' - fragment_shader = 'mesh.frag' - elif bool(program_flags & (ProgramFlags.VERTEX_NORMALS | - ProgramFlags.FACE_NORMALS)): - vertex_shader = 'vertex_normals.vert' - if primitive.mode == GLTF.POINTS: - geometry_shader = 'vertex_normals_pc.geom' - else: - geometry_shader = 'vertex_normals.geom' - fragment_shader = 'vertex_normals.frag' - elif flags & RenderFlags.FLAT: - vertex_shader = 'flat.vert' - fragment_shader = 'flat.frag' - elif flags & RenderFlags.SEG: - vertex_shader = 'segmentation.vert' - fragment_shader = 'segmentation.frag' - else: - vertex_shader = 'mesh_depth.vert' - fragment_shader = 'mesh_depth.frag' - - # Set up vertex buffer DEFINES - bf = primitive.buf_flags - buf_idx = 1 - if bf & BufFlags.NORMAL: - defines['NORMAL_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.TANGENT: - defines['TANGENT_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.TEXCOORD_0: - defines['TEXCOORD_0_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.TEXCOORD_1: - defines['TEXCOORD_1_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.COLOR_0: - defines['COLOR_0_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.JOINTS_0: - defines['JOINTS_0_LOC'] = buf_idx - buf_idx += 1 - if bf & BufFlags.WEIGHTS_0: - defines['WEIGHTS_0_LOC'] = buf_idx - buf_idx += 1 - defines['INST_M_LOC'] = buf_idx - - # Set up shadow mapping defines - if flags & RenderFlags.SHADOWS_DIRECTIONAL: - defines['DIRECTIONAL_LIGHT_SHADOWS'] = 1 - if flags & RenderFlags.SHADOWS_SPOT: - defines['SPOT_LIGHT_SHADOWS'] = 1 - if flags & RenderFlags.SHADOWS_POINT: - defines['POINT_LIGHT_SHADOWS'] = 1 - max_n_lights = self._compute_max_n_lights(flags) - defines['MAX_DIRECTIONAL_LIGHTS'] = max_n_lights[0] - defines['MAX_SPOT_LIGHTS'] = max_n_lights[1] - defines['MAX_POINT_LIGHTS'] = max_n_lights[2] - - # Set up vertex normal defines - if program_flags & ProgramFlags.VERTEX_NORMALS: - defines['VERTEX_NORMALS'] = 1 - if program_flags & ProgramFlags.FACE_NORMALS: - defines['FACE_NORMALS'] = 1 - - # Set up material texture defines - if bool(program_flags & ProgramFlags.USE_MATERIAL): - tf = primitive.material.tex_flags - if tf & TexFlags.NORMAL: - defines['HAS_NORMAL_TEX'] = 1 - if tf & TexFlags.OCCLUSION: - defines['HAS_OCCLUSION_TEX'] = 1 - if tf & TexFlags.EMISSIVE: - defines['HAS_EMISSIVE_TEX'] = 1 - if tf & TexFlags.BASE_COLOR: - defines['HAS_BASE_COLOR_TEX'] = 1 - if tf & TexFlags.METALLIC_ROUGHNESS: - defines['HAS_METALLIC_ROUGHNESS_TEX'] = 1 - if tf & TexFlags.DIFFUSE: - defines['HAS_DIFFUSE_TEX'] = 1 - if tf & TexFlags.SPECULAR_GLOSSINESS: - 
defines['HAS_SPECULAR_GLOSSINESS_TEX'] = 1 - if isinstance(primitive.material, MetallicRoughnessMaterial): - defines['USE_METALLIC_MATERIAL'] = 1 - elif isinstance(primitive.material, SpecularGlossinessMaterial): - defines['USE_GLOSSY_MATERIAL'] = 1 - - program = self._program_cache.get_program( - vertex_shader=vertex_shader, - fragment_shader=fragment_shader, - geometry_shader=geometry_shader, - defines=defines - ) - - if not program._in_context(): - program._add_to_context() - - return program - - ########################################################################### - # Viewport Management - ########################################################################### - - def _configure_forward_pass_viewport(self, flags): - - # If using offscreen render, bind main framebuffer - if flags & RenderFlags.OFFSCREEN: - self._configure_main_framebuffer() - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb_ms) - else: - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0) - - glViewport(0, 0, self.viewport_width, self.viewport_height) - glEnable(GL_DEPTH_TEST) - glDepthMask(GL_TRUE) - glDepthFunc(GL_LESS) - glDepthRange(0.0, 1.0) - - def _configure_shadow_mapping_viewport(self, light, flags): - self._configure_shadow_framebuffer() - glBindFramebuffer(GL_FRAMEBUFFER, self._shadow_fb) - light.shadow_texture._bind() - light.shadow_texture._bind_as_depth_attachment() - glActiveTexture(GL_TEXTURE0) - light.shadow_texture._bind() - glDrawBuffer(GL_NONE) - glReadBuffer(GL_NONE) - - glClear(GL_DEPTH_BUFFER_BIT) - glViewport(0, 0, SHADOW_TEX_SZ, SHADOW_TEX_SZ) - glEnable(GL_DEPTH_TEST) - glDepthMask(GL_TRUE) - glDepthFunc(GL_LESS) - glDepthRange(0.0, 1.0) - glDisable(GL_CULL_FACE) - glDisable(GL_BLEND) - - ########################################################################### - # Framebuffer Management - ########################################################################### - - def _configure_shadow_framebuffer(self): - if self._shadow_fb is None: - self._shadow_fb = glGenFramebuffers(1) - - def _delete_shadow_framebuffer(self): - if self._shadow_fb is not None: - glDeleteFramebuffers(1, [self._shadow_fb]) - - def _configure_main_framebuffer(self): - # If mismatch with prior framebuffer, delete it - if (self._main_fb is not None and - self.viewport_width != self._main_fb_dims[0] or - self.viewport_height != self._main_fb_dims[1]): - self._delete_main_framebuffer() - - # If framebuffer doesn't exist, create it - if self._main_fb is None: - # Generate standard buffer - self._main_cb, self._main_db = glGenRenderbuffers(2) - - glBindRenderbuffer(GL_RENDERBUFFER, self._main_cb) - glRenderbufferStorage( - GL_RENDERBUFFER, GL_RGBA, - self.viewport_width, self.viewport_height - ) - - glBindRenderbuffer(GL_RENDERBUFFER, self._main_db) - glRenderbufferStorage( - GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, - self.viewport_width, self.viewport_height - ) - - self._main_fb = glGenFramebuffers(1) - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb) - glFramebufferRenderbuffer( - GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, - GL_RENDERBUFFER, self._main_cb - ) - glFramebufferRenderbuffer( - GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, - GL_RENDERBUFFER, self._main_db - ) - - # Generate multisample buffer - self._main_cb_ms, self._main_db_ms = glGenRenderbuffers(2) - glBindRenderbuffer(GL_RENDERBUFFER, self._main_cb_ms) - # glRenderbufferStorageMultisample( - # GL_RENDERBUFFER, 4, GL_RGBA, - # self.viewport_width, self.viewport_height - # ) - # glBindRenderbuffer(GL_RENDERBUFFER, self._main_db_ms) - # 
glRenderbufferStorageMultisample( - # GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, - # self.viewport_width, self.viewport_height - # ) - # Add this line - num_samples = min(glGetIntegerv(GL_MAX_SAMPLES), 4) # No more than GL_MAX_SAMPLES - - # Simply replace 4 with num_samples; everything else stays the same - glRenderbufferStorageMultisample(GL_RENDERBUFFER, num_samples, GL_RGBA, self.viewport_width, self.viewport_height) - - glBindRenderbuffer(GL_RENDERBUFFER, self._main_db_ms) # This line is unchanged - - # This line also replaces 4 with num_samples - glRenderbufferStorageMultisample(GL_RENDERBUFFER, num_samples, GL_DEPTH_COMPONENT24, self.viewport_width, self.viewport_height) - - self._main_fb_ms = glGenFramebuffers(1) - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb_ms) - glFramebufferRenderbuffer( - GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, - GL_RENDERBUFFER, self._main_cb_ms - ) - glFramebufferRenderbuffer( - GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, - GL_RENDERBUFFER, self._main_db_ms - ) - - self._main_fb_dims = (self.viewport_width, self.viewport_height) - - def _delete_main_framebuffer(self): - if self._main_fb is not None: - glDeleteFramebuffers(2, [self._main_fb, self._main_fb_ms]) - if self._main_cb is not None: - glDeleteRenderbuffers(2, [self._main_cb, self._main_cb_ms]) - if self._main_db is not None: - glDeleteRenderbuffers(2, [self._main_db, self._main_db_ms]) - - self._main_fb = None - self._main_cb = None - self._main_db = None - self._main_fb_ms = None - self._main_cb_ms = None - self._main_db_ms = None - self._main_fb_dims = (None, None) - - def _read_main_framebuffer(self, scene, flags): - width, height = self._main_fb_dims[0], self._main_fb_dims[1] - - # Bind framebuffer and blit buffers - glBindFramebuffer(GL_READ_FRAMEBUFFER, self._main_fb_ms) - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb) - glBlitFramebuffer( - 0, 0, width, height, 0, 0, width, height, - GL_COLOR_BUFFER_BIT, GL_LINEAR - ) - glBlitFramebuffer( - 0, 0, width, height, 0, 0, width, height, - GL_DEPTH_BUFFER_BIT, GL_NEAREST - ) - glBindFramebuffer(GL_READ_FRAMEBUFFER, self._main_fb) - - # Read depth - depth_buf = glReadPixels( - 0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT - ) - depth_im = np.frombuffer(depth_buf, dtype=np.float32) - depth_im = depth_im.reshape((height, width)) - depth_im = np.flip(depth_im, axis=0) - inf_inds = (depth_im == 1.0) - depth_im = 2.0 * depth_im - 1.0 - z_near = scene.main_camera_node.camera.znear - z_far = scene.main_camera_node.camera.zfar - noninf = np.logical_not(inf_inds) - if z_far is None: - depth_im[noninf] = 2 * z_near / (1.0 - depth_im[noninf]) - else: - depth_im[noninf] = ((2.0 * z_near * z_far) / - (z_far + z_near - depth_im[noninf] * - (z_far - z_near))) - depth_im[inf_inds] = 0.0 - - # Resize for macos if needed - if sys.platform == 'darwin': - depth_im = self._resize_image(depth_im) - - if flags & RenderFlags.DEPTH_ONLY: - return depth_im - - # Read color - if flags & RenderFlags.RGBA: - color_buf = glReadPixels( - 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE - ) - color_im = np.frombuffer(color_buf, dtype=np.uint8) - color_im = color_im.reshape((height, width, 4)) - else: - color_buf = glReadPixels( - 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE - ) - color_im = np.frombuffer(color_buf, dtype=np.uint8) - color_im = color_im.reshape((height, width, 3)) - color_im = np.flip(color_im, axis=0) - - # Resize for macos if needed - if sys.platform == 'darwin': - color_im = self._resize_image(color_im, True) - - return color_im, depth_im - - def _resize_image(self, value, antialias=False): - """If needed, rescale the render 
for MacOS.""" - img = PIL.Image.fromarray(value) - resample = PIL.Image.NEAREST - if antialias: - resample = PIL.Image.BILINEAR - size = (self.viewport_width // self.dpscale, - self.viewport_height // self.dpscale) - img = img.resize(size, resample=resample) - return np.array(img) - - ########################################################################### - # Shadowmap Debugging - ########################################################################### - - def _forward_pass_no_reset(self, scene, flags): - # Set up camera matrices - V, P = self._get_camera_matrices(scene) - - # Now, render each object in sorted order - for node in self._sorted_mesh_nodes(scene): - mesh = node.mesh - - # Skip the mesh if it's not visible - if not mesh.is_visible: - continue - - for primitive in mesh.primitives: - - # First, get and bind the appropriate program - program = self._get_primitive_program( - primitive, flags, ProgramFlags.USE_MATERIAL - ) - program._bind() - - # Set the camera uniforms - program.set_uniform('V', V) - program.set_uniform('P', P) - program.set_uniform( - 'cam_pos', scene.get_pose(scene.main_camera_node)[:3,3] - ) - - # Next, bind the lighting - if not flags & RenderFlags.DEPTH_ONLY and not flags & RenderFlags.FLAT: - self._bind_lighting(scene, program, node, flags) - - # Finally, bind and draw the primitive - self._bind_and_draw_primitive( - primitive=primitive, - pose=scene.get_pose(node), - program=program, - flags=flags - ) - self._reset_active_textures() - - # Unbind the shader and flush the output - if program is not None: - program._unbind() - glFlush() - - def _render_light_shadowmaps(self, scene, light_nodes, flags, tile=False): - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0) - glClearColor(*scene.bg_color) - glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) - glEnable(GL_DEPTH_TEST) - glDepthMask(GL_TRUE) - glDepthFunc(GL_LESS) - glDepthRange(0.0, 1.0) - - w = self.viewport_width - h = self.viewport_height - - num_nodes = len(light_nodes) - viewport_dims = { - (0, 2): [0, h // 2, w // 2, h], - (1, 2): [w // 2, h // 2, w, h], - (0, 3): [0, h // 2, w // 2, h], - (1, 3): [w // 2, h // 2, w, h], - (2, 3): [0, 0, w // 2, h // 2], - (0, 4): [0, h // 2, w // 2, h], - (1, 4): [w // 2, h // 2, w, h], - (2, 4): [0, 0, w // 2, h // 2], - (3, 4): [w // 2, 0, w, h // 2] - } - - if tile: - for i, ln in enumerate(light_nodes): - light = ln.light - - if light.shadow_texture is None: - raise ValueError('Light does not have a shadow texture') - - glViewport(*viewport_dims[(i, num_nodes + 1)]) - - program = self._get_debug_quad_program() - program._bind() - self._bind_texture(light.shadow_texture, 'depthMap', program) - self._render_debug_quad() - self._reset_active_textures() - glFlush() - i += 1 - glViewport(*viewport_dims[(i, num_nodes + 1)]) - self._forward_pass_no_reset(scene, flags) - else: - for i, ln in enumerate(light_nodes): - light = ln.light - - if light.shadow_texture is None: - raise ValueError('Light does not have a shadow texture') - - glViewport(0, 0, self.viewport_width, self.viewport_height) - - program = self._get_debug_quad_program() - program._bind() - self._bind_texture(light.shadow_texture, 'depthMap', program) - self._render_debug_quad() - self._reset_active_textures() - glFlush() - return - - def _get_debug_quad_program(self): - program = self._program_cache.get_program( - vertex_shader='debug_quad.vert', - fragment_shader='debug_quad.frag' - ) - if not program._in_context(): - program._add_to_context() - return program - - def _render_debug_quad(self): - x = 
glGenVertexArrays(1) - glBindVertexArray(x) - glDrawArrays(GL_TRIANGLES, 0, 6) - glBindVertexArray(0) - glDeleteVertexArrays(1, [x]) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/run.sh b/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/run.sh deleted file mode 100644 index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/exp/upernet_global_small/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/hook.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/hook.py deleted file mode 100644 index b8855c107727ecf85b917c890fc8b7f6359238a4..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/hook.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from annotator.uniformer.mmcv.utils import Registry, is_method_overridden - -HOOKS = Registry('hook') - - -class Hook: - stages = ('before_run', 'before_train_epoch', 'before_train_iter', - 'after_train_iter', 'after_train_epoch', 'before_val_epoch', - 'before_val_iter', 'after_val_iter', 'after_val_epoch', - 'after_run') - - def before_run(self, runner): - pass - - def after_run(self, runner): - pass - - def before_epoch(self, runner): - pass - - def after_epoch(self, runner): - pass - - def before_iter(self, runner): - pass - - def after_iter(self, runner): - pass - - def before_train_epoch(self, runner): - self.before_epoch(runner) - - def before_val_epoch(self, runner): - self.before_epoch(runner) - - def after_train_epoch(self, runner): - self.after_epoch(runner) - - def after_val_epoch(self, runner): - self.after_epoch(runner) - - def before_train_iter(self, runner): - self.before_iter(runner) - - def before_val_iter(self, runner): - self.before_iter(runner) - - def after_train_iter(self, runner): - self.after_iter(runner) - - def after_val_iter(self, runner): - self.after_iter(runner) - - def every_n_epochs(self, runner, n): - return (runner.epoch + 1) % n == 0 if n > 0 else False - - def every_n_inner_iters(self, runner, n): - return (runner.inner_iter + 1) % n == 0 if n > 0 else False - - def every_n_iters(self, runner, n): - return (runner.iter + 1) % n == 0 if n > 0 else False - - def end_of_epoch(self, runner): - return runner.inner_iter + 1 == len(runner.data_loader) - - def is_last_epoch(self, runner): - return runner.epoch + 1 == runner._max_epochs - - def is_last_iter(self, runner): - return runner.iter + 1 == runner._max_iters - - def get_triggered_stages(self): - trigger_stages = set() - for stage in Hook.stages: - if is_method_overridden(stage, Hook, self): - trigger_stages.add(stage) - - # some methods will be triggered in multi stages - # use this dict to map method to stages. 
- method_stages_map = { - 'before_epoch': ['before_train_epoch', 'before_val_epoch'], - 'after_epoch': ['after_train_epoch', 'after_val_epoch'], - 'before_iter': ['before_train_iter', 'before_val_iter'], - 'after_iter': ['after_train_iter', 'after_val_iter'], - } - - for method, map_stages in method_stages_map.items(): - if is_method_overridden(method, Hook, self): - trigger_stages.update(map_stages) - - return [stage for stage in Hook.stages if stage in trigger_stages] diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/version_utils.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/version_utils.py deleted file mode 100644 index 963c45a2e8a86a88413ab6c18c22481fb9831985..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/version_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import subprocess -import warnings - -from packaging.version import parse - - -def digit_version(version_str: str, length: int = 4): - """Convert a version string into a tuple of integers. - - This method is usually used for comparing two versions. For pre-release - versions: alpha < beta < rc. - - Args: - version_str (str): The version string. - length (int): The maximum number of version levels. Default: 4. - - Returns: - tuple[int]: The version info in digits (integers). - """ - assert 'parrots' not in version_str - version = parse(version_str) - assert version.release, f'failed to parse version {version_str}' - release = list(version.release) - release = release[:length] - if len(release) < length: - release = release + [0] * (length - len(release)) - if version.is_prerelease: - mapping = {'a': -3, 'b': -2, 'rc': -1} - val = -4 - # version.pre can be None - if version.pre: - if version.pre[0] not in mapping: - warnings.warn(f'unknown prerelease version {version.pre[0]}, ' - 'version checking may go wrong') - else: - val = mapping[version.pre[0]] - release.extend([val, version.pre[-1]]) - else: - release.extend([val, 0]) - - elif version.is_postrelease: - release.extend([1, version.post]) - else: - release.extend([0, 0]) - return tuple(release) - - -def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen( - cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - -def get_git_hash(fallback='unknown', digits=None): - """Get the git hash of the current repo. - - Args: - fallback (str, optional): The fallback string when git hash is - unavailable. Defaults to 'unknown'. - digits (int, optional): kept digits of the hash. Defaults to None, - meaning all digits are kept. - - Returns: - str: Git commit hash. 
- """ - - if digits is not None and not isinstance(digits, int): - raise TypeError('digits must be None or an integer') - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - if digits is not None: - sha = sha[:digits] - except OSError: - sha = fallback - - return sha diff --git a/spaces/wanxing28/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/wanxing28/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index ce71ce4c8398bfce2af249eed73a29ed364b2cff..0000000000000000000000000000000000000000 --- a/spaces/wanxing28/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,84 +0,0 @@ -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto init - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto init - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:init -@rem Get command-line arguments, handling Windows variants - -if not "%OS%" == "Windows_NT" goto win9xME_args - -:win9xME_args -@rem Slurp the command line arguments. -set CMD_LINE_ARGS= -set _SKIP=2 - -:win9xME_args_slurp -if "x%~1" == "x" goto execute - -set CMD_LINE_ARGS=%* - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.7-all.jar - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -jar "%CLASSPATH%" %CMD_LINE_ARGS% - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! 
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/management/test_skill_manager.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/management/test_skill_manager.py deleted file mode 100644 index b0be858a1fd908548d301a51a1eeeb26c4551335..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/management/test_skill_manager.py +++ /dev/null @@ -1,36 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/6 12:38 -@Author : alexanderwu -@File : test_skill_manager.py -""" -from metagpt.actions import WritePRD, WriteTest -from metagpt.logs import logger -from metagpt.management.skill_manager import SkillManager - - -def test_skill_manager(): - manager = SkillManager() - logger.info(manager._store) - - write_prd = WritePRD("WritePRD") - write_prd.desc = "基于老板或其他人的需求进行PRD的撰写,包括用户故事、需求分解等" - write_test = WriteTest("WriteTest") - write_test.desc = "进行测试用例的撰写" - manager.add_skill(write_prd) - manager.add_skill(write_test) - - skill = manager.get_skill("WriteTest") - logger.info(skill) - - rsp = manager.retrieve_skill("写PRD") - logger.info(rsp) - assert rsp[0] == "WritePRD" - - rsp = manager.retrieve_skill("写测试用例") - logger.info(rsp) - assert rsp[0] == 'WriteTest' - - rsp = manager.retrieve_skill_scored("写PRD") - logger.info(rsp) diff --git a/spaces/wffcyrus/SD-WebUI/README.md b/spaces/wffcyrus/SD-WebUI/README.md deleted file mode 100644 index 24b4c04ce710c22c0ec5c37a1888ab643536a8c1..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/SD-WebUI/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: SD WebUI -emoji: 📈 -colorFrom: pink -colorTo: gray -sdk: docker -pinned: false -duplicated_from: randomtable/SD-WebUI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wwwwwwww2/bingo/tailwind.config.js b/spaces/wwwwwwww2/bingo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - 
slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/wykonos/movie-recommender/README.md b/spaces/wykonos/movie-recommender/README.md deleted file mode 100644 index 22fe62d8b28dd649ac9e7bb2f7efa861c5310ea7..0000000000000000000000000000000000000000 --- a/spaces/wykonos/movie-recommender/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Movie Recommender -emoji: 🔥 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/OSNet_AIN/main.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/OSNet_AIN/main.py deleted file mode 100644 index f59177073dfafe1b4b712691c780af32b30fcd2c..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/OSNet_AIN/main.py +++ /dev/null @@ -1,145 +0,0 @@ -import os -import sys -import time -import os.path as osp -import argparse -import torch -import torch.nn as nn - -import torchreid -from torchreid.utils import ( - Logger, check_isfile, set_random_seed, collect_env_info, - resume_from_checkpoint, compute_model_complexity -) - -import osnet_search as osnet_models -from softmax_nas import ImageSoftmaxNASEngine -from default_config import ( - imagedata_kwargs, optimizer_kwargs, engine_run_kwargs, get_default_config, - lr_scheduler_kwargs -) - - -def reset_config(cfg, args): - if args.root: - cfg.data.root = args.root - if args.sources: - cfg.data.sources = args.sources - if args.targets: - cfg.data.targets = args.targets - if args.transforms: - cfg.data.transforms = args.transforms - - -def main(): - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument( - '--config-file', type=str, default='', help='path to config file' - ) - parser.add_argument( - '-s', - '--sources', - type=str, - nargs='+', - help='source datasets (delimited by space)' - ) - parser.add_argument( - '-t', - '--targets', - type=str, - nargs='+', - help='target datasets (delimited by space)' - ) - parser.add_argument( - '--transforms', type=str, nargs='+', help='data augmentation' - ) - parser.add_argument( - '--root', type=str, default='', help='path to data root' - ) - parser.add_argument( - '--gpu-devices', - type=str, - default='', - ) - parser.add_argument( - 'opts', - default=None, - nargs=argparse.REMAINDER, - help='Modify config options using the command-line' - ) - args = parser.parse_args() - - cfg = get_default_config() - cfg.use_gpu = torch.cuda.is_available() - if args.config_file: - cfg.merge_from_file(args.config_file) - reset_config(cfg, args) - cfg.merge_from_list(args.opts) - set_random_seed(cfg.train.seed) - - if cfg.use_gpu and args.gpu_devices: - # if gpu_devices is not specified, all available gpus will be used - os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_devices - log_name = 'test.log' if cfg.test.evaluate else 'train.log' - log_name += time.strftime('-%Y-%m-%d-%H-%M-%S') - sys.stdout = Logger(osp.join(cfg.data.save_dir, log_name)) - - print('Show configuration\n{}\n'.format(cfg)) - print('Collecting env info ...') - 
print('** System info **\n{}\n'.format(collect_env_info())) - - if cfg.use_gpu: - torch.backends.cudnn.benchmark = True - - datamanager = torchreid.data.ImageDataManager(**imagedata_kwargs(cfg)) - - print('Building model: {}'.format(cfg.model.name)) - model = osnet_models.build_model( - cfg.model.name, num_classes=datamanager.num_train_pids - ) - num_params, flops = compute_model_complexity( - model, (1, 3, cfg.data.height, cfg.data.width) - ) - print('Model complexity: params={:,} flops={:,}'.format(num_params, flops)) - - if cfg.use_gpu: - model = nn.DataParallel(model).cuda() - - optimizer = torchreid.optim.build_optimizer(model, **optimizer_kwargs(cfg)) - scheduler = torchreid.optim.build_lr_scheduler( - optimizer, **lr_scheduler_kwargs(cfg) - ) - - if cfg.model.resume and check_isfile(cfg.model.resume): - cfg.train.start_epoch = resume_from_checkpoint( - cfg.model.resume, model, optimizer=optimizer - ) - - print('Building NAS engine') - engine = ImageSoftmaxNASEngine( - datamanager, - model, - optimizer, - scheduler=scheduler, - use_gpu=cfg.use_gpu, - label_smooth=cfg.loss.softmax.label_smooth, - mc_iter=cfg.nas.mc_iter, - init_lmda=cfg.nas.init_lmda, - min_lmda=cfg.nas.min_lmda, - lmda_decay_step=cfg.nas.lmda_decay_step, - lmda_decay_rate=cfg.nas.lmda_decay_rate, - fixed_lmda=cfg.nas.fixed_lmda - ) - engine.run(**engine_run_kwargs(cfg)) - - print('*** Display the found architecture ***') - if cfg.use_gpu: - model.module.build_child_graph() - else: - model.build_child_graph() - - -if __name__ == '__main__': - main() diff --git a/spaces/xswu/HPSv2/src/open_clip/tokenizer.py b/spaces/xswu/HPSv2/src/open_clip/tokenizer.py deleted file mode 100644 index 23fcfcbcb4ca051ba5bba7520918693001999282..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/open_clip/tokenizer.py +++ /dev/null @@ -1,214 +0,0 @@ -""" CLIP tokenizer - -Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" -import gzip -import html -import os -from functools import lru_cache -from typing import Union, List - -import ftfy -import regex as re -import torch - -# https://stackoverflow.com/q/62691279 -import os -os.environ["TOKENIZERS_PARALLELISM"] = "false" - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a significant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe(), special_tokens=None): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split('\n') - merges = merges[1:49152-256-2+1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v+'</w>' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - if not special_tokens: - special_tokens = ['<start_of_text>', '<end_of_text>'] - else: - special_tokens = ['<start_of_text>', '<end_of_text>'] + special_tokens - vocab.extend(special_tokens) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {t:t for t in special_tokens} - special = "|".join(special_tokens) - self.pat = re.compile(special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE) - - self.vocab_size = len(self.encoder) - self.all_special_ids = [self.encoder[t] for t in special_tokens] - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + ( token[-1] + '</w>',) - pairs = get_pairs(word) - - if not pairs: - return token+'</w>' - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ') - return text - - -_tokenizer = SimpleTokenizer() - -def decode(output_ids: torch.Tensor): - output_ids = output_ids.cpu().numpy() - return _tokenizer.decode(output_ids) - -def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or a list of input strings to tokenize - context_length : int - The context length to use; all CLIP models use 77 as the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length] - """ - if isinstance(texts, 
str): - texts = [texts] - - sot_token = _tokenizer.encoder["<start_of_text>"] - eot_token = _tokenizer.encoder["<end_of_text>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - tokens = tokens[:context_length] # Truncate - tokens[-1] = eot_token - result[i, :len(tokens)] = torch.tensor(tokens) - - return result - - -class HFTokenizer: - """HuggingFace tokenizer wrapper""" - - def __init__(self, tokenizer_name: str): - from transformers import AutoTokenizer - self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) - - def save_pretrained(self, dest): - self.tokenizer.save_pretrained(dest) - - def __call__(self, texts: Union[str, List[str]], context_length: int = 77) -> torch.Tensor: - # same cleaning as for default tokenizer, except lowercasing - # adding lower (for case-sensitive tokenizers) will make it more robust but less sensitive to nuance - if isinstance(texts, str): - texts = [texts] - texts = [whitespace_clean(basic_clean(text)) for text in texts] - input_ids = self.tokenizer( - texts, - return_tensors='pt', - max_length=context_length, - padding='max_length', - truncation=True, - ).input_ids - return input_ids diff --git a/spaces/yaoshining/text-generation-webui/modules/llama_attn_hijack.py b/spaces/yaoshining/text-generation-webui/modules/llama_attn_hijack.py deleted file mode 100644 index 925cdaa352326fdc23a3585699883d27b8de5c73..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/modules/llama_attn_hijack.py +++ /dev/null @@ -1,171 +0,0 @@ -import math -import sys -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import transformers.models.llama.modeling_llama - -import modules.shared as shared -from modules.logging_colors import logger - -if shared.args.xformers: - try: - import xformers.ops - except Exception: - logger.error("xformers not found! 
Please install it before trying to use it.", file=sys.stderr) - - -def hijack_llama_attention(): - if shared.args.xformers: - transformers.models.llama.modeling_llama.LlamaAttention.forward = xformers_forward - logger.info("Replaced attention with xformers_attention") - elif shared.args.sdp_attention: - transformers.models.llama.modeling_llama.LlamaAttention.forward = sdp_attention_forward - logger.info("Replaced attention with sdp_attention") - - -def xformers_forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = transformers.models.llama.modeling_llama.apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - # We only apply xformers optimizations if we don't need to output the whole attention matrix - if not output_attentions: - query_states = query_states.transpose(1, 2) - key_states = key_states.transpose(1, 2) - value_states = value_states.transpose(1, 2) - - # This is a nasty hack. We know attention_mask in transformers is either LowerTriangular or all Zeros. - # We therefore check if one element in the upper triangular portion is zero. If it is, then the mask is all zeros. 
- if attention_mask is None or attention_mask[0, 0, 0, 1] == 0: - # input and output should be of form (bsz, q_len, num_heads, head_dim) - attn_output = xformers.ops.memory_efficient_attention(query_states, key_states, value_states, attn_bias=None) - else: - # input and output should be of form (bsz, q_len, num_heads, head_dim) - attn_output = xformers.ops.memory_efficient_attention(query_states, key_states, value_states, attn_bias=xformers.ops.LowerTriangularMask()) - attn_weights = None - else: - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - attn_output = self.o_proj(attn_output) - return attn_output, attn_weights, past_key_value - - -def sdp_attention_forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = transformers.models.llama.modeling_llama.apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - # We only apply sdp attention if we don't need to output the whole attention matrix - if not output_attentions: - attn_output = torch.nn.functional.scaled_dot_product_attention(query_states, key_states, value_states, attn_mask=attention_mask, is_causal=False) - attn_weights = None - else: - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() 
!= (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - return attn_output, attn_weights, past_key_value diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/TrackList/TrackDialog.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/TrackList/TrackDialog.tsx deleted file mode 100644 index 96da01fba3a5a0c9fdbd9787418268ad4f24b617..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/TrackList/TrackDialog.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import { range } from "lodash" -import { FC, useEffect, useState } from "react" -import { Button, PrimaryButton } from "../../../components/Button" -import { - Dialog, - DialogActions, - DialogContent, - DialogTitle, -} from "../../../components/Dialog" -import { Label } from "../../../components/Label" -import { Localized } from "../../../components/Localized" -import { Select } from "../../../components/Select" -import { TextField } from "../../../components/TextField" -import { useStores } from "../../hooks/useStores" -import { TrackName } from "./TrackName" - -export interface TrackDialogProps { - trackId: number - open: boolean - onClose: () => void -} - -export const TrackDialog: FC = ({ - trackId, - open, - onClose, -}) => { - const { song } = useStores() - const track = song.tracks[trackId] - - const [name, setName] = useState(track.name) - const [channel, setChannel] = useState(track.channel) - - useEffect(() => { - setName(track.name) - setChannel(track.channel) - }, [trackId]) - - return ( - - - track:{" "} - - - - - setName(e.target.value as string)} - style={{ width: "100%", marginBottom: "1rem" }} - /> - - - - - - { - track.channel = channel - track.setName(name ?? "") - onClose() - }} - > - ok - - - - ) -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/deepspeed.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/deepspeed.py deleted file mode 100644 index 840d9cc2f55a16337c94e2106f48c421f35c7266..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/deepspeed.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Integration with Deepspeed - kept for backward compatiblity, if you plan to make any edit, make sure to modify the file -in `integrations/deepspeed` instead. - -Check: https://github.com/huggingface/transformers/pull/25599 -""" -import warnings - - -warnings.warn( - "transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations", - FutureWarning, -) - -# Backward compatibility imports, to make sure all those objects can be found in integrations/deepspeed -from .integrations.deepspeed import ( # noqa - HfDeepSpeedConfig, - HfTrainerDeepSpeedConfig, - deepspeed_config, - deepspeed_init, - deepspeed_load_checkpoint, - deepspeed_optim_sched, - is_deepspeed_available, - is_deepspeed_zero3_enabled, - set_hf_deepspeed_config, - unset_hf_deepspeed_config, -) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/fsmt/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/fsmt/__init__.py deleted file mode 100644 index 65aba047469da14c6b25523fba31432e823ec47d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/fsmt/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import TYPE_CHECKING - -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available - - -_import_structure = { - "configuration_fsmt": ["FSMT_PRETRAINED_CONFIG_ARCHIVE_MAP", "FSMTConfig"], - "tokenization_fsmt": ["FSMTTokenizer"], -} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_fsmt"] = ["FSMTForConditionalGeneration", "FSMTModel", "PretrainedFSMTModel"] - - -if TYPE_CHECKING: - from .configuration_fsmt import FSMT_PRETRAINED_CONFIG_ARCHIVE_MAP, FSMTConfig - from .tokenization_fsmt import FSMTTokenizer - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_fsmt import FSMTForConditionalGeneration, FSMTModel, PretrainedFSMTModel - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_gpt2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_gpt2.py deleted file mode 100644 index 714f0351b3e4df03ab9ae2c39bee9a694e4a278d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt2/modeling_gpt2.py +++ /dev/null @@ -1,1691 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch OpenAI GPT-2 model.""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.cuda.amp import autocast -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - QuestionAnsweringModelOutput, - SequenceClassifierOutputWithPast, - TokenClassifierOutput, -) -from ...modeling_utils import PreTrainedModel, SequenceSummary -from ...pytorch_utils import Conv1D, find_pruneable_heads_and_indices, prune_conv1d_layer -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from ...utils.model_parallel_utils import assert_device_map, get_device_map -from .configuration_gpt2 import GPT2Config - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "gpt2" -_CONFIG_FOR_DOC = "GPT2Config" - -GPT2_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "gpt2", - "gpt2-medium", - "gpt2-large", - "gpt2-xl", - "distilgpt2", - # See all GPT-2 models at https://huggingface.co/models?filter=gpt2 -] - - -def load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path): - """Load tf checkpoints in a pytorch model""" - try: - import re - - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." - ) - raise - tf_path = os.path.abspath(gpt2_checkpoint_path) - logger.info(f"Converting TensorFlow checkpoint from {tf_path}") - # Load weights from TF model - init_vars = tf.train.list_variables(tf_path) - names = [] - arrays = [] - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_path, name) - names.append(name) - arrays.append(array.squeeze()) - - for name, array in zip(names, arrays): - name = name[6:] # skip "model/" - name = name.split("/") - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+\d+", m_name): - scope_names = re.split(r"(\d+)", m_name) - else: - scope_names = [m_name] - if scope_names[0] == "w" or scope_names[0] == "g": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "b": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "wpe" or scope_names[0] == "wte": - pointer = getattr(pointer, scope_names[0]) - pointer = getattr(pointer, "weight") - else: - pointer = getattr(pointer, scope_names[0]) - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - try: - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - except ValueError as e: - e.args += (pointer.shape, array.shape) - raise - logger.info(f"Initialize PyTorch weight {name}") - pointer.data = torch.from_numpy(array) - return model - - -class GPT2Attention(nn.Module): - def __init__(self, config, is_cross_attention=False, layer_idx=None): - super().__init__() - - max_positions = config.max_position_embeddings - self.register_buffer( - "bias", - torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( - 1, 1, max_positions, max_positions - ), - persistent=False, - ) - self.register_buffer("masked_bias", 
torch.tensor(-1e4), persistent=False) - - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - self.split_size = self.embed_dim - if self.head_dim * self.num_heads != self.embed_dim: - raise ValueError( - f"`embed_dim` must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" - f" {self.num_heads})." - ) - - self.scale_attn_weights = config.scale_attn_weights - self.is_cross_attention = is_cross_attention - - # Layer-wise attention scaling, reordering, and upcasting - self.scale_attn_by_inverse_layer_idx = config.scale_attn_by_inverse_layer_idx - self.layer_idx = layer_idx - self.reorder_and_upcast_attn = config.reorder_and_upcast_attn - - if self.is_cross_attention: - self.c_attn = Conv1D(2 * self.embed_dim, self.embed_dim) - self.q_attn = Conv1D(self.embed_dim, self.embed_dim) - else: - self.c_attn = Conv1D(3 * self.embed_dim, self.embed_dim) - self.c_proj = Conv1D(self.embed_dim, self.embed_dim) - - self.attn_dropout = nn.Dropout(config.attn_pdrop) - self.resid_dropout = nn.Dropout(config.resid_pdrop) - - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices(heads, self.num_heads, self.head_dim, self.pruned_heads) - index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)]) - - # Prune conv1d layers - self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1) - self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0) - - # Update hyper params - self.split_size = (self.split_size // self.num_heads) * (self.num_heads - len(heads)) - self.num_heads = self.num_heads - len(heads) - self.pruned_heads = self.pruned_heads.union(heads) - - def _attn(self, query, key, value, attention_mask=None, head_mask=None): - attn_weights = torch.matmul(query, key.transpose(-1, -2)) - - if self.scale_attn_weights: - attn_weights = attn_weights / torch.full( - [], value.size(-1) ** 0.5, dtype=attn_weights.dtype, device=attn_weights.device - ) - - # Layer-wise attention scaling - if self.scale_attn_by_inverse_layer_idx: - attn_weights = attn_weights / float(self.layer_idx + 1) - - if not self.is_cross_attention: - # if only "normal" attention layer implements causal mask - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length] - mask_value = torch.finfo(attn_weights.dtype).min - # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`. 
- # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device` - mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device) - attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value) - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op otherwise - attn_weights = attn_weights.type(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, head_mask=None): - # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM) - bsz, num_heads, q_seq_len, dk = query.size() - _, _, k_seq_len, _ = key.size() - - # Preallocate attn_weights for `baddbmm` - attn_weights = torch.empty(bsz * num_heads, q_seq_len, k_seq_len, dtype=torch.float32, device=query.device) - - # Compute Scale Factor - scale_factor = 1.0 - if self.scale_attn_weights: - scale_factor /= float(value.size(-1)) ** 0.5 - - if self.scale_attn_by_inverse_layer_idx: - scale_factor /= float(self.layer_idx + 1) - - # Upcast (turn off autocast) and reorder (Scale K by 1 / root(dk)) - with autocast(enabled=False): - q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) - attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor) - attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) - - if not self.is_cross_attention: - # if only "normal" attention layer implements causal mask - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length] - mask_value = torch.finfo(attn_weights.dtype).min - # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`. 
- # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device` - mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device) - attn_weights = torch.where(causal_mask, attn_weights, mask_value) - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op if otherwise - if attn_weights.dtype != torch.float32: - raise RuntimeError("Error with upcasting, attn_weights does not have dtype torch.float32") - attn_weights = attn_weights.type(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def _split_heads(self, tensor, num_heads, attn_head_size): - """ - Splits hidden_size dim into attn_head_size and num_heads - """ - new_shape = tensor.size()[:-1] + (num_heads, attn_head_size) - tensor = tensor.view(new_shape) - return tensor.permute(0, 2, 1, 3) # (batch, head, seq_length, head_features) - - def _merge_heads(self, tensor, num_heads, attn_head_size): - """ - Merges attn_head_size dim and num_attn_heads dim into hidden_size - """ - tensor = tensor.permute(0, 2, 1, 3).contiguous() - new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,) - return tensor.view(new_shape) - - def forward( - self, - hidden_states: Optional[Tuple[torch.FloatTensor]], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]], ...]: - if encoder_hidden_states is not None: - if not hasattr(self, "q_attn"): - raise ValueError( - "If class is used as cross attention, the weights `q_attn` have to be defined. " - "Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`." 
- ) - - query = self.q_attn(hidden_states) - key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2) - attention_mask = encoder_attention_mask - else: - query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2) - - query = self._split_heads(query, self.num_heads, self.head_dim) - key = self._split_heads(key, self.num_heads, self.head_dim) - value = self._split_heads(value, self.num_heads, self.head_dim) - - if layer_past is not None: - past_key, past_value = layer_past - key = torch.cat((past_key, key), dim=-2) - value = torch.cat((past_value, value), dim=-2) - - if use_cache is True: - present = (key, value) - else: - present = None - - if self.reorder_and_upcast_attn: - attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask) - else: - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) - - attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) - attn_output = self.c_proj(attn_output) - attn_output = self.resid_dropout(attn_output) - - outputs = (attn_output, present) - if output_attentions: - outputs += (attn_weights,) - - return outputs # a, present, (attentions) - - -class GPT2MLP(nn.Module): - def __init__(self, intermediate_size, config): - super().__init__() - embed_dim = config.hidden_size - self.c_fc = Conv1D(intermediate_size, embed_dim) - self.c_proj = Conv1D(embed_dim, intermediate_size) - self.act = ACT2FN[config.activation_function] - self.dropout = nn.Dropout(config.resid_pdrop) - - def forward(self, hidden_states: Optional[Tuple[torch.FloatTensor]]) -> torch.FloatTensor: - hidden_states = self.c_fc(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.c_proj(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -class GPT2Block(nn.Module): - def __init__(self, config, layer_idx=None): - super().__init__() - hidden_size = config.hidden_size - inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size - - self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - self.attn = GPT2Attention(config, layer_idx=layer_idx) - self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - - if config.add_cross_attention: - self.crossattention = GPT2Attention(config, is_cross_attention=True, layer_idx=layer_idx) - self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - - self.mlp = GPT2MLP(inner_dim, config) - - def forward( - self, - hidden_states: Optional[Tuple[torch.FloatTensor]], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]: - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - attn_outputs = self.attn( - hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - head_mask=head_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - attn_output = attn_outputs[0] # output_attn: a, present, (attentions) - outputs = attn_outputs[1:] - # residual connection - hidden_states = attn_output + residual - - if encoder_hidden_states is not None: - # add one self-attention block 
for cross-attention - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with " - "cross-attention layers by setting `config.add_cross_attention=True`" - ) - residual = hidden_states - hidden_states = self.ln_cross_attn(hidden_states) - cross_attn_outputs = self.crossattention( - hidden_states, - attention_mask=attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - ) - attn_output = cross_attn_outputs[0] - # residual connection - hidden_states = residual + attn_output - outputs = outputs + cross_attn_outputs[2:] # add cross attentions if we output attention weights - - residual = hidden_states - hidden_states = self.ln_2(hidden_states) - feed_forward_hidden_states = self.mlp(hidden_states) - # residual connection - hidden_states = residual + feed_forward_hidden_states - - if use_cache: - outputs = (hidden_states,) + outputs - else: - outputs = (hidden_states,) + outputs[1:] - - return outputs # hidden_states, present, (attentions, cross_attentions) - - -class GPT2PreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = GPT2Config - load_tf_weights = load_tf_weights_in_gpt2 - base_model_prefix = "transformer" - is_parallelizable = True - supports_gradient_checkpointing = True - _no_split_modules = ["GPT2Block"] - _skip_keys_device_placement = "past_key_values" - - def __init__(self, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear, Conv1D)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: - # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale - # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. - # > -- GPT-2 :: https://openai.com/blog/better-language-models/ - # - # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py - for name, p in module.named_parameters(): - if name == "c_proj.weight": - # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block - p.data.normal_(mean=0.0, std=(self.config.initializer_range / math.sqrt(2 * self.config.n_layer))) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, GPT2Model): - module.gradient_checkpointing = value - - -@dataclass -class GPT2DoubleHeadsModelOutput(ModelOutput): - """ - Base class for outputs of models predicting if two sentences are consecutive or not. - - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Language modeling loss. 
- mc_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `mc_labels` is provided): - Multiple choice classification loss. - logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - mc_logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`): - Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). - past_key_values (`Tuple[Tuple[torch.Tensor]]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of length `config.n_layers`, containing tuples of tensors of shape `(batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - GPT2Attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. - """ - - loss: Optional[torch.FloatTensor] = None - mc_loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - mc_logits: torch.FloatTensor = None - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -GPT2_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`GPT2Config`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -GPT2_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`): - `input_ids_length` = `sequence_length` if `past_key_values` is `None` else - `past_key_values[0][0].shape[-2]` (`sequence_length` of input past key value states). Indices of input - sequence tokens in the vocabulary. - - If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as - `input_ids`. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. 
- - [What are input IDs?](../glossary#input-ids) - past_key_values (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`): - Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see - `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have - their past given to this model should not be passed as `input_ids` as they have already been computed. - attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for - `past_key_values`. In other words, the `attention_mask` always has to have the length: - `len(past_key_values) + len(input_ids)` - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - - If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see - `past_key_values`). - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" -PARALLELIZE_DOCSTRING = r""" - This is an experimental feature and is a subject to change at a moment's notice. - - Uses a device map to distribute attention modules of the model across several devices. If no device map is given, - it will evenly distribute blocks across all devices. - - Args: - device_map (`Dict[int, list]`, optional, defaults to None): - A dictionary that maps attention modules to devices. 
Note that the embedding module and LMHead are always - automatically mapped to the first device (for esoteric reasons). That means that the first device should - have fewer attention modules mapped to it than other devices. For reference, the gpt2 models have the - following number of attention modules: - - - gpt2: 12 - - gpt2-medium: 24 - - gpt2-large: 36 - - gpt2-xl: 48 - - Example: - - ```python - # Here is an example of a device map on a machine with 4 GPUs using gpt2-xl, which has a total of 48 attention modules: - model = GPT2LMHeadModel.from_pretrained("gpt2-xl") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6, 7, 8], - 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], - 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], - 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], - } - model.parallelize(device_map) - ``` -""" -DEPARALLELIZE_DOCSTRING = r""" - Moves the model to cpu from a model parallel state. - - Example: - - ```python - # On a 4 GPU machine with gpt2-large: - model = GPT2LMHeadModel.from_pretrained("gpt2-large") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6, 7], - 1: [8, 9, 10, 11, 12, 13, 14, 15], - 2: [16, 17, 18, 19, 20, 21, 22, 23], - 3: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], - } - model.parallelize(device_map) # Splits the model across several devices - model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache() - ``` -""" - - -@add_start_docstrings( - "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.", - GPT2_START_DOCSTRING, -) -class GPT2Model(GPT2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.embed_dim = config.hidden_size - - self.wte = nn.Embedding(config.vocab_size, self.embed_dim) - self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim) - - self.drop = nn.Dropout(config.embd_pdrop) - self.h = nn.ModuleList([GPT2Block(config, layer_idx=i) for i in range(config.num_hidden_layers)]) - self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - # Check validity of device_map - warnings.warn( - "`GPT2Model.parallelize` is deprecated and will be removed in v5 of Transformers, you should load your" - " model with `device_map='balanced'` in the call to `from_pretrained`. 
You can also provide your own" - " `device_map` but it needs to be a dictionary module_name to device, so for instance {'h.0': 0, 'h.1': 1," - " ...}", - FutureWarning, - ) - self.device_map = ( - get_device_map(len(self.h), range(torch.cuda.device_count())) if device_map is None else device_map - ) - assert_device_map(self.device_map, len(self.h)) - self.model_parallel = True - self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys())) - self.last_device = "cuda:" + str(max(self.device_map.keys())) - self.wte = self.wte.to(self.first_device) - self.wpe = self.wpe.to(self.first_device) - # Load onto devices - for k, v in self.device_map.items(): - for block in v: - cuda_device = "cuda:" + str(k) - self.h[block] = self.h[block].to(cuda_device) - # ln_f to last - self.ln_f = self.ln_f.to(self.last_device) - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - warnings.warn( - "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.", - FutureWarning, - ) - self.model_parallel = False - self.device_map = None - self.first_device = "cpu" - self.last_device = "cpu" - self.wte = self.wte.to("cpu") - self.wpe = self.wpe.to("cpu") - for index in range(len(self.h)): - self.h[index] = self.h[index].to("cpu") - self.ln_f = self.ln_f.to("cpu") - torch.cuda.empty_cache() - - def get_input_embeddings(self): - return self.wte - - def set_input_embeddings(self, new_embeddings): - self.wte = new_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} - """ - for layer, heads in heads_to_prune.items(): - self.h[layer].attn.prune_heads(heads) - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPastAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - batch_size = input_ids.shape[0] - elif inputs_embeds is not None: - input_shape 
= inputs_embeds.size()[:-1] - batch_size = inputs_embeds.shape[0] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if token_type_ids is not None: - token_type_ids = token_type_ids.view(-1, input_shape[-1]) - - if past_key_values is None: - past_length = 0 - past_key_values = tuple([None] * len(self.h)) - else: - past_length = past_key_values[0][0].size(-2) - if position_ids is None: - position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0) - - # GPT2Attention mask. - if attention_mask is not None: - if batch_size <= 0: - raise ValueError("batch_size has to be defined and > 0") - attention_mask = attention_mask.view(batch_size, -1) - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = attention_mask[:, None, None, :] - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and the dtype's smallest value for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - attention_mask = attention_mask.to(dtype=self.dtype) # fp16 compatibility - attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.add_cross_attention and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.wte(input_ids) - position_embeds = self.wpe(position_ids) - hidden_states = inputs_embeds + position_embeds - - if token_type_ids is not None: - token_type_embeds = self.wte(token_type_ids) - hidden_states = hidden_states + token_type_embeds - - hidden_states = self.drop(hidden_states) - - output_shape = (-1,) + input_shape[1:] + (hidden_states.size(-1),) - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - presents = () if use_cache else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - all_hidden_states = () if output_hidden_states else None - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - # Model parallel - if self.model_parallel: - torch.cuda.set_device(hidden_states.device) - # Ensure layer_past is on same device as hidden_states (might not be correct) - if layer_past is not None: - layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) - # Ensure that attention_mask is always on the same device as hidden_states - if attention_mask is not None: - attention_mask = attention_mask.to(hidden_states.device) - if isinstance(head_mask, torch.Tensor): - head_mask = head_mask.to(hidden_states.device) - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, use_cache, output_attentions) - - return custom_forward - - outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - None, - attention_mask, - head_mask[i], - encoder_hidden_states, - encoder_attention_mask, - ) - else: - outputs = block( - hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - head_mask=head_mask[i], - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - - hidden_states = outputs[0] - if use_cache is True: - presents = presents + (outputs[1],) - - if output_attentions: - all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],) - - # Model Parallel: If it's the last layer for that device, put things on the next device - if self.model_parallel: - for k, v in self.device_map.items(): - if i == v[-1] and "cuda:" + str(k) != self.last_device: - hidden_states = hidden_states.to("cuda:" + str(k + 1)) - - hidden_states = self.ln_f(hidden_states) - - hidden_states = hidden_states.view(output_shape) - # Add last hidden state - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [hidden_states, presents, all_hidden_states, all_self_attentions, all_cross_attentions] - if v is not None - ) - - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). 
- """, - GPT2_START_DOCSTRING, -) -class GPT2LMHeadModel(GPT2PreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.transformer = GPT2Model(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - warnings.warn( - "`GPT2LMHeadModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should load" - " your model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your own" - " `device_map` but it needs to be a dictionary module_name to device, so for instance {'transformer.h.0':" - " 0, 'transformer.h.1': 1, ...}", - FutureWarning, - ) - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - warnings.warn( - "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.", - FutureWarning, - ) - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - ) - return model_inputs - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutputWithCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: 
Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(lm_logits.device) - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=loss, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache( - past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor - ) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or - [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past_key_values - ) - - -@add_start_docstrings( - """ -The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for -RocStories/SWAG tasks. The two heads are two linear layers. 
The language modeling head has its weights tied to the -input embeddings, the classification head takes as input the input of a specified classification token index in the -input sequence). -""", - GPT2_START_DOCSTRING, -) -class GPT2DoubleHeadsModel(GPT2PreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - config.num_labels = 1 - self.transformer = GPT2Model(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - self.multiple_choice_head = SequenceSummary(config) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - warnings.warn( - "`GPT2DoubleHeadsModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should" - " load your model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your" - " own `device_map` but it needs to be a dictionary module_name to device, so for instance" - " {'transformer.h.0': 0, 'transformer.h.1': 1, ...}", - FutureWarning, - ) - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.multiple_choice_head = self.multiple_choice_head.to(self.transformer.first_device) - self.model_parallel = True - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - warnings.warn( - "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.", - FutureWarning, - ) - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.multiple_choice_head = self.multiple_choice_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - - return { - "input_ids": input_ids, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=GPT2DoubleHeadsModelOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: 
Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - mc_token_ids: Optional[torch.LongTensor] = None, - labels: Optional[torch.LongTensor] = None, - mc_labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **kwargs, - ) -> Union[Tuple, GPT2DoubleHeadsModelOutput]: - r""" - mc_token_ids (`torch.LongTensor` of shape `(batch_size, num_choices)`, *optional*, default to index of the last token of the input): - Index of the classification token in each input sequence. Selected in the range `[0, input_ids.size(-1) - - 1]`. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to - `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]` - mc_labels (`torch.LongTensor` of shape `(batch_size)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where *num_choices* is the size of the second dimension of the input tensors. (see *input_ids* above) - - Return: - - Example: - - ```python - >>> import torch - >>> from transformers import AutoTokenizer, GPT2DoubleHeadsModel - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = GPT2DoubleHeadsModel.from_pretrained("gpt2") - - >>> # Add a [CLS] to the vocabulary (we should train it also!) 
- >>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"}) - >>> # Update the model embeddings with the new vocabulary size - >>> embedding_layer = model.resize_token_embeddings(len(tokenizer)) - - >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] - >>> encoded_choices = [tokenizer.encode(s) for s in choices] - >>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] - - >>> input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2 - >>> mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1 - - >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) - >>> lm_logits = outputs.logits - >>> mc_logits = outputs.mc_logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1) - - mc_loss = None - if mc_labels is not None: - loss_fct = CrossEntropyLoss() - mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1)) - lm_loss = None - if labels is not None: - labels = labels.to(lm_logits.device) - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits, mc_logits) + transformer_outputs[1:] - if mc_loss is not None: - output = (mc_loss,) + output - return ((lm_loss,) + output) if lm_loss is not None else output - - return GPT2DoubleHeadsModelOutput( - loss=lm_loss, - mc_loss=mc_loss, - logits=lm_logits, - mc_logits=mc_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - @staticmethod - def _reorder_cache( - past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor - ) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or - [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past_key_values - ) - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a sequence classification head on top (linear layer). - - [`GPT2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-1) do. - - Since it does classification on the last token, it requires to know the position of the last token. 
If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). - """, - GPT2_START_DOCSTRING, -) -class GPT2ForSequenceClassification(GPT2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = GPT2Model(config) - self.score = nn.Linear(config.n_embd, self.num_labels, bias=False) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint="microsoft/DialogRPT-updown", - output_type=SequenceClassifierOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size, sequence_length = input_ids.shape[:2] - else: - batch_size, sequence_length = inputs_embeds.shape[:2] - - assert ( - self.config.pad_token_id is not None or batch_size == 1 - ), "Cannot handle batch sizes > 1 if no padding token is defined." - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to( - logits.device - ) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - - pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - GPT2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - GPT2_START_DOCSTRING, -) -class GPT2ForTokenClassification(GPT2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.transformer = GPT2Model(config) - if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None: - classifier_dropout = config.classifier_dropout - elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None: - classifier_dropout = config.hidden_dropout - else: - classifier_dropout = 0.1 - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - # fmt: off - @add_code_sample_docstrings( - checkpoint="brad1141/gpt2-finetuned-comp2", - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_loss=0.25, - expected_output=["Lead", "Lead", "Lead", "Position", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead"], - ) - # fmt: on - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels 
for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - hidden_states = self.dropout(hidden_states) - logits = self.classifier(hidden_states) - - loss = None - if labels is not None: - labels = labels.to(logits.device) - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + transformer_outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - The GPT-2 Model transformer with a span classification head on top for extractive question-answering tasks like - SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - GPT2_START_DOCSTRING, -) -class GPT2ForQuestionAnswering(GPT2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = GPT2Model(config) - self.qa_outputs = nn.Linear(config.hidden_size, 2) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - real_checkpoint=_CHECKPOINT_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. 
- Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1).to(start_logits.device) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1).to(end_logits.device) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/modeling_openai.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/modeling_openai.py deleted file mode 100644 index 2d56272721e2129de6072da651605bed3df508a8..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/modeling_openai.py +++ /dev/null @@ -1,860 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch OpenAI GPT model.""" - - -import json -import math -import os -from dataclasses import dataclass -from typing import Any, Dict, Optional, Tuple, Union - -import torch -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import gelu_new, silu -from ...modeling_outputs import BaseModelOutput, CausalLMOutput, SequenceClassifierOutput -from ...modeling_utils import PreTrainedModel, SequenceSummary -from ...pytorch_utils import Conv1D, find_pruneable_heads_and_indices, prune_conv1d_layer -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_openai import OpenAIGPTConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "openai-gpt" -_CONFIG_FOR_DOC = "OpenAIGPTConfig" - -OPENAI_GPT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "openai-gpt", - # See all OpenAI GPT models at https://huggingface.co/models?filter=openai-gpt -] - - -def load_tf_weights_in_openai_gpt(model, config, openai_checkpoint_folder_path): - """Load tf pre-trained weights in a pytorch model (from NumPy arrays here)""" - import re - - import numpy as np - - if ".ckpt" in openai_checkpoint_folder_path: - openai_checkpoint_folder_path = os.path.dirname(openai_checkpoint_folder_path) - - logger.info(f"Loading weights from {openai_checkpoint_folder_path}") - - with open(openai_checkpoint_folder_path + "/parameters_names.json", "r", encoding="utf-8") as names_handle: - names = json.load(names_handle) - with open(openai_checkpoint_folder_path + "/params_shapes.json", "r", encoding="utf-8") as shapes_handle: - shapes = json.load(shapes_handle) - offsets = np.cumsum([np.prod(shape) for shape in shapes]) - init_params = [np.load(openai_checkpoint_folder_path + f"/params_{n}.npy") for n in range(10)] - init_params = np.split(np.concatenate(init_params, 0), offsets)[:-1] - init_params = [param.reshape(shape) for param, shape in zip(init_params, shapes)] - - # This was used when we had a single embedding matrix for positions and tokens - # init_params[0] = np.concatenate([init_params[1], init_params[0]], 0) - # del init_params[1] - init_params = [arr.squeeze() for arr in init_params] - - # Check that the token and position embeddings weight dimensions map those of the init parameters. 
- if model.tokens_embed.weight.shape != init_params[1].shape: - raise ValueError( - f"tokens_embed.weight.shape: {model.tokens_embed.weight.shape} does not match init_param[1].shape:" - f" {init_params[1].shape}" - ) - - if model.positions_embed.weight.shape != init_params[0].shape: - raise ValueError( - f"positions_embed.weight.shape: {model.positions_embed.weight.shape} does not match init_param[0].shape:" - f" {init_params[0].shape}" - ) - - model.tokens_embed.weight.data = torch.from_numpy(init_params[1]) - model.positions_embed.weight.data = torch.from_numpy(init_params[0]) - names.pop(0) - # Pop position and token embedding arrays - init_params.pop(0) - init_params.pop(0) - - for name, array in zip(names, init_params): # names[1:n_transfer], init_params[1:n_transfer]): - name = name[6:] # skip "model/" - if name[-2:] != ":0": - raise ValueError(f"Layer {name} does not end with :0") - name = name[:-2] - name = name.split("/") - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+\d+", m_name): - scope_names = re.split(r"(\d+)", m_name) - else: - scope_names = [m_name] - if scope_names[0] == "g": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "b": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "w": - pointer = getattr(pointer, "weight") - else: - pointer = getattr(pointer, scope_names[0]) - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - - # Ensure that the pointer and array have compatible shapes. - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - - logger.info(f"Initialize PyTorch weight {name}") - pointer.data = torch.from_numpy(array) - return model - - -ACT_FNS = {"relu": nn.ReLU(), "silu": silu, "gelu": gelu_new, "swish": silu} - - -class Attention(nn.Module): - def __init__(self, nx, n_positions, config, scale=False): - super().__init__() - n_state = nx # in Attention: n_state=768 (nx=n_embd) - # [switch nx => n_state from Block to Attention to keep identical to TF implementation] - if n_state % config.n_head != 0: - raise ValueError(f"Attention n_state shape: {n_state} must be divisible by config.n_head {config.n_head}") - self.register_buffer( - "bias", - torch.tril(torch.ones(n_positions, n_positions)).view(1, 1, n_positions, n_positions), - persistent=False, - ) - self.n_head = config.n_head - self.split_size = n_state - self.scale = scale - - self.c_attn = Conv1D(n_state * 3, nx) - self.c_proj = Conv1D(n_state, nx) - self.attn_dropout = nn.Dropout(config.attn_pdrop) - self.resid_dropout = nn.Dropout(config.resid_pdrop) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.n_head, self.split_size // self.n_head, self.pruned_heads - ) - index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)]) - # Prune conv1d layers - self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1) - self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0) - # Update hyper params - self.split_size = (self.split_size // self.n_head) * (self.n_head - len(heads)) - self.n_head = self.n_head - len(heads) - self.pruned_heads = self.pruned_heads.union(heads) - - def _attn(self, q, k, v, attention_mask=None, head_mask=None, output_attentions=False): - w = torch.matmul(q, k) - if self.scale: - w = w / math.sqrt(v.size(-1)) - # w = w * self.bias + -1e9 * (1 - self.bias) # TF implementation method: 
mask_attn_weights - # XD: self.b may be larger than w, so we need to crop it - b = self.bias[:, :, : w.size(-2), : w.size(-1)] - w = w * b + -1e4 * (1 - b) - - if attention_mask is not None: - # Apply the attention mask - w = w + attention_mask - - w = nn.functional.softmax(w, dim=-1) - w = self.attn_dropout(w) - - # Mask heads if we want to - if head_mask is not None: - w = w * head_mask - - outputs = [torch.matmul(w, v)] - if output_attentions: - outputs.append(w) - return outputs - - def merge_heads(self, x): - x = x.permute(0, 2, 1, 3).contiguous() - new_x_shape = x.size()[:-2] + (x.size(-2) * x.size(-1),) - return x.view(*new_x_shape) # in Tensorflow implementation: fct merge_states - - def split_heads(self, x, k=False): - new_x_shape = x.size()[:-1] + (self.n_head, x.size(-1) // self.n_head) - x = x.view(*new_x_shape) # in Tensorflow implementation: fct split_states - if k: - return x.permute(0, 2, 3, 1) - else: - return x.permute(0, 2, 1, 3) - - def forward(self, x, attention_mask=None, head_mask=None, output_attentions=False): - x = self.c_attn(x) - query, key, value = x.split(self.split_size, dim=2) - query = self.split_heads(query) - key = self.split_heads(key, k=True) - value = self.split_heads(value) - - attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions) - a = attn_outputs[0] - - a = self.merge_heads(a) - a = self.c_proj(a) - a = self.resid_dropout(a) - - outputs = [a] + attn_outputs[1:] - return outputs # a, (attentions) - - -class MLP(nn.Module): - def __init__(self, n_state, config): # in MLP: n_state=3072 (4 * n_embd) - super().__init__() - nx = config.n_embd - self.c_fc = Conv1D(n_state, nx) - self.c_proj = Conv1D(nx, n_state) - self.act = ACT_FNS[config.afn] - self.dropout = nn.Dropout(config.resid_pdrop) - - def forward(self, x): - h = self.act(self.c_fc(x)) - h2 = self.c_proj(h) - return self.dropout(h2) - - -class Block(nn.Module): - def __init__(self, n_positions, config, scale=False): - super().__init__() - nx = config.n_embd - self.attn = Attention(nx, n_positions, config, scale) - self.ln_1 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon) - self.mlp = MLP(4 * nx, config) - self.ln_2 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon) - - def forward(self, x, attention_mask=None, head_mask=None, output_attentions=False): - attn_outputs = self.attn( - x, - attention_mask=attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - ) - a = attn_outputs[0] - - n = self.ln_1(x + a) - m = self.mlp(n) - h = self.ln_2(n + m) - - outputs = [h] + attn_outputs[1:] - return outputs - - -class OpenAIGPTPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = OpenAIGPTConfig - load_tf_weights = load_tf_weights_in_openai_gpt - base_model_prefix = "transformer" - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear, Conv1D)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -@dataclass -class OpenAIGPTDoubleHeadsModelOutput(ModelOutput): - """ - Base class for outputs of models predicting if two sentences are consecutive or not. - - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Language modeling loss. - mc_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `mc_labels` is provided): - Multiple choice classification loss. - logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - mc_logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`): - Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[torch.FloatTensor] = None - mc_loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - mc_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -OPENAI_GPT_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`OpenAIGPTConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - -OPENAI_GPT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.", - OPENAI_GPT_START_DOCSTRING, -) -class OpenAIGPTModel(OpenAIGPTPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.tokens_embed = nn.Embedding(config.vocab_size, config.n_embd) - self.positions_embed = nn.Embedding(config.n_positions, config.n_embd) - self.drop = nn.Dropout(config.embd_pdrop) - self.h = nn.ModuleList([Block(config.n_positions, config, scale=True) for _ in range(config.n_layer)]) - - self.register_buffer("position_ids", torch.arange(config.n_positions), persistent=False) - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.tokens_embed - - def set_input_embeddings(self, new_embeddings): - self.tokens_embed = new_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} - """ - for layer, heads in heads_to_prune.items(): - self.h[layer].attn.prune_heads(heads) - - @add_start_docstrings_to_model_forward(OPENAI_GPT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BaseModelOutput]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if position_ids is None: - # Code is different from when we had a single embedding matrix from position and token embeddings - position_ids = self.position_ids[None, : input_shape[-1]] - - # Attention mask. - if attention_mask is not None: - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and the dtype's smallest value for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
- attention_mask = attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility - attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min - - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.tokens_embed(input_ids) - position_embeds = self.positions_embed(position_ids) - if token_type_ids is not None: - token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) - token_type_embeds = self.tokens_embed(token_type_ids) - else: - token_type_embeds = 0 - hidden_states = inputs_embeds + position_embeds + token_type_embeds - hidden_states = self.drop(hidden_states) - - output_shape = input_shape + (hidden_states.size(-1),) - - all_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - for i, block in enumerate(self.h): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - outputs = block(hidden_states, attention_mask, head_mask[i], output_attentions=output_attentions) - hidden_states = outputs[0] - if output_attentions: - all_attentions = all_attentions + (outputs[1],) - - hidden_states = hidden_states.view(*output_shape) - # Add last layer - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - - return BaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_attentions, - ) - - -@add_start_docstrings( - """ - OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - OPENAI_GPT_START_DOCSTRING, -) -class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.transformer = OpenAIGPTModel(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - @add_start_docstrings_to_model_forward(OPENAI_GPT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], CausalLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - lm_logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutput( - loss=loss, - logits=lm_logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]: - return {"input_ids": input_ids} - - -@add_start_docstrings( - """ -OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top e.g. for -RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the -input embeddings, the classification head takes as input the input of a specified classification token index in the -input sequence). -""", - OPENAI_GPT_START_DOCSTRING, -) -class OpenAIGPTDoubleHeadsModel(OpenAIGPTPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - - config.num_labels = 1 - self.transformer = OpenAIGPTModel(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - self.multiple_choice_head = SequenceSummary(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - @add_start_docstrings_to_model_forward(OPENAI_GPT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=OpenAIGPTDoubleHeadsModelOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - mc_token_ids: Optional[torch.LongTensor] = None, - labels: Optional[torch.LongTensor] = None, - mc_labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], OpenAIGPTDoubleHeadsModelOutput]: - r""" - mc_token_ids (`torch.LongTensor` of shape `(batch_size, num_choices)`, *optional*, default to index of the last token of the input): - Index of the classification token in each input sequence. 
Selected in the range `[0, input_ids.size(-1) - - 1]`. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids` Indices are selected in `[-1, 0, ..., config.vocab_size]` All labels set to `-100` are - ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - mc_labels (`torch.LongTensor` of shape `(batch_size)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where *num_choices* is the size of the second dimension of the input tensors. (see *input_ids* above) - - Return: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, OpenAIGPTDoubleHeadsModel - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") - >>> model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt") - >>> tokenizer.add_special_tokens( - ... {"cls_token": "[CLS]"} - ... ) # Add a [CLS] to the vocabulary (we should train it also!) - >>> model.resize_token_embeddings(len(tokenizer)) - - >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] - >>> input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0) # Batch size 1, 2 choices - >>> mc_token_ids = torch.tensor([input_ids.size(-1) - 1, input_ids.size(-1) - 1]).unsqueeze(0) # Batch size 1 - - >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) - >>> lm_logits = outputs.logits - >>> mc_logits = outputs.mc_logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - lm_logits = self.lm_head(hidden_states) - mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1) - - lm_loss, mc_loss = None, None - if mc_labels is not None: - loss_fct = CrossEntropyLoss() - mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1)) - if labels is not None: - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits, mc_logits) + transformer_outputs[1:] - if mc_loss is not None: - output = (mc_loss,) + output - return ((lm_loss,) + output) if lm_loss is not None else output - - return OpenAIGPTDoubleHeadsModelOutput( - loss=lm_loss, - mc_loss=mc_loss, - logits=lm_logits, - mc_logits=mc_logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - The Original OpenAI GPT Model transformer with a sequence classification head on top (linear layer). - [`OpenAIGPTForSequenceClassification`] uses the last token in order to do the classification, as other causal - models (e.g. GPT-2) do. Since it does classification on the last token, it requires to know the position of the - last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding - token in each row. 
If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since - it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take - the last value in each row of the batch). - """, - OPENAI_GPT_START_DOCSTRING, -) -class OpenAIGPTForSequenceClassification(OpenAIGPTPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = OpenAIGPTModel(config) - self.score = nn.Linear(config.n_embd, self.num_labels, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(OPENAI_GPT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size, sequence_length = input_ids.shape[:2] - else: - batch_size, sequence_length = inputs_embeds.shape[:2] - - # Ensure the batch size is > 1 if there is no padding. - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to( - logits.device - ) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - - pooled_logits = logits[range(batch_size), sequence_lengths] - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=pooled_logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/app.py b/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/app.py deleted file mode 100644 index af7e800ed13e4616bed6afcd8e3f52e8a610d77b..0000000000000000000000000000000000000000 --- a/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import gradio as gr - -from diffusion_webui.helpers import ( - keras_stable_diffusion_app, - stable_diffusion_controlnet_canny_app, - stable_diffusion_controlnet_depth_app, - stable_diffusion_controlnet_hed_app, - stable_diffusion_controlnet_mlsd_app, - stable_diffusion_controlnet_pose_app, - stable_diffusion_controlnet_scribble_app, - stable_diffusion_controlnet_seg_app, - stable_diffusion_img2img_app, - stable_diffusion_inpaint_app, - stable_diffusion_text2img_app, -) - -app = gr.Blocks() -with app: - gr.HTML( - """ -

- Stable Diffusion + ControlNet + Keras Diffusion WebUI
- """ - ) - gr.Markdown( - """ -

- Follow me for more!
- Twitter | Github | Linkedin
- """ - ) - with gr.Row(): - with gr.Column(): - with gr.Tab("Text2Img"): - stable_diffusion_text2img_app() - with gr.Tab("Img2Img"): - stable_diffusion_img2img_app() - with gr.Tab("Inpaint"): - stable_diffusion_inpaint_app() - - with gr.Tab("ControlNet"): - with gr.Tab("Canny"): - stable_diffusion_controlnet_canny_app() - with gr.Tab("Depth"): - stable_diffusion_controlnet_depth_app() - with gr.Tab("HED"): - stable_diffusion_controlnet_hed_app() - with gr.Tab("MLSD"): - stable_diffusion_controlnet_mlsd_app() - with gr.Tab("Pose"): - stable_diffusion_controlnet_pose_app() - with gr.Tab("Seg"): - stable_diffusion_controlnet_seg_app() - with gr.Tab("Scribble"): - stable_diffusion_controlnet_scribble_app() - - with gr.Tab("Keras Diffusion"): - keras_diffusion_app = keras_stable_diffusion_app() - -app.launch(debug=True) diff --git a/spaces/yukiarimo/Uta-AI/app.py b/spaces/yukiarimo/Uta-AI/app.py deleted file mode 100644 index 5eccb5bf78fb8aebd144cc1243f52f11d0f08e0a..0000000000000000000000000000000000000000 --- a/spaces/yukiarimo/Uta-AI/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from transformers import pipeline - -summarizer = pipeline("summarization", model="yukiarimo/Uta-AI") - -def summarize_conversation(conversation): - summary = summarizer(conversation) - return summary[0]['summary_text'] - -dialogue_input = gr.inputs.Textbox(lines=5) - -dialogue_output = gr.outputs.Textbox(label="Uta AI Lyrics") - -def get_summary(conversation): - return summarize_conversation(conversation) - -app = gr.Interface(fn=get_summary, inputs=dialogue_input, outputs=dialogue_output, title="Uta AI", description="Enter an idea and get lyrics", - layout="vertical", theme="compact") - -if __name__ == '__main__': - app.launch() diff --git a/spaces/yyyyulia/7390_nlp_interactive_v2/README.md b/spaces/yyyyulia/7390_nlp_interactive_v2/README.md deleted file mode 100644 index ac44fecf693ff8bcaae55aa94fa42b1d59864d66..0000000000000000000000000000000000000000 --- a/spaces/yyyyulia/7390_nlp_interactive_v2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 7390 Nlp Interactive V2 -emoji: 🦀 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zeno-ml/translation-report/gpt-MT/tools/doc_score.py b/spaces/zeno-ml/translation-report/gpt-MT/tools/doc_score.py deleted file mode 100644 index 3ce885d3d4cb0b9e37b703ecbdaa13aaed0c0a2c..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/gpt-MT/tools/doc_score.py +++ /dev/null @@ -1,179 +0,0 @@ -import numpy as np -import argparse -import json -import os - -from comet import download_model, load_from_checkpoint -from transformers import AutoTokenizer - -COMET_REF_MODELS = ["wmt20-comet-da", "wmt21-comet-mqm", "wmt22-comet-da"] -COMET_SRC_MODELS = ["wmt20-comet-qe-da", "wmt21-comet-qe-mqm", "wmt22-cometkiwi-da"] - -os.environ['TOKENIZERS_PARALLELISM'] = 'false' -tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-large") - -def _is_doc_boundary(doc_ids, idx): - after_idx = min(len(doc_ids) - 1, idx + 1) - return (not doc_ids[after_idx] == doc_ids[idx]) or (idx == len(doc_ids) - 1) - -def _build_context(doc, current_idx, context_window, start_left=True): - balance = context_window - low = current_idx if start_left else max([0, current_idx - (context_window // 2)]) - balance -= (current_idx - low) - high = min([len(doc), current_idx + 
balance]) - balance -= (high - current_idx) - low = max([0, low - balance]) - pos = current_idx - low - return doc[low:high], pos - -def _check_max_tokens(src_context, mt_context, ref_context=None, max_tokens=512): - src = " ".join(src_context).strip() - mt = " ".join(mt_context).strip() - if ref_context: - ref = " ".join(ref_context).strip() - full_input = tokenizer(" ".join([src, mt, ref])).input_ids - else: - full_input = tokenizer(" ".join([src, mt])).input_ids - return len(full_input) < max_tokens - -def _calculate_doc_comet(args, model, src_docs, hyp_docs, ref_docs=None): - scores, doc_lengths = [], [] - if ref_docs: - for s, h, r in zip(src_docs, hyp_docs, ref_docs): - data_for_eval = [] - # Check if the doc has length shorter than the context length - if len(s) <= args.context_length: - data_for_eval.append({"src": " ".join(s).strip(), "mt": " ".join(h).strip(), "ref": " ".join(r).strip()}) - else: - prev_context_src, prev_context_mt, prev_context_ref = [], [], [] - for i in range(len(s)): - src_context, _ = _build_context(s, i, args.context_length) - mt_context, _ = _build_context(h, i, args.context_length) - ref_context, _ = _build_context(r, i, args.context_length) - - # Ensure max_tokens is respected - reduce = 1 - while (not _check_max_tokens(src_context, mt_context, ref_context=ref_context)) and (args.context_length - reduce > 1): - src_context, _ = _build_context(s, i, args.context_length - reduce) - mt_context, _ = _build_context(h, i, args.context_length - reduce) - ref_context, _ = _build_context(r, i, args.context_length - reduce) - reduce += 1 - - # Ensure same context is not evaluated twice - if not src_context == prev_context_src and not mt_context == prev_context_mt and not ref_context == prev_context_ref: - src, mt, ref = " ".join(src_context).strip(), " ".join(mt_context).strip(), " ".join(ref_context).strip() - data_for_eval.append({ - "src": src, "mt": mt, "ref": ref - }) - prev_context_src, prev_context_mt, prev_context_ref = src_context, mt_context, ref_context - - # Compute the score - pred = model.predict(data_for_eval, batch_size=8, gpus=1) - scores.append(pred.system_score) - doc_lengths.append(len(s)) - - else: - for s, h in zip(src_docs, hyp_docs): - data_for_eval = [] - # Check if the doc has length shorter than the context length - if len(s) <= args.context_length: - data_for_eval.append({"src": " ".join(s).strip(), "mt": " ".join(h).strip()}) - else: - prev_context_src, prev_context_mt = [], [] - for i in range(len(s)): - src_context, _ = _build_context(s, i, args.context_length) - mt_context, _ = _build_context(h, i, args.context_length) - - # Ensure max_tokens is respected - reduce = 1 - while (not _check_max_tokens(src_context, mt_context)) and (args.context_length - reduce > 1): - src_context, _ = _build_context(s, i, args.context_length - reduce) - mt_context, _ = _build_context(h, i, args.context_length - reduce) - reduce += 1 - - # Ensure same context is not evaluated twice - if not src_context == prev_context_src and not mt_context == prev_context_mt: - src, mt = " ".join(src_context).strip(), " ".join(mt_context).strip() - data_for_eval.append({ - "src": src, "mt": mt - }) - prev_context_src, prev_context_mt = src_context, mt_context - - # Compute the score - pred = model.predict(data_for_eval, batch_size=8, gpus=1) - scores.append(pred.system_score) # type: ignore - doc_lengths.append(len(s)) - - return scores, doc_lengths - -def _load_data(args): - with open(args.sources_file, 'r') as src_file, open(args.hypotheses_file, 'r') as hyp_file, 
open(args.docids_file, 'r') as docids_file: - sources = src_file.readlines() - hypotheses = hyp_file.readlines() - docids = docids_file.readlines() - - src_docs, hyp_docs, ref_docs = [], [], None - current_src_doc, current_hyp_doc = [], [] - i = 0 - while i < len(docids): - current_src_doc.append(sources[i].strip()) - current_hyp_doc.append(hypotheses[i].strip()) - if _is_doc_boundary(docids, i): - src_docs.append(current_src_doc) - hyp_docs.append(current_hyp_doc) - current_src_doc, current_hyp_doc = [], [] - i += 1 - - if args.references_file: - # Load reference files - with open(args.references_file, 'r') as ref_file: - references = ref_file.readlines() - ref_docs = [] - current_ref_doc = [] - i = 0 - while i < len(docids): - current_ref_doc.append(references[i].strip()) - if _is_doc_boundary(docids, i): - ref_docs.append(current_ref_doc) - current_ref_doc = [] - i += 1 - - return src_docs, hyp_docs, ref_docs - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument('--sources-file', '-src', type=str, required=True, help='A path to the source file') - parser.add_argument('--hypotheses-file', '-hyp', type=str, required=True, help='A path to the model output file') - parser.add_argument('--references-file', '-ref', type=str, required=False, help='A path to the reference file') - parser.add_argument('--docids-file', '-doc', type=str, required=True, help='A path to the doc-ids file') - parser.add_argument('--model', type=str, required=True, help='The COMET model name used for automatic evaluation') - parser.add_argument('--sliding-window', type=int, required=False, default=1, help='The stride step over document') - parser.add_argument('--context-length', type=int, required=False, default=4, help='The number of sentences in a single context') - args = parser.parse_args() - - comet_model_path = download_model(args.model) - model = load_from_checkpoint(comet_model_path) - - if args.references_file: - assert args.model in COMET_REF_MODELS, f"Reference files should not be passed for evaluating {COMET_SRC_MODELS}" - else: - assert args.model not in COMET_REF_MODELS, f"Reference files are required for evaluating {COMET_REF_MODELS}" - - src_docs, mt_docs, ref_docs = _load_data(args) - scores, _ = _calculate_doc_comet(args, model, src_docs, mt_docs, ref_docs) - - ret = { - 'model': args.model, - 'sources_file': args.sources_file, - 'mt_file': args.hypotheses_file, - 'sliding_window': args.sliding_window, - 'context_length': args.context_length, - 'score': np.mean(scores) - } - - print(json.dumps(ret, indent=2)) - - -if __name__ == "__main__": - main() diff --git a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} 
- -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc
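
For orientation, here is a minimal usage sketch of the `ipc::shm::handle` class whose implementation appears in the deleted `shm.cpp` above. This is not code from the repository: it assumes `libipc/shm.h` declares the members visible in the implementation (a name/size/mode constructor, `valid()`, `get()`, `size()`, `name()`, `ref()`, `release()`) and that the header supplies a default open mode, since the mode flags themselves are defined in the header and are not part of this diff.

```cpp
#include <cstdio>
#include <cstring>

#include "libipc/shm.h"  // declares ipc::shm::handle (implementation shown above)

int main() {
    // Acquire (create or open) a named shared-memory segment of 1 KiB.
    // Assumption: shm.h gives the third `mode` parameter a default; otherwise
    // pass one of the mode flags it defines.
    ipc::shm::handle shm("demo-segment", 1024);
    if (!shm.valid()) {
        std::fprintf(stderr, "failed to acquire shared memory\n");
        return 1;
    }

    // get() returns the mapped memory; size() reports the mapped length.
    std::memset(shm.get(), 0, shm.size());
    std::printf("segment '%s' mapped, %zu bytes, ref count %d\n",
                shm.name(), shm.size(), static_cast<int>(shm.ref()));

    // release() detaches and frees the underlying id; the destructor shown
    // above would otherwise call release() automatically.
    shm.release();
    return 0;
}
```

Note the copy-and-swap pattern in the implementation: `operator=` takes its argument by value and swaps the pimpl pointers, so self-assignment and move-assignment are handled by the same code path.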